Hi everyone, welcome back to another video on our channel, Neural Hacks with Vasanth.
So in this video we'll be seeing how to create your own local LLM.
Okay, so we have ChatGPT, right? We'll be having a similar interface to ChatGPT.
You know, you can ask your questions by loading an LLM of your own. If you don't know, we have already fine-tuned Llama 2 on our own, and even that model should be usable with this framework. Unfortunately I don't have enough GPU to load that kind of heavy model, so since this is just for teaching purposes I have loaded a small model. You can also try some bigger models; I tried Flan-T5 Large and it did work. Here I'm going to show you a coding model: there is a 1-billion-parameter version of SantaCoder. Just give me a minute, I'm going to open it. SantaCoder 1B: thanks to TabbyML, we have taken the model from that organization. So we'll be using this model, and a disclaimer: we are quantizing it and then running it, so we can't expect great performance here. Like I said, this is just for teaching purposes.

First I'll show you what we'll have as our end result in this video, and then I'll go through the steps for you all to achieve it. We won't be doing the setup live in this video, because it took a lot of time, but I'll describe in detail what you need to do.

So let's see, it is trying to find an answer... and yep, finally it has generated. You can see we also get chat history, guys; I'll show you how that is possible when we go through the setup. For a quantized model running in a very low-compute environment, it did okay. It was able to generate: if n % 2 == 0, return "even"; if n % 2 != 0, return "odd". So that is fine.

What I'm going to do now is close this entirely and show you how to run it from the start. I'll show you VS Code; let me stop all of these (this is fine, and this is also fine, it will just terminate). First I'll create a to-do file for you all here: todo.txt. I'm running this on Windows, but if you look at the repo, this usually runs on Linux, so the best way is to install it via Docker.

This is a library called text-generation-inference, which is created by Hugging Face themselves. It provides you an efficient way to do inference; they have optimized it. You can see the optimized architectures: FLAN-T5 is there, Galactica, Llama, Llama v2, MPT, Falcon, StarCoder, SantaCoder, every model is there. And you can also load any other model of your own: as you can see, other architectures are supported on a best-effort basis using AutoModelForCausalLM or AutoModelForSeq2SeqLM.
So any of these generation models can be used. And these are some running scripts; I'll sketch the kind of launch the README shows right below.
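Roughly, that quickstart looks like this. This is a minimal sketch from memory, so treat the image tag and the example model as assumptions and check the repo for the current versions:

```bash
# Generic text-generation-inference launch in the style of the repo's README
# (sketch; falcon-7b-instruct is just the README's example model, and the
# :latest tag is an assumption, so pin whatever version you prefer).
model=tiiuae/falcon-7b-instruct
volume=$PWD/data        # downloaded weights get cached here
docker run --gpus all --shm-size 1g -p 8080:80 \
    -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id $model
```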
But the main thing to note is that to use GPUs, you need to install the NVIDIA Container Toolkit; I've put a sketch of the install just below.
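You will actually do this only after the CUDA setup I describe next, but since the README brings it up here, this is a sketch of the usual install on Ubuntu inside WSL2 (it assumes NVIDIA's apt repository has already been configured as per their guide):

```bash
# Sketch of the usual NVIDIA Container Toolkit install on Ubuntu/WSL2.
# Assumes NVIDIA's apt repository has already been added per their install guide.
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker   # register the runtime with Docker
sudo systemctl restart docker                        # or restart Docker Desktop, whichever you use
```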
They also recommend using NVIDIA drivers with CUDA version 11.8 or higher. Okay, so first you need to set up your system with... just a minute.
Sorry, I was sharing the wrong screen. So like I said, here is text-generation-inference, and you are seeing that they use a Rust backend. Here are the optimized architectures, and they are supporting other models also, like I said, by using those AutoModel methods, so you can also load any of your own models if they are on Hugging Face.

Like I said, you need to install the NVIDIA Container Toolkit, but before that you need to set up the NVIDIA CUDA Toolkit. For me, I'm using an RTX 3070 Ti, so CUDA Toolkit version 11.8 is supported; that can be installed. Once that is done, you need to install the cuDNN library as well. Basically, you need to set up your system in such a way that if you run torch.cuda.is_available() it returns True: you can just write import torch and then print(torch.cuda.is_available()). I'll show you now, and I'll also give you some links on how to install it.

So this is an environment; if I do python here, import torch (it will take some time), then torch.cuda.is_available(). If it returns True, then yeah, the first step is done. For this I'll just write it in the to-do as: ensure CUDA is set up locally. A minimal sketch of that check is just below.
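A minimal sketch of that check, assuming the driver and the CUDA toolkit are already in place:

```bash
# Quick sanity check of the GPU stack before anything else.
nvidia-smi         # the driver should list your GPU (an RTX 3070 Ti in my case)
nvcc --version     # CUDA toolkit version, 11.8 or higher
python -c "import torch; print(torch.cuda.is_available())"   # must print True
```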
Okay, for this you need to have the CUDA Toolkit installed.
Okay, and then you need cuDNN, and then you'll also need to install Visual Studio Community. These all need to be installed; that is very important.

If that is done, the next step would be to install the NVIDIA Container Toolkit, because we'll be running everything in Docker. Without Docker you can also do it, but the installation will be so hard: you need to install Rust, you need protoc (so you need to install protobuf), and all of that. With Docker it is very simple to execute. You might think, doesn't Docker not run on Windows? That's why we'll be using WSL, the Windows Subsystem for Linux. Now I'm directly going to refer you to a video; just give me a minute. Here it is: this is the video I used for WSL 2. It's a great video, just refer to it, and if you do that you will be done with the pre-setup, meaning everything that is needed for this text-generation-inference library.

After that, what you need to do is open your WSL shell. In the README they have given falcon-7b-instruct; rather than falcon-7b-instruct, we'll be using TabbyML's SantaCoder 1B. You can just think about how to pick this model name: whichever model suits your GPU. For example, I only have 8 gigs of GPU VRAM, so I took a very small model, and even this one was slightly big: it was about 2.25 GB, and once I loaded it with quantization it still took about 5 GB. So be careful when you choose your model.

Once that is done, you can just execute here. Then you need to have a volume where the data can be stored; we are giving a directory there as well. Once that is also done, just give me a minute... we were executing it from here. So here we have this command: docker run --gpus all --shm-size 1g. I also have this command in VS Code itself, so let's see that now.

Here we are running Docker, and I'm running the sudo command because it requires it; without sudo you can't execute it in WSL. So: sudo docker run --gpus all --shm-size 1g, the port is 5000, we have provided the volume as well, and we are providing the model here (the model id is $model, which we have already defined), and we are providing a value for --quantize. For quantize there are two values, bitsandbytes and gptq; it supports both of these, and we are going for bitsandbytes. We'll just execute this, it is asking for my password, I'm entering it, and it will take some time, so let's leave that for now.

Then, for the execution here, text generation inference, there are two commands to execute. We'll be needing a model, right? So model=<your-model>; I'll just keep it as your-model here. The full command is sketched right below.
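Here is the command I am describing, as a sketch. The model id is the TabbyML SantaCoder repo as I recall it, so double-check the exact name on the Hub, and adjust the volume path and the port to your setup:

```bash
# The backend launch described above (sketch).
model=TabbyML/SantaCoder-1B      # pick whatever fits your VRAM
volume=$PWD/data                 # weights cache lives here
sudo docker run --gpus all --shm-size 1g -p 5000:80 \
    -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id $model \
    --quantize bitsandbytes      # the other accepted value is gptq
```

If the container starts cleanly, the API will be listening on http://localhost:5000.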
You can just provide your model. And then the volume, you can just have it like this itself, followed by this... sorry, you need to run that docker run command; I'll add sudo here. And you can have any port here: it was given as 8080 by default in the GitHub repo, but you can also have your own port. For example, I'm running it on 5000; you can also run it on 3000 or 2000. And if it is required, have this as well: --quantize bitsandbytes. So these are all there, and once this is done you will have an execution like this. It will take some time; you may think, how long should I wait? It will take some time based on your computation.

So next we need to execute the chat UI. This text-generation-inference is kind of your backend, and chat-ui will be your frontend. For chat-ui you first need MongoDB; a MongoDB instance is needed. But if it is not set up, don't worry, I'm now going to give you a command which is available in their repo itself, the Hugging Face chat-ui repo. For now I'll just have these links in your to-do file as well. For the execution of MongoDB, you can just execute this itself. Let's see how to execute this command; I'm just shifting into VS Code now. See guys, I didn't even change anything, I'm adding sudo again: mongo-chatui. I have already executed it, so I will get some error like "it is already running" or something like that, but you won't get anything; I'm already running it, so I will get that, but you won't. Once this is done, you're all set to go.

The next step: maybe we'll have these as steps as well, step 3, text-generation-inference setup, and then the chat-ui prerequisites. Why I'm keeping it like that, I'll show you. Again, I'm just going to copy this and have it here. This time we'll be setting up chat-ui itself on our local machine. Once that is decided, the first thing is doing a git clone: git clone, and I'll copy the repo URL (I hope you all know how to do that). You can just execute it and it will create a folder like this; except todo.txt, everything will be there, and you might also not find node_modules yet.

Once this is done, next you need to create a file called .env.local. For this, a sample of what you would need: you need to give a MongoDB URL. We are running it on port 27017, you saw it previously, so we can just have it like this: host and port number, the host URL and the port, with mongodb:// as the prefix. Then you need to have a MODELS list. I'll give you this complete thing here; I'll maybe even provide the .env.local file itself, so that you won't face any errors.

Now, is it running? Yes, now it is running. So finally you'll have something like this: we are running it on port 5000 right now, so we have it as 5000 here, and if you run it on any other port, 3000, 2000, whatever you are executing on, you should change it here. And you can provide your model parameters here. So this is your .env.local file. The next step will be to create this .env.local file, and the contents I'll keep in our GitHub repository. A sketch of these prerequisite steps follows.
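A sketch of those prerequisite steps. The MongoDB command and the repo URL follow the chat-ui README; the MODELS entry is only my rough recollection of the schema, so compare it with the file I have shared before relying on it:

```bash
# MongoDB for the chat history, on the default 27017 port (command as in the
# chat-ui README).
sudo docker run -d -p 27017:27017 --name mongo-chatui mongo:latest

# Clone the chat-ui frontend and drop a minimal .env.local into it. The MODELS
# entry below is only an illustration of the schema as I remember it.
git clone https://github.com/huggingface/chat-ui.git
cd chat-ui
cat > .env.local <<'EOF'
MONGODB_URL=mongodb://localhost:27017
MODELS=`[
  {
    "name": "TabbyML/SantaCoder-1B",
    "endpoints": [{ "url": "http://127.0.0.1:5000" }],
    "parameters": { "temperature": 0.2, "max_new_tokens": 256 }
  }
]`
EOF
```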
So guys, I've also provided you the file: whatever you need to paste there, I have just given you here. Just copy this, create a file called .env.local, and paste the above contents. Copy it exactly as it is; the indentation is very important. I had some errors regarding that, and that's why I'm just directly giving you the file with whatever needs to be inside. So once that is done, paste the above content inside the .env.local file.

Once this is done, the next step would be for you to install npm. For this I'll again provide you some commands, give me a minute. Yes, so, npm install: we'll be doing it through this library, nvm, the Node Version Manager. We are using version v0.39.4, so first you need to execute this command. Since we are on Linux, this is very easy. So that is the second one for you to execute, and once that is done you can just execute nvm install node; that is here, nvm install node. And once it is done, you can just check your version, npm --version or node --version, and you will see which version of npm and node you are using. So that will do your npm installation; I've sketched the commands just below.
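These are the commands I am referring to, sketched out. v0.39.4 is the nvm release I used; a newer release should behave the same way:

```bash
# Install nvm (Node Version Manager), then Node.js itself.
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.4/install.sh | bash
# Open a new shell (or source ~/.nvm/nvm.sh) so the nvm command is on your PATH.
nvm install node    # latest Node.js, which brings npm along with it
node --version
npm --version
```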
So once that is done, and you have executed your text-generation-inference backend and MongoDB is set, you can just open a bash shell, or even your WSL terminal, and execute this command.
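Concretely, from inside the cloned chat-ui folder, it is just:

```bash
cd chat-ui
npm install    # the first run downloads every package, so it takes a while
npm run dev    # the dev server comes up on http://localhost:5173 by default
```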
Okay, so: npm install and npm run dev. For you it will take some time; because I've already installed my npm packages it will come faster for me, but for you guys it will take a while, so you need not worry that it is taking so much time.

So now we'll execute this, but before moving into that I would also like to show you the text-generation-inference backend we have running. We are running it on 5000, right? So go to localhost:5000/docs. Now you can see this text-generation-inference is there, and you can just use it: if you go to /generate you can provide some example inputs from here as well. Or, if you find that you're not going to use this chat UI and you're going to have it in some different frontend, such as maybe Streamlit, then you can pip install text-generation and execute that client code; it will still provide you with the same output. So that is text-generation-inference.

Okay, now let's move on with our chat UI. I'm presenting VS Code right now; the shell will execute this, and it should take some time. Yep, it's now running; you need not worry about the vulnerabilities warnings and all, that is fine. So guys, if you see, we now have it available on our localhost at port 5173. I am now opening it in Chrome, and yep, now you can see the chat UI has come up. The interesting thing to note is that if you click, your history of chats will come up, and you can also rename them. Let's say I have executed the even/odd question; next time, if you check, it will be there as "even odd", similar to ChatGPT.

So yeah guys, like I said, just experiment with this with your setup. But, very important, you need a GPU server to have this kind of local ChatGPT. If you have something like 48 gigs of GPU VRAM, let's say an A6000, you will be able to execute any big model: when you quantize it, you can execute it. So yep, we have our own local chat LLM.

I hope you all liked this video, guys. That's it for this video, and yeah, I almost forgot to tell you, like I said I would tell you later: we are having MongoDB just to store this history. We have a history here, you will have some history here too, and as you can see, you can change the theme here, you can also go into the settings, you can create a new chat and then do something there as well; that is also fine.

So yeah guys, that's it for this video. Try to set it up, and if you face any issues just let me know in the comment section, or even in the Telegram group, and I'll be more than happy to help you out with your setup. I'll see you in the next video; until then, cheers!