ronan_0.wav|I'm going to walk you through 10 quick tips for fine-tuning.|1 ronan_1.wav|For each of those, I'll point you to one or two trellis videos on YouTube and also point you to the right branch if you're working out of the trellis advanced fine-tuning repository.|1 ronan_2.wav|Tip number one is to start with a small model.|1 ronan_3.wav|I recommend starting with something like Lama 3 8B or Fi 3 Mini, and the reason is because fine-tuning is about experimentation, and you want to be able to try many things quickly.|1 ronan_4.wav|If you start off with Lama 3 8 or 70B, it's going to take you much more time in order to test out what's working and what's not.|1 ronan_5.wav|You can always start small and scale up later.|1 ronan_6.wav|The video I recommend here is memorization.|1 ronan_7.wav|This one I use a relatively small model, as I do in many of my fine tuning tutorials, just because it's quicker to learn fast.|1 ronan_8.wav|Tip number two is to use LoRa or QLoRa.|1 ronan_9.wav|I don't recommend starting off with full fine-tuning for a few reasons.|1 ronan_10.wav|First of all, LoRa and QLoRa allow you to start with fewer GPUs or a smaller GPU.|1 ronan_11.wav|That's going to make iteration faster.|1 ronan_12.wav|But for small data sets, the performance might even be better than full fine-tuning because full fine-tuning can tend to overfit.|1 ronan_13.wav|So I'd recommend even if you eventually want to do full fine-tuning, start off with LoRa or QLoRa and try to get it working before you want to spend more on GPU rental and more of your time.|1 ronan_14.wav|The video here if you want to pick out the right LoRa parameters is a live stream on how to pick LoRa parameters.|1 ronan_15.wav|And if you're working out of the Trellis repo, you can check out the Unsloth branch for the fastest fine-tuning on a single GPU using LoRa or QLoRa.|1 ronan_16.wav|Tip number three is to create 10 manual test questions.|1 ronan_17.wav|So you want to create 10 question answer pairs and use those to choose which base model is going to perform best.|1 ronan_18.wav|So just by running those on different base models, you can see which one is going to give you the best baseline for starting off your fine tuning.|1 ronan_19.wav|Then, after you do any training run, you want to run that manual test and just evaluate whether the model is doing well.|1 ronan_20.wav|This gives you probably a better sense than solely looking at the eval and training loss during the fine-tuning process.|1 ronan_21.wav|This is what I do in this memorization video as well.|1 ronan_23.wav|And you'll see in the memorization scripts, how I allow you to set up this manual dataset.|1 ronan_24.wav|That's also possible in the unsloth branch and the multi GPU branch, which I'll get to later.|1 ronan_25.wav|Tip number four is to create datasets manually.|1 ronan_26.wav|Yes, I know this is a bit of work, but I think it's underrated.|1 ronan_27.wav|When you manually curate a dataset like I did for the trellis function calling dataset, it lets you appreciate exactly which rows of data are needed to get the performance that you need.|1 ronan_28.wav|You can, of course, use Python and chat GPT to help automate some of this and generate rows, but I think the manual touch does allow you a better understanding, which will allow you to get performance faster.|1 ronan_29.wav|Here, you can check out the function calling v3 branch and also the unslot and multi-GPU branches of the advanced fine-tuning repo.|1 ronan_30.wav|Tip number five is to start off training with a small number of rows.|1 ronan_31.wav|In fact, I always run training first with just one row of data to check that my training pipeline is working correctly and I don't run out of memory.|1 ronan_32.wav|Then I'll move to training on 100 rows, then 1,000.|1 ronan_33.wav|And I'm checking all the time whether my performance is actually improving or whether just my dataset design is completely off.|1 ronan_34.wav|If you do want to automate a little more how you generate synthetic data sets, you can check out this video here on data set preparation with LLMs.|1 ronan_35.wav|Tip number six is always use a validation data set.|1 ronan_36.wav|If you don't have one, you can just split off 10 to 20% of your training data set.|1 ronan_37.wav|You want to be checking your training loss as you progress along the process.|1 ronan_38.wav|Make sure it's not too bumpy and your learning rate is not too high or your batch size or virtual batch size is too small.|1 ronan_39.wav|You also want to check your validation loss, and this should be monotonically decreasing in a smooth way.|1 ronan_40.wav|If it's ever upticking, that means you might be overfitting and you're training for too many epochs, or you may not have enough data.|1 ronan_41.wav|Here, I recommend the Trellis repo branches of Unsloth or MultiGPU.|1 ronan_42.wav|They each allow you to split off validation, split from your base training set.|1 ronan_43.wav|This is something you can also do easily using Hugging Face datasets if you check out their documentation.|1 ronan_44.wav|Tip number seven is to try to start training on just one GPU.|1 ronan_45.wav|Again, this allows you to iterate faster.|1 ronan_48.wav|Also, on one GPU, you can use unsloth, which gives you a 2x speedup.|1 ronan_49.wav|So that's quite beneficial if you can just focus on keeping things simple until you've at least got a training approach that's working well, and you're happy to then spend the time and money to scale up.|1 ronan_50.wav|Something I should mention as well is that you can waste a lot of time with installations and getting stuck in getting set up for fine tuning.|1 ronan_51.wav|One way around that is to use an image or a template that sets up your CUDA and PyTorch to a specific version.|1 ronan_52.wav|I've got a one-click template here for RunPod, and you can use that to consistently have the same environment on which to install the final packages you need for fine tuning.|1 ronan_53.wav|Tip number eight is to use weights and biases.|1 ronan_54.wav|This is a tool that allows you to track the losses and the rewards as you move through your training run.|1 ronan_55.wav|You can include this in a script with pip install wandb, then set the environment variable for wandb project to a project name.|1 ronan_56.wav|And this will create a folder basically within which you can have multiple runs of run name.|1 ronan_57.wav|And the way you set the run name is in the training arguments by passing in the run name.|1 ronan_58.wav|Here you would set the run name like one epoch and constant scheduler or whatever you want to call it.|1 ronan_59.wav|And you also need to set up report to wand b weights and biases.|1 ronan_60.wav|This is supported in the Onslaught and the multi GPU branches and also in many of the Jupyter notebooks that are throughout all the branches of the advanced fine tuning repo.|1 ronan_61.wav|Before I move to tips 8 and 9, I want to comment on scaling up.|1 ronan_62.wav|So I've talked about starting with a low number of rows, starting with LoRa or QLoRa, and starting with a small model.|1 ronan_63.wav|Well, here's the order you want to scale up in.|1 ronan_64.wav|Start by increasing the rows of data on a small model, then move QLoRa to LoRa.|1 ronan_65.wav|If you really want to try full fine-tuning, test it out on a small model and see if it really improves performance.|1 ronan_66.wav|Then as a very last step, you can think about moving to a larger model where it's going to take more time and money to get in that final result.|1 ronan_68.wav|If you want to understand the pros and cons of full fine tuning versus QLORA or LORA, take a look at this video.|1 ronan_69.wav|And if you want to understand the complexities of doing multi GPU training, check out multi GPU fine tuning.|1 ronan_71.wav|Tip number nine is to use unsupervised fine-tuning.|1 ronan_72.wav|This can be useful if you have a large dataset.|1 ronan_73.wav|I'm gonna say larger than 10,000 rows of data.|1 ronan_74.wav|Here, you'll need to use Python scripts in order to clean up, say, repeated characters or too much new lines.|1 ronan_75.wav|You can also use language models in order to clean up the dataset chunk by chunk.|1 ronan_76.wav|The video of relevance here is the Wikipedia video I made, where I first extract data from Wikipedia, clean it, and then use it for fine-tuning.|1 ronan_77.wav|Last of all my tip number 10 is to do preference fine-tuning.|1 ronan_78.wav|This is where you have a data set with chosen which are better or preferred responses and rejected which are the responses to the same prompts but are of lower quality.|1 ronan_79.wav|You might have a set of data like this if you have production data from customers or from a chatbot.|1 ronan_80.wav|You may have some conversational data that you consider of good quality.|1 ronan_81.wav|You may even have corrected or annotated versions of those conversations where you've improved the assistance responses.|1 ronan_82.wav|That's going to be ideal as your chosen dataset.|1 ronan_83.wav|And you can always generate a rejected or lower quality dataset just by putting the same prompts into a language model and seeing what generic response it comes back with.|1 ronan_84.wav|So this approach here called ORPO or Odds Ratio Preference Optimization, it allows you to do both SFT, supervised fine tuning, and preference fine tuning at once.|1 ronan_86.wav|Here I recommend the Orpo YouTube video, and there's also a branch by that name in Advanced Fine Tuning.|1 ronan_87.wav|Orpo is also supported in the Unslot branch, where there's a Python Jupyter Notebook and also just a Python.py script you can run.|1 ronan_88.wav|And Orpo is supported as an option in the Multi-GPU branch too.|1 ronan_89.wav|So to recap these 10 tips, start with a small model, use LoRa or QLoRa, not full fine tuning.|1 ronan_90.wav|Always create 10 manual test questions or maybe a few more.|1 ronan_91.wav|Remember that manual data sets are probably underrated.|1 ronan_92.wav|You can always get a little bit of help from Python or from chat GPT.|1 ronan_93.wav|Start training on a small number of rows, even just one row to test the pipeline, but then 100 and make sure it's having a good effect before you decide to scale up.|1 ronan_94.wav|Make sure you know that the data type and the data set that you've set up is actually the right one.|1 ronan_95.wav|Number six, always use a validation set.|1 ronan_96.wav|Just split one off from a training set if you don't have one.|1 ronan_97.wav|Number seven, try to just start training on one GPU.|1 ronan_98.wav|Number eight, use weights and biases for tracking.|1 ronan_99.wav|And when you're scaling from small to large, increase first the rows, then move to using more VRAM with LoRa instead of QLoRa or full fine tuning instead of LoRa.|1 ronan_100.wav|By the way, there's a factor of four roughly in VRAM difference between each of those.|1 ronan_101.wav|So LoRa is about four times QLoRa and full fine tuning is about four times|1 ronan_103.wav|And last of all, increase to a larger model size only at the very end of your training process when you think you have a pipeline that's working well.|1 ronan_104.wav|Then for advanced tips, consider doing unsupervised fine-tuning if you have a large amount of data, only if you have a large amount of data, I'd say.|1 ronan_105.wav|And last of all, you can consider preference fine-tuning, in which case I'd recommend using ORPL, which will do supervised fine-tuning and odds ratio preference optimization.|1 ronan_107.wav|Now, this approach here, I've talked about for language models, but it also works for video and speech or images, multimodal models.|1 ronan_108.wav|So you can check out this video here on multimodal text plus image, where I prepare a dataset and bring it through fine tuning.|1 ronan_109.wav|And likewise, for this speech to text model, where I prepare a dataset and bring it through fine tuning.|1 ronan_110.wav|There are specific repos for a multimodal, that's the vision,|1 ronan_111.wav|the Vision repository here, and there's a repo for transcription.|1 ronan_112.wav|And this LLMs repo is the advanced fine-tuning repo I've been talking to date in or up until now in this presentation.|1 ronan_113.wav|I've laid out here all of the playlists that are relevant depending on what you need.|1 ronan_114.wav|So there are four different sections, four playlists and four repositories that go with them.|1 ronan_115.wav|There's the LLM fine-tuning playlist, which is all about fine-tuning language models.|1 ronan_116.wav|Then there's a repo for that, Advanced Fine Tuning.|1 ronan_117.wav|There's the Vision playlist, which is for multimodal models, and a repo link.|1 ronan_118.wav|There's a video on transcription and a repo link.|1 ronan_119.wav|And then there are many videos on server setup.|1 ronan_121.wav|And so here is the link for this.|1 ronan_123.wav|And there are also scripts on function calling inference and speed test too.|1 ronan_124.wav|I'll talk a little more about those just at the end of this video.|1 ronan_126.wav|You can purchase that all together now as a bundle.|1 ronan_128.wav|The first repo is the advanced fine-tuning repo, and this is split into branches according to function.|1 ronan_129.wav|They are all listed here roughly in the order that they have been released.|1 ronan_130.wav|Now, a few of the branches that I'll highlight are, first of all, the Wikipedia branch, which is for unsupervised fine-tuning and data cleaning.|1 ronan_131.wav|If you do want to do ORPO, you have the ORPO branch here.|1 ronan_132.wav|And if you want to prepare data, you can do so with the help of a language model.|1 ronan_133.wav|This is done in the memorization branch, where you can set up some data generation based on PDF content.|1 ronan_134.wav|And likewise, if you go to the supervised fine tuning branch, there is also a script or multiple scripts for generating Q&A data from a base data set right there.|1 ronan_135.wav|Then there are two important branches here, unsloth and multi-GPU.|1 ronan_136.wav|The unsloth branch allows you to run fine tuning in either a notebook or as a Python script.|1 ronan_137.wav|Whereas the multi-GPU branch allows you to run Python scripts that will deploy multi-GPU training that's fully shared data parallel or distributed data parallel.|1 ronan_138.wav|Now I'll briefly show you each of those two main branches.|1 ronan_139.wav|So here we'll go into the unsloth branch.|1 ronan_140.wav|The way that you run training in this Unslot branch is by setting up the configuration in a config file.|1 ronan_141.wav|I've also got a config file that you can use here if you want to do some function calling fine tuning.|1 ronan_143.wav|Then when you want to train your model, you simply run train.py, or you can run it step by step in a Python Jupyter notebook.|1 ronan_144.wav|Now, the notebook is recommended if you want to go through the training the first time, you can see step by step what's happening and easily print out things at intermediate points.|1 ronan_145.wav|But when you've got your script honed, it can be a lot faster to run a Python script.|1 ronan_146.wav|And that's why I have made this script available, which you just run from the command line and it will go through everything within the training.|1 ronan_147.wav|Just to give you a sense of how you configure the training and test setup, you'll set a model slug.|1 ronan_148.wav|You will then set some parameters, like whether you want to fine tune in 4-bit, what data type you want to use, depending on your GPU.|1 ronan_149.wav|You can then choose a data set, say for function calling, or if you want to memorize some data, like on the rules of TouchRugby.|1 ronan_152.wav|Next you set up your training and validation splits.|1 ronan_153.wav|Here I've selected a main branch for training and I've selected the training split.|1 ronan_154.wav|You can fix a max number of rows here.|1 ronan_155.wav|This will save you time if you just want to download and run on say 100 rows instead of on a massive dataset.|1 ronan_156.wav|Now I spoke earlier about generating a validation set.|1 ronan_157.wav|You can either download from a split that's on Hugging Face called test or validation, but you can also generate a validation split from the train split.|1 ronan_158.wav|If you just set this to true, it will sequester 20% of the training data to use as validation.|1 ronan_159.wav|Next up is the LoRa configuration.|1 ronan_160.wav|You have all the regular LoRa parameters you'll see here.|1 ronan_161.wav|Check out the live stream video on choosing LoRa parameters if you want to know more.|1 ronan_162.wav|You can set LoRa or LoRa alpha and also rank stabilize LoRa, set that to true or false.|1 ronan_163.wav|Here you've got some Weights and Biases project configurations.|1 ronan_164.wav|You set the project name, and then for each run, you can use a different name here for running in Weights and Biases.|1 ronan_165.wav|You can set up your Hugging Face username.|1 ronan_166.wav|This will be used when pushing models to Hub.|1 ronan_167.wav|Now there's a more advanced technique here where you can decide to train on completions only.|1 ronan_168.wav|This means that you will only be considering the loss on the answer portion, not on the prompt or question portion.|1 ronan_170.wav|So you set the completions to true here.|1 ronan_171.wav|Sometimes I use this for function calling, fine tuning.|1 ronan_172.wav|And then you need to let the model know where your answer is starting.|1 ronan_173.wav|So in a Lama 3 model, the answer will start after assistant and header ID.|1 ronan_174.wav|In a Lama 2 model, it will start after inst.|1 ronan_175.wav|And then I think this is maybe a chat ML format.|1 ronan_176.wav|the answer will start after I am start assistant.|1 ronan_177.wav|So this allows the training loop to check within your prompt.|1 ronan_178.wav|It will check for where this start of the assistance answer is, and then it will only look at the loss after that point.|1 ronan_181.wav|Next, you can decide whether you want to use ORPO or not.|1 ronan_182.wav|By default, I've got that set to false.|1 ronan_183.wav|If you're using ORPO, you need a column that's called chosen and one called rejected.|1 ronan_184.wav|and you can set your max prompt length and then the beta.|1 ronan_185.wav|The beta basically weighs how much of the preference fine-tuning, what's the importance of that loss relative to the standard SFT loss.|1 ronan_186.wav|Remember, ORPO does two things in one.|1 ronan_187.wav|It does SFT and it does preference fine-tuning in one.|1 ronan_188.wav|So if you have this at 0.2, it's kind of the importance of the odds ratio is about 0.2 relative to the SFT loss.|1 ronan_189.wav|Last of all, you can push to hub, so you can set a target model name if you want to push to hub.|1 ronan_190.wav|So very quickly, if we take a look at the test script, this will simply load the model.|1 ronan_191.wav|So it will load all of your configurations.|1 ronan_192.wav|It will load the model here, a fast language model using Unsloth.|1 ronan_193.wav|It will set up the tokenizer, set up the chat template, load the data set, either from your manual data that's in the repo or from Hugging Face.|1 ronan_194.wav|and then it will run inference through all of those samples and print the results out to file.|1 ronan_195.wav|Just as an example, I can show you within test output, you'll see here a large number of tests that I have run.|1 ronan_197.wav|So here is some fine tuning on TouchRugby, and you'll see there is a prompt, a question, and it'll print out the correct response, and it'll also print out the generated response.|1 ronan_198.wav|And then you can just manually compare whether these answers are good or not.|1 ronan_199.wav|Now, just one other script I'll point out here, which is ViewModules.|1 ronan_200.wav|You can just run Python ViewModules if you want to see what modules are within the given model.|1 ronan_201.wav|This allows you to pick out which modules you might want to fine tune using LoRa.|1 ronan_202.wav|And that's pretty much it for the unsloth branch, which is recommended if you're going to fine tune on one GPU.|1 ronan_204.wav|It has a config file that you can set up.|1 ronan_205.wav|It has the test.py and the train.py file that will allow you to run testing and training.|1 ronan_206.wav|And I'll just briefly show you the config file.|1 ronan_207.wav|So at the start here, you'll see this parameter that's not in the unsloth branch.|1 ronan_208.wav|If you set it to auto, it will just do standard training.|1 ronan_209.wav|You can train on multiple GPUs, but it will be pipeline parallel, so not quite as efficient.|1 ronan_210.wav|However, you can set this to DDP for distributed data parallel, or you can set it to FSDP for fully sharded data parallel.|1 ronan_211.wav|Now, when you're doing that, you'll need to configure the multi-GPU setup.|1 ronan_212.wav|That can be done by running config, accelerate config, and you'll see the instructions if you head over to the multi-GPU branch for doing that.|1 ronan_213.wav|So this is the Advanced Fine Tuning repo, and you can find out more at trials.com forward slash advanced dash fine dash tuning.|1 ronan_214.wav|The next repo I'll briefly go through is the Advanced Vision repo.|1 ronan_215.wav|This does much of the same, but for multimodal text plus image models.|1 ronan_216.wav|It allows you to prepare your data and push it up to create a Hugging Face dataset.|1 ronan_217.wav|Then you can fine-tune Lava Edifix and Moondream models.|1 ronan_218.wav|You can do multimodal server setup with text generation inference.|1 ronan_219.wav|There's a one-click template for running an Edifix server, including on a custom model.|1 ronan_220.wav|And last of all, there is a script for fine-tuning multimodal text plus video models.|1 ronan_221.wav|This is basically a variation on text plus image models where you split the video into multiple images.|1 ronan_222.wav|The next repo is the Advanced Inference repo.|1 ronan_223.wav|This allows you to set up a server so that you can hit an endpoint for a custom model.|1 ronan_224.wav|You can do so on RunPod, Vast AI, or using a Llama CPP type server.|1 ronan_225.wav|There's also the option to deploy serverlessly using RunPod.|1 ronan_226.wav|This means that you can put a server that will only turn on when it's being queried.|1 ronan_227.wav|and will turn off after it's been queried.|1 ronan_228.wav|It's quite useful for batch jobs that are less time sensitive, because it means you're not paying for the server when it's not being used, and it will just turn on when you need it, which is going to save you cost.|1 ronan_230.wav|Then there are speed tests for single queries and multiple queries.|1 ronan_231.wav|Data extraction, if you want to extract JSON or YAML data from files, so you can input some text and have a language model extracted for you into JSON or YAML format.|1 ronan_233.wav|So the idea is to use a very fast and relatively small language model to pick out the right snippets and then include those snippets in the context of a more powerful model, like say GPT-4.|1 ronan_235.wav|Last of all, there's the advanced transcription repository.|1 ronan_236.wav|This one here allows you to generate data if you want to fine tune a whisper model and then do the fine tuning.|1 ronan_237.wav|And again, much of the 10 tips that I provided earlier are going to apply here for transcription.|1 ronan_238.wav|And that is it for my 10 tips on fine-tuning.|1 ronan_239.wav|If I've left anything out, please let me know below in the comments and I'll get back to you.|1 ronan_240.wav|In the meantime, if you want more information on Trellis resources, including free and paid, try out trellis.com.|1 ronan_241.wav|That's T-R-E-L-L-I-S.com forward slash about.|1