title | link | article
---|---|---|
"Diffusers Image Fill" guide | https://hf.co/blog/OzzyGT/diffusers-image-fill |
<p>
This guide was an idea I had for a while, but I was finally asked for it by <a href="https://github.com/pietrobolcato" rel="nofollow">pietrobolcato</a> <a href="https://github.com/huggingface/diffusers/discussions/7482#discussioncomment-10529470" rel="nofollow">here</a>, so I made the decision to do it before it gets too old or I forget it.</p>
<p>The basic idea is to do a simple object remover, or to fill a selected part of the image that you want to change. For this we will use a controlnet and some easy techniques.</p>
<p>To be able to do this, we need two key models: one is the <a href="https://github.com/xinsir6/ControlNetPlus" rel="nofollow">ControlNetPlus Promax</a> and the second is a Lightning model; in this case, since I want to do photorealism, I'll use <a href="https://huggingface.co/SG161222/RealVisXL_V5.0_Lightning">RealVis 5.0 Lightning</a>.</p>
<p>The controlnet is not part of the diffusers core, but the official repository has all the instructions to make it work; you'll need the <code>StableDiffusionXLControlNetUnionPipeline</code>.</p>
<p>I also set up a space as a PoC of this guide. For it, I wrote a custom pipeline with just what we need to make it work. You can test it <a href="https://huggingface.co/spaces/OzzyGT/diffusers-image-fill">here</a>; if you run the app locally you can see the cool effect of how the image generates and fills the mask.</p>
<p align="center">
<img src="https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/diffusers_fill/2024-09-13_07-40.png" width="600"/>
</p>
<p>First we need an image; I downloaded some from <a href="https://unsplash.com/" rel="nofollow">unsplash.com</a>. Let's use as a demo the car in the mountains. The original image is <a href="https://unsplash.com/photos/a-car-parked-on-a-dirt-road-near-a-mountain-OCQjiB4tG5c" rel="nofollow">here</a> and was taken by <a href="https://unsplash.com/@jeffersonsees" rel="nofollow">jeffersonsees</a>.</p>
<p>Since this guide uses a custom pipeline with a custom controlnet that is not part of the core, I can't post the full code or it would be too big, so I'll try to give the key parts of what's needed to make it work. I will also simplify the process by assuming square images of 1024x1024, which is not ideal in a real-world scenario; this should be adapted to work with any image in any aspect ratio and resolution.</p>
<p>I'll use pillow to avoid doing too many conversions between formats, so let's make the image a square:</p>
<pre><code class="language-python"><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image
<span class="hljs-keyword">from</span> diffusers.utils <span class="hljs-keyword">import</span> load_image
source_image = load_image(
<span class="hljs-string">"https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/diffusers_fill/jefferson-sees-OCQjiB4tG5c-unsplash.jpg"</span>
)
width, height = source_image.size
min_dimension = <span class="hljs-built_in">min</span>(width, height)
left = (width - min_dimension) / <span class="hljs-number">2</span>
top = (height - min_dimension) / <span class="hljs-number">2</span>
right = (width + min_dimension) / <span class="hljs-number">2</span>
bottom = (height + min_dimension) / <span class="hljs-number">2</span>
final_source = source_image.crop((left, top, right, bottom))
final_source = final_source.resize((<span class="hljs-number">1024</span>, <span class="hljs-number">1024</span>), Image.LANCZOS)
</code></pre>
<p>Then we need a mask. You can use any method to get it: SAM2, BiRefNet or any of the newer models that let you mask objects, or it can be done manually. Since this guide isn't about masking, I'll use the <a href="https://huggingface.co/spaces/stevhliu/inpaint-mask-maker">inpaint mask maker</a> to generate one.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/OSahO1ONI-mwk-OJRc017.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/OSahO1ONI-mwk-OJRc017.png"/></a></p>
<p>Now that we have the two images, we need to delete the masked part from the original; the result is the image we're going to feed to the controlnet.</p>
<pre><code class="language-python"><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> ImageChops

mask = load_image(
<span class="hljs-string">"https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/diffusers_fill/car_mask.png"</span>
).convert(<span class="hljs-string">"L"</span>)
inverted_mask = ImageChops.invert(mask)
cnet_image = final_source.copy()
cnet_image.putalpha(inverted_mask)
</code></pre>
<p>This is the first part of the technique; the second is to use a Lightning model with fewer steps than a non-distilled model and to use the controlnet tile mode at full strength for all of the steps, so it preserves as much of the original image as possible.</p>
<p>I'll assume for this part the following:</p>
<ul>
<li>You downloaded the ControlNetModel_Union model python file and have it in the same directory as the script.</li>
<li>You have downloaded the controlnet model weights locally and renamed the files accordingly.</li>
</ul>
<p>The reason for the second one is that the official repo doesn't provide an easy-to-use format for the ProMax version of the model. If you want to see how to load it directly from the hub, you can read the official repository or look at the app code in the space.</p>
<pre><code class="language-python"><span class="hljs-keyword">import</span> torch
<span class="hljs-keyword">from</span> controlnet_union <span class="hljs-keyword">import</span> ControlNetModel_Union
<span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> AutoencoderKL, StableDiffusionXLControlNetPipeline, TCDScheduler
vae = AutoencoderKL.from_pretrained(<span class="hljs-string">"madebyollin/sdxl-vae-fp16-fix"</span>, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>)
controlnet_model = ControlNetModel_Union.from_pretrained(
<span class="hljs-string">"./controlnet-union-sdxl-1.0"</span>,
torch_dtype=torch.float16,
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
<span class="hljs-string">"SG161222/RealVisXL_V5.0_Lightning"</span>,
torch_dtype=torch.float16,
vae=vae,
custom_pipeline=<span class="hljs-string">"OzzyGT/pipeline_sdxl_fill"</span>,
controlnet=controlnet_model,
variant=<span class="hljs-string">"fp16"</span>,
).to(<span class="hljs-string">"cuda"</span>)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
prompt = <span class="hljs-string">"high quality"</span>
(
prompt_embeds,
negative_prompt_embeds,
pooled_prompt_embeds,
negative_pooled_prompt_embeds,
) = pipe.encode_prompt(prompt, <span class="hljs-string">"cuda"</span>, <span class="hljs-literal">True</span>)
image = pipe(
prompt_embeds=prompt_embeds,
negative_prompt_embeds=negative_prompt_embeds,
pooled_prompt_embeds=pooled_prompt_embeds,
negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
image=cnet_image,
)
</code></pre>
<p>With this, we get something like this image:</p>
<p align="center">
<img src="https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/diffusers_fill/weird_car_generation.png" width="600"/>
</p>
<p>I made a bad mask on purpose, one which leaves some details of the original car and makes the generation weird or bad. Sometimes we get the borders of the car, or something like this one; I even got a buffalo!!!</p>
<p>So now that we know that the mask affects the result a lot, I'll make a more detailed one that I know works, using GIMP. Since that mask will not be a pure black-and-white image, we need to convert it to a binary mask.</p>
<pre><code class="language-python">mask = load_image(
<span class="hljs-string">"https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/diffusers_fill/car_mask_good.png"</span>
).convert(<span class="hljs-string">"L"</span>)
binary_mask = mask.point(<span class="hljs-keyword">lambda</span> p: <span class="hljs-number">255</span> <span class="hljs-keyword">if</span> p > <span class="hljs-number">0</span> <span class="hljs-keyword">else</span> <span class="hljs-number">0</span>)
inverted_mask = ImageChops.invert(binary_mask)
</code></pre>
<p>My custom pipeline does a lot of the stuff you normally have to do under the hood; this means it sets the steps, the scales and the appropriate mode and image for the controlnet. You can use it if you want, but keep in mind that it's very restrictive and mostly suited to what I use it for in this guide.</p>
<p>Also take note that I use the TCD scheduler for this, since it's what I think works best with the Lightning models. I also tried using PAG, but it made the results worse for some reason.</p>
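<p>For reference, the sketch below shows roughly how the equivalent arguments would look on the standard SDXL controlnet pipeline. The exact values are my assumptions of sensible defaults for a Lightning model, not the ones hard-coded in <code>OzzyGT/pipeline_sdxl_fill</code>, and the union controlnet's tile-mode selection (which the custom pipeline also handles) is omitted:</p>
<pre><code class="language-python"># Illustrative only: approximately what the custom pipeline sets for you under the hood
# if you were calling a standard StableDiffusionXLControlNetPipeline yourself.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=cnet_image,
    num_inference_steps=8,              # few steps: Lightning model + TCD scheduler
    guidance_scale=1.5,                 # low CFG, as expected by distilled models
    controlnet_conditioning_scale=1.0,  # tile mode at full strength...
    control_guidance_start=0.0,
    control_guidance_end=1.0,           # ...kept active for the whole denoising process
).images[0]
</code></pre>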
<p>Now we get something like this:</p>
<p align="center">
<img src="https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/diffusers_fill/car_fill_generation.png" width="600"/>
</p>
<p>The last step for this guide: if you look closely, the image still gets changed. If the original image has good quality, like this one, you can see how it loses quality and some of the smaller details get blurry. To fix this we simply paste the generation back over the original alpha image using the mask, and the beauty of this technique is that it merges seamlessly; most people won't know it was inpainted if you don't tell them (I tested this).</p>
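<p>Concretely, the merge is just a masked paste with Pillow, exactly as in the full code at the end of this guide:</p>
<pre><code class="language-python"># paste the generation back onto the original (alpha) image, but only inside the
# masked area, so every pixel outside the mask keeps the original quality
image = image.convert("RGBA")
cnet_image.paste(image, (0, 0), binary_mask)
cnet_image.save("final_generation.png")
</code></pre>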
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th>generation</th>
<th>merged</th>
</tr>
</thead><tbody><tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/GSQaYRmMGuXgfQBH2PxjM.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/GSQaYRmMGuXgfQBH2PxjM.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/aalKI7dkdFy8678zMVmUr.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/aalKI7dkdFy8678zMVmUr.png"/></a></td>
</tr>
</tbody>
</table>
</div>
<p>For this example, since the bushes have a lot of detail, if you look closely you can see the transition, so depending on your use case it might be better to not do the final paste. But again, most people won't even notice this.</p>
<p>Here are some more examples (you can use the space to test them too):</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th>original</th>
<th>fill</th>
</tr>
</thead><tbody><tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/JkKqqnDnDipaf6fbJOkOt.jpeg" rel="nofollow"><img alt="image/jpeg" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/JkKqqnDnDipaf6fbJOkOt.jpeg"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/Mh4QK-mzahK4Q1MiqF6DD.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/Mh4QK-mzahK4Q1MiqF6DD.png"/></a></td>
</tr>
<tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/dKuiQi5jKFTIpmn9Qu77T.jpeg" rel="nofollow"><img alt="image/jpeg" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/dKuiQi5jKFTIpmn9Qu77T.jpeg"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/tjZwj6gPljBD2Xq3ocFYC.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/tjZwj6gPljBD2Xq3ocFYC.png"/></a></td>
</tr>
</tbody>
</table>
</div>
<p>Credits for the images:</p>
<p>First one: <a href="https://unsplash.com/photos/a-blurry-photo-of-a-man-walking-past-a-restaurant-JuBjdYyxaAg" rel="nofollow">original</a> by <a href="https://unsplash.com/@leonardodbi" rel="nofollow">
Leonardo Iribe</a></p>
<p>Second one: <a href="https://unsplash.com/photos/a-woman-walking-across-a-bridge-over-a-body-of-water-BPoQtQl_6iE" rel="nofollow">original</a> by <a href="https://unsplash.com/@raymondpetrik" rel="nofollow">Raymond Petrik</a></p>
<p>There's also an added benefit that I plan to use for a new outpainting guide, and that is that you can expand an image, so this is ideal for generating a temporary background that we can use to add detail to later.</p>
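<p>As a rough sketch of how the input for this expansion could be prepared (this is my own illustration with Pillow, not the code used in the space; the padding size is arbitrary and the canvas would still need to be resized to a resolution the model handles well):</p>
<pre><code class="language-python">from PIL import Image

expand = 256  # hypothetical padding in pixels on each side
w, h = final_source.size
canvas = Image.new("RGBA", (w + 2 * expand, h + 2 * expand), (0, 0, 0, 0))
canvas.paste(final_source.convert("RGBA"), (expand, expand))

# the transparent border is the area the controlnet will fill,
# and a binary "keep" mask can be derived from the alpha channel
outpaint_input = canvas
keep_mask = canvas.split()[3].point(lambda p: 255 if p > 0 else 0)
</code></pre>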
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th>original</th>
<th>expanded</th>
</tr>
</thead><tbody><tr>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/SHJyLWcbEkl6qJAVyVYTX.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/SHJyLWcbEkl6qJAVyVYTX.png"/></a></td>
<td><a href="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/WFiUGR4M-TZGxZqbEur1g.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63df091910678851bb0cd0e0/WFiUGR4M-TZGxZqbEur1g.png"/></a></td>
</tr>
</tbody>
</table>
</div>
<p>To improve the results, I encourage you to use some more advanced techniques like:</p>
<ul>
<li>Use differential diffusion to merge the seams with the original image</li>
<li>Upscale the masked final generation, use it with img2img to add more details and then paste it back on the original (see the sketch after this list).</li>
<li>Adapt this to better models (SD3 or Flux) when their controlnets get as good as this one.</li>
</ul>
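<p>For the second point, a minimal sketch could look like the following. This is my own illustration with a standard SDXL img2img pipeline (reusing <code>vae</code> and <code>cnet_image</code> from the code above); the crop box, upscale size and strength are arbitrary placeholders:</p>
<pre><code class="language-python">import torch
from PIL import Image
from diffusers import AutoPipelineForImage2Image, TCDScheduler

refiner = AutoPipelineForImage2Image.from_pretrained(
    "SG161222/RealVisXL_V5.0_Lightning", torch_dtype=torch.float16, vae=vae, variant="fp16"
).to("cuda")
refiner.scheduler = TCDScheduler.from_config(refiner.scheduler.config)

# crop the filled region (hypothetical box), upscale it and add details at low strength
box = (256, 256, 768, 768)
crop = cnet_image.convert("RGB").crop(box).resize((1024, 1024), Image.LANCZOS)
detailed = refiner(
    prompt="high quality",
    image=crop,
    strength=0.3,
    num_inference_steps=8,
    guidance_scale=1.5,
).images[0]

# scale it back down and paste it over the filled image
detailed = detailed.resize((box[2] - box[0], box[3] - box[1]), Image.LANCZOS)
cnet_image.paste(detailed, box[:2])
</code></pre>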
<p>That's it for this guide. I hope it helps you learn how to use this awesome controlnet and gives you a head start on getting good quality images that you can use in your work.</p>
<p>The final full code with the final merge:</p>
<pre><code class="language-python"><span class="hljs-keyword">import</span> torch
<span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image, ImageChops
<span class="hljs-keyword">from</span> controlnet_union <span class="hljs-keyword">import</span> ControlNetModel_Union
<span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> AutoencoderKL, StableDiffusionXLControlNetPipeline, TCDScheduler
<span class="hljs-keyword">from</span> diffusers.utils <span class="hljs-keyword">import</span> load_image
source_image = load_image(
<span class="hljs-string">"https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/diffusers_fill/jefferson-sees-OCQjiB4tG5c-unsplash.jpg"</span>
)
width, height = source_image.size
min_dimension = <span class="hljs-built_in">min</span>(width, height)
left = (width - min_dimension) / <span class="hljs-number">2</span>
top = (height - min_dimension) / <span class="hljs-number">2</span>
right = (width + min_dimension) / <span class="hljs-number">2</span>
bottom = (height + min_dimension) / <span class="hljs-number">2</span>
final_source = source_image.crop((left, top, right, bottom))
final_source = final_source.resize((<span class="hljs-number">1024</span>, <span class="hljs-number">1024</span>), Image.LANCZOS).convert(<span class="hljs-string">"RGBA"</span>)
mask = load_image(
<span class="hljs-string">"https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/diffusers_fill/car_mask_good.png"</span>
).convert(<span class="hljs-string">"L"</span>)
binary_mask = mask.point(<span class="hljs-keyword">lambda</span> p: <span class="hljs-number">255</span> <span class="hljs-keyword">if</span> p > <span class="hljs-number">0</span> <span class="hljs-keyword">else</span> <span class="hljs-number">0</span>)
inverted_mask = ImageChops.invert(binary_mask)
alpha_image = Image.new(<span class="hljs-string">"RGBA"</span>, final_source.size, (<span class="hljs-number">0</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>))
cnet_image = Image.composite(final_source, alpha_image, inverted_mask)
vae = AutoencoderKL.from_pretrained(<span class="hljs-string">"madebyollin/sdxl-vae-fp16-fix"</span>, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>)
controlnet_model = ControlNetModel_Union.from_pretrained(
<span class="hljs-string">"./controlnet-union-sdxl-1.0"</span>,
torch_dtype=torch.float16,
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
<span class="hljs-string">"SG161222/RealVisXL_V5.0_Lightning"</span>,
torch_dtype=torch.float16,
vae=vae,
custom_pipeline=<span class="hljs-string">"OzzyGT/pipeline_sdxl_fill"</span>,
controlnet=controlnet_model,
variant=<span class="hljs-string">"fp16"</span>,
).to(<span class="hljs-string">"cuda"</span>)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
prompt = <span class="hljs-string">"high quality"</span>
(
prompt_embeds,
negative_prompt_embeds,
pooled_prompt_embeds,
negative_pooled_prompt_embeds,
) = pipe.encode_prompt(prompt, <span class="hljs-string">"cuda"</span>, <span class="hljs-literal">True</span>)
image = pipe(
prompt_embeds=prompt_embeds,
negative_prompt_embeds=negative_prompt_embeds,
pooled_prompt_embeds=pooled_prompt_embeds,
negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
image=cnet_image,
)
image = image.convert(<span class="hljs-string">"RGBA"</span>)
cnet_image.paste(image, (<span class="hljs-number">0</span>, <span class="hljs-number">0</span>), binary_mask)
cnet_image.save(<span class="hljs-string">"final_generation.png"</span>)
</code></pre>
|
All LLMs Write Great Code, But Some Make (A Lot) Fewer Mistakes | https://hf.co/blog/onekq/all-llms-write-great-code | A huge thank to 🤗HuggingFace🤗 |
Training Flux Locally on Mac | https://hf.co/blog/AlekseyCalvin/mac-flux-training |
<p>
For all those struggling to set this up right now.</p>
<p><em><strong><strong>(rearticulated by A.C.T. soon® from a post/repo by Hughescr and the ai-toolkit Flux training script by Ostris)</strong></strong></em></p>
<p>This workflow is not grounded in Diffusers. However, I have not yet encountered a working Diffusers implementation of local Flux training on Mac/mps. If such a workflow/pipeline exists, I would sincerely appreciate it if someone linked me to it (or/and advised me on implementation details). Such as via <a href="mailto:alekseycalvin@gmail.com" rel="nofollow">alekseycalvin@gmail.com</a>, or a comment somewhere... Like, say, one of my Flux LoRA repos here on Huggingface... (By the way, check them out? To improve Flux Schnell, use <a href="https://huggingface.co/AlekseyCalvin/historic_color_schnell">Historic Color Schnell</a>.)</p>
<p>But to the point, the repo to train locally on Mac is <a href="https://github.com/hughescr/ai-toolkit" rel="nofollow">here</a>, as a somewhat modified branch of Ostris' ai-toolkit training script git.</p>
<p><strong>Below sits the link to the</strong> ai-toolkit <strong>repo modified for MacOS:</strong>
<a href="https://github.com/hughescr/ai-toolkit" rel="nofollow">https://github.com/hughescr/ai-toolkit</a></p>
<p>To be clear, I'm not the person behind this branch, but I myself finally stumbled upon it whilst seeking far and wide, for many hours, for any extant Flux training solution adapted for MacOS/silicon. So, if this works for you, then please thank that prodigious wizard Ostris (the developer of ai-toolkit training scripts), along with this Mac-oriented branch's mysterious author: a certain Hughescr. </p>
<p>Credit and solidarity further extends to all who -- in chronic scepticism of seemingly insurmountable limitations -- stubbornly tinker and quest for options, solutions, and possibilities. </p>
<p>In any case, on the basis of another guide post by Hughescr, plus a few notes/details added from myself for clarity, I just put together the below short guide on setting up local Flux training on Macs using ai-toolkit + the Hughescr branch. </p>
<p><em><strong>Take heed though: without further optimization, this is very unlikely to work on Mac systems with low unified memory!</strong></em> </p>
<p><strong><strong>WORKFLOW to TRAIN FLUX On Mac/OSX/mps:</strong></strong></p>
<p><strong>In Terminal, clone <strong><a href="https://github.com/hughescr/ai-toolkit" rel="nofollow">https://github.com/hughescr/ai-toolkit</a></strong>, following the Linux instructions in the</strong> README <strong>there.</strong></p>
<p><strong>As in:</strong></p>
<pre><code class="language-python">git clone https://github.com/hughescr/ai-toolkit
</code></pre>
<p><strong>Then travel over to the cloned directory:</strong></p>
<pre><code class="language-python">cd ai-toolkit
</code></pre>
<p><strong>Do this:</strong></p>
<pre><code class="language-python">git submodule update --init --recursive
</code></pre>
<p><strong>Then make a virtual environment from the same folder:</strong></p>
<pre><code class="language-python">python3 -m venv venv
</code></pre>
<p><strong>Activate it:</strong></p>
<pre><code class="language-python">source venv/<span class="hljs-built_in">bin</span>/activate
</code></pre>
<p><strong>Install</strong> PyTorch <strong>:</strong></p>
<pre><code class="language-python">pip3 install torch
</code></pre>
<p><strong>Install requirements for the</strong> ai-toolkit, <strong>which should also extend it with certain submodules updated/introduced by Hughescr:</strong></p>
<pre><code class="language-python">pip3 install -r requirements.txt
</code></pre>
<p><strong>Here's a list of all that Hughescr introduced to that branch of Ostris'</strong> ai-toolkit training script, <strong>in order to adapt it for Mac OSX (I quote the below from their post):</strong></p>
<p><strong>-- Using</strong> torch.amp <strong>instead of</strong> torch.cuda.amp <strong>(which should work for CUDA too, but will allow MPS to work, using an MPS-compatible GradScaler).</strong></p>
<p><strong>-- Force-using</strong> spawn <strong>instead of</strong> fork <strong>for multiprocessing.</strong></p>
<p><strong>-- Turning off the T5 quantizer, because this won't work on MPS.</strong></p>
<p><strong>-- Forcing the dataloader to have</strong> num_workers=0 <strong>(otherwise the process breaks on Mac).</strong>
<em>This may be done by adding</em> "num_workers=0" <em>to your <strong>config file</strong> for a prospective training: in this context, this would be your variant of one of the</em> (.yaml) <em>template configs from</em> /ai-toolkit/config/examples. <em>The Hughescr branch of ai-toolkit is supposed to already pre-enforce this particular option, even irrespectively of the config, but it might be better to make doubly sure manually.</em></p>
<p><em>On a side note for those aspiring Flux remodelers who are new to training scripts or relatively code-fresh: The template config file, typically in a file format of either</em> .yaml <em>or</em> .json <em>(such as for</em> <a href="https://github.com/kohya-ss/sd-scripts" rel="nofollow">Kohya ss</a><em>) is an essential component of launching a local training (at least without some GUI interface container/app), and typically carries strict internal formatting rules, corresponding to its data type broadly and/or family/architecture of trainers more specifically. As such, whilst specifying</em> "num_workers=0" <em>or filling in your training parameters or modifying anything else within a config .yaml (or .json, etc), make sure to match closely the format and syntax found throughout the config template! Else get condemned to exasperating backtracking come runtime.</em> </p>
<p><em>Relatedly, the</em> /ai-toolkit <em>local trainer scripts folder contains a wide range of template configs, not just for training Flux, but for many other sorts of models as well. There's much there to explore and potentially try. Instrumental to our given case, however, are the specific template config</em> .yaml <em>files for</em> <a href="https://huggingface.co/black-forest-labs/FLUX.1-dev">Flux Dev</a> <em>and</em> <a href="/blog/AlekseyCalvin/black-forest-labs/FLUX.1-schnell">Flux Schnell</a>. <em>These configs,</em> train_lora_flux_24gb.yaml <em>and</em> train_lora_flux_schnell_24gb.yaml, <em>are found in the</em> <strong>/config/examples/</strong> <em>subfolder of the cloned-in</em> <strong>/ai-toolkit</strong> <em>folder: the relevant config (for either training Dev or Schnell) is meant to get duplicated by you and thereafter modified for your use with the training script. These configs may then be brought in as an argument to the</em> run.py <em>launcher, if you want to launch the trainer all at once directly from the config and through the Terminal. Or the config could be dragged into/opened via a dedicated UI. The built-in ai-toolkit UI may be launched from the same</em> <strong>/ai-toolkit</strong> <em>folder, via the</em> flux_train_ui.py <em>Python executable.)</em></p>
<pre><code> 🌕🌖🌗🌘🌑🌒🌓🌔🌕🌖🌗🌘🌑🌒🌓🌔🌕🌖🌗🌘🌑🌒🌓🌔🌕
</code></pre>
<p><strong>NOW, to run/modify the script, follow further usage instructions here:</strong></p>
<p><a href="https://github.com/hughescr/ai-toolkit#tutorial" rel="nofollow">https://github.com/hughescr/ai-toolkit#tutorial</a></p>
<p><strong>Finally, in order to side-step functions not yet implemented in MPS, one needs to launch the training script with the following environment variable set:</strong></p>
<pre><code class="language-python">PYTORCH_ENABLE_MPS_FALLBACK=<span class="hljs-number">1</span>
</code></pre>
<p><strong>This is basically a way of enabling selective/temporary CPU-offload of operations unable to work with MPS/silicon.</strong></p>
<p><strong>As in:</strong></p>
<pre><code class="language-python">PYTORCH_ENABLE_MPS_FALLBACK=<span class="hljs-number">1</span> python run.py config/your-custom-config-file.yaml
</code></pre>
<p><strong>This should launch the custom training config, and thereby the training itself!</strong></p>
<p><strong>Lastly, just for clarity, and in case anyone reading this is new to manually-launched training, I will reiterate:</strong></p>
<p><strong>To specify stuff like, say,</strong> <em>dataset folder location input, model output folder location, trigger phrase/token, learning rate, optimizer, etc..</em>. <strong>one must duplicate and modify the</strong> .yaml <strong>config file from the</strong> /config <strong>or the</strong> /config/examples/ <strong>subfolder of your</strong> /ai-toolkit <strong>folder...</strong></p>
<p><strong>NOW GO AND TRY THIS!</strong></p>
<p><em><strong>Sincerely,</strong></em> </p>
<p><em><strong>A.C.T. SOON®</strong></em></p>
|
The Impact of Real-Time Summarization on Decision-Making | https://hf.co/blog/megoyaw3/impact-of-real-time-summarization | Final Words! |
Improving performance with Arena Learning in post training | https://hf.co/blog/satpalsr/arena-learning-post-train-data-performance-improve | References |
Fine Tuning a LLM Using Kubernetes with Intel® Gaudi® Accelerator | https://hf.co/blog/omarkhleif/gaudi-k8s-llm-finetuning | Citations |
Introducing AISAK-O | https://hf.co/blog/mandelakori/aisak-o | Beta Testing Opportunity |
Full Training Tutorial and Guide and Research For a FLUX Style | https://hf.co/blog/MonsterMMORPG/full-training-tutorial-and-research-for-flux-style | More Example Images - Last One Is Trained Dataset |
Fine-tuning a token classification model for legal data using Argilla and AutoTrain | https://hf.co/blog/bikashpatra/legal-data-token-classification-fine-tuning | 9. Acknowledgements |
Llama-3.1 8B Carrot - Capx AI | https://hf.co/blog/adarshxs/capx-vision | Conclusion |
Getty Images Brings High-Quality, Commercially Safe Dataset to Hugging Face | https://hf.co/blog/andreagagliano/gettyimages-brings-dataset-to-huggingface |
<p>
<em>Andrea Gagliano, Head of AI/ML at Getty Images</em></p>
<p>Hey Hugging Face community! We are Getty Images, and we’re excited to partner with Hugging Face to share something we think you’ll love – AI/ML scientists are now able to access a new sample dataset of our own wholly owned creative images and associated structured metadata that we’re making available right here on Hugging Face. </p>
<p>The <a href="https://huggingface.co/datasets/GettyImages/Getty-Images-Dataset">Getty Images Sample Dataset</a> includes 3,750 high-quality images from 15 categories, providing a wide range of visuals for various applications. If you’re into building generative AI models or enhancing ML capabilities that not only look good but are also built responsibly and safe for commercial use, this is for you.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/65cbb52fab05dc46afbe5184/8xX6wcHmFT779VR7YDfxm.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/65cbb52fab05dc46afbe5184/8xX6wcHmFT779VR7YDfxm.png"/></a></p>
<p>For those who might not be familiar with <a href="https://www.gettyimages.com" rel="nofollow">Getty Images</a> or are scratching your heads wondering why you’ve found us on Hugging Face, know that we’re passionate about visual content, and we know many of you are too. For those who need an introduction, we are a leading global visual content creator and marketplace, and the first place people turn to discover, purchase and share powerful visual content from the world’s best photographers and videographers. </p>
<p>What you may not know about us, is that we also think that building AI/ML capabilities is as much about the data as it is about the algorithms. You can have the best model architecture, but if your data isn’t up to par, your outputs won’t be either. </p>
<p>That’s why we’ve curated a sample dataset that’s packed with high-quality images and rich metadata. Our data represents the cleanest, highest quality creative photo open dataset available, offering you: </p>
<ul>
<li><p>Consistently high-quality images, free from low-resolution issues </p>
</li>
<li><p>Rich structured metadata that helps your models understand context better </p>
</li>
<li><p>A curated selection without excessive infographics and NSFW content </p>
</li>
<li><p>No unwanted celebrity images, no trademark brands, products or characters, or identifiable people or locations in your training data </p>
</li>
<li><p>Detailed information on usage rights, ensuring peace of mind.</p>
</li>
</ul>
<p><strong>Building with Responsibility</strong> </p>
<p>What we are also passionate about is respecting the rights of creators and sustaining ongoing creation by obtaining consent from rights holders for AI training. This means that this sample dataset is commercially safe, meaning you can focus on building and innovating without worrying about accidentally infringing on someone’s rights. </p>
<p>But what does ‘commercially safe’ really mean? To us this means that our datasets are free from misappropriated training data. It means our dataset is clean and made up of licensed creative pre-shot visuals (not editorial). It means that the resulting outputs will not generate an image that includes trademark brands, products or characters, or identifiable people or locations. </p>
<p>Plus, if you go on to license a full data set from us you will be contributing to a more sustainable ecosystem. Revenue from our training data licensing goes back to the creators, supporting the artists and photographers who made these images possible. It’s a way to innovate responsibly and ensure that everyone involved in the creative process benefits. </p>
<p>We’re not just dropping this sample dataset and disappearing—we want to be part of the conversation on the Hub. We’re here to collaborate, share insights, and see what incredible things the Hugging Face community will create with this data. Whether you’re refining an existing model or starting from scratch, we’re excited to see how you’ll push the boundaries. </p>
|
LLM Inference at scale with TGI | https://hf.co/blog/martinigoyanes/llm-inference-at-scale-with-tgi | Relevant metrics per use case |
Meet Yi-Coder: A Small but Mighty LLM for Code | https://hf.co/blog/lorinma/yi-coder | Citation |
Converting Models to Core ML | https://hf.co/blog/fguzman82/frompytorch-to-coreml | References and Resources |
The Environmental Impacts of AI -- Primer | https://hf.co/blog/sasha/ai-environment-primer | 📕 References 📕 |
10 Star Webflow (no-code) Players Providing Premium Services | https://hf.co/blog/megoyaw3/best-webflow-players-in-the-market | 10. Creativecorner |
Selective fine-tuning of Language Models with Spectrum | https://hf.co/blog/anakin87/spectrum | Main References |
Key Insights into the Law of Vision Representations in MLLMs | https://hf.co/blog/Borise/law-vision-representation-in-mllms | In the end |
Extending *Transformer layers as Painters* to DiT's | https://hf.co/blog/NagaSaiAbhinay/transformer-layers-as-painters-dit | References & Citations |
To what extent are we responsible for our content and how to create safer Spaces? | https://hf.co/blog/davidberenstein1957/responsibility-for-ai-content-and-safer-spaces |
<p>
This is a brief blog that outlines some thoughts surrounding the question: To what extent are we responsible for our content and how to create safer Spaces? Certainly relevant for the Telegram CEO Pavel Durov but not less important for people like you and me.</p>
<p>😅 My own "oops"-moment. I created a space with a Flux model and it resulted in some inappropriate content generation. So, I had a small discussion about creating safe AI with some colleagues over at Hugging Face. Here’s what you can do!👇</p>
<p>🔦 The ethics team has a nice collection of tools and ideas to help owners secure their code and prevent misuse. Several ways to create safer spaces can be found here. <a href="https://huggingface.co/collections/society-ethics/provenance-watermarking-and-deepfake-detection-65c6792b0831983147bb7578">https://huggingface.co/collections/society-ethics/provenance-watermarking-and-deepfake-detection-65c6792b0831983147bb7578</a></p>
<p>📷 Use AI classifiers to filter out harmful or inappropriate content. It’s a simple but effective way to stop misuse in its tracks. For stable diffusion, we have implemented a basic baseline to block basic keywords and terms. <a href="https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py" rel="nofollow">https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py</a></p>
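<p>As a small illustration (my own sketch, not code from the linked file): the Stable Diffusion pipelines in diffusers load this safety checker by default, and its verdict comes back with the output, so a Space can refuse to display flagged images:</p>
<pre><code class="language-python">import torch
from diffusers import StableDiffusionPipeline

# the safety checker is loaded by default for this example model
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

out = pipe("a photo of a cat")
if out.nsfw_content_detected[0]:
    print("Blocked: the safety checker flagged this image.")
else:
    out.images[0].save("result.png")
</code></pre>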
<p>📊 Track Usage: Consider monitoring user activities in some way, like logging IP addresses. While there are privacy concerns and GDPR-related caveats, it helps to detect and prevent abuse.</p>
<p>⚓ Most content platforms fall under the international safe harbour principle, which does not hold them accountable for illegal content if they don't know it is there (and for privacy reasons you often simply can't know), and if they act promptly once they do.
<a href="https://en.wikipedia.org/wiki/International_Safe_Harbor_Privacy_Principles" rel="nofollow">https://en.wikipedia.org/wiki/International_Safe_Harbor_Privacy_Principles</a></p>
<p>📜 Clear Guidelines: Set transparent usage policies. Make sure users understand what’s acceptable and what the consequences are for breaking the rules. We have some at Hugging Face too. <a href="https://huggingface.co/content-guidelines">https://huggingface.co/content-guidelines</a></p>
<p>⚖️ Open Source Legal clauses for products using LLMs: This morning I saw this post from Gideon Mendels from Comet ML that shared public legal clauses that should cover common risky scenarios around the usage of LLMs in production. <a href="https://gist.github.com/gidim/18e1685f6a47b235e393e57bad89d454" rel="nofollow">https://gist.github.com/gidim/18e1685f6a47b235e393e57bad89d454</a></p>
<p>Thanks for the discussion 🤓
Noemie Chirokoff, Margaret Mitchell, Omar Sanseviero, Bruna Sellin Trevelin</p>
|
Understanding Vector Quantization in VQ-VAE | https://hf.co/blog/ariG23498/understand-vq | Bringing it together |
DEMO: French Spoken Language Understanding with the new speech resources from NAVER LABS Europe | https://hf.co/blog/mzboito/naver-demo-french-slu | Aknowledgments: |
How to integrate Apify with Huggging Face | https://hf.co/blog/airabbitX/how-to-integrate-apify-with-huggging-face | Conclusion |
How to Use SSAST Model Weights in the HuggingFace Ecosystem? | https://hf.co/blog/Syoy/use-ssast-model-weights-with-huggingface | References |
Searching for better (Full) ImageNet ViT Baselines | https://hf.co/blog/rwightman/vit-sbb-imagenet-full |
<p>
<code>timm</code> 1.0.9 was just released. Included are a few new ImageNet-12k and ImageNet-12k -> ImageNet-1k weights in my <a href="https://huggingface.co/collections/timm/searching-for-better-vit-baselines-663eb74f64f847d2f35a9c19">Searching for Better ViT Baselines</a> series. </p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th>model</th>
<th>top1</th>
<th>top5</th>
<th>param_count</th>
<th>img_size</th>
</tr>
</thead><tbody><tr>
<td><a href="https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k">vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k</a></td>
<td>87.438</td>
<td>98.256</td>
<td>64.11</td>
<td>384</td>
</tr>
<tr>
<td><a href="https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k">vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k</a></td>
<td>86.608</td>
<td>97.934</td>
<td>64.11</td>
<td>256</td>
</tr>
<tr>
<td><a href="https://huggingface.co/timm/vit_betwixt_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k">vit_betwixt_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k</a></td>
<td>86.594</td>
<td>98.02</td>
<td>60.4</td>
<td>384</td>
</tr>
<tr>
<td><a href="https://huggingface.co/timm/vit_betwixt_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k">vit_betwixt_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k</a></td>
<td>85.734</td>
<td>97.61</td>
<td>60.4</td>
<td>256</td>
</tr>
</tbody>
</table>
</div>
<p>I'd like to highlight these models as they're on the pareto front for ImageNet-12k / ImageNet-22k models. It is interesting to look at models with comparable ImageNet-22k fine-tunes to see how competitive (near) vanilla ViTs are with other architectures. With optimized attention kernels enabled (default in <code>timm</code>), they are well ahead of Swin and holding up just fine relative to ConvNeXt, etc. </p>
<p>Something else worth pointing out: the <code>deit3</code> model weights are a quite remarkable and underappreciated set of weights. The upper end of my <code>sbb</code> weights is matching <code>deit3</code> at equivalent compute -- it's also a great recipe. Though, one of my goals with the <code>sbb</code> recipes was to allow easier fine-tuning. In opting for a less exotic augmentation scheme, sticking with AdamW, and sacrificing some top-1 (higher weight-decay), I feel that was achieved. Through several fine-tune trials I've found the sbb ViT weights to be easier to fit to other, especially smaller, datasets (Oxford Pets, RESISC, etc.) w/ short runs.</p>
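<p>For reference, loading one of these checkpoints with <code>timm</code> for inference takes just a few lines (a minimal sketch using the standard <code>timm</code> data helpers; the weights download from the HF Hub):</p>
<pre><code class="language-python">import timm
import torch

model = timm.create_model(
    "vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k", pretrained=True
).eval()

# build the matching eval transform from the model's pretrained config
data_config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_config, is_training=False)

# dummy forward pass; replace the random tensor with transform(pil_image).unsqueeze(0)
with torch.inference_mode():
    logits = model(torch.randn(1, 3, 384, 384))
print(logits.shape)  # torch.Size([1, 1000])
</code></pre>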
<p>NOTE: all throughput measurements were done on an RTX 4090 w/ AMP and torch.compile() enabled, PyTorch 2.4, CUDA 12.4.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/604a5184dca2c7ac7508b849/Cc_ERcZ5yRZCl9PjlOKSv.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/604a5184dca2c7ac7508b849/Cc_ERcZ5yRZCl9PjlOKSv.png"/></a></p>
<p><strong>Bold rows</strong>: Pareto frontier models</p>
<div class="max-w-full overflow-auto">
<table>
<thead><tr>
<th>model</th>
<th>img_size</th>
<th>samples_per_sec</th>
<th>top1</th>
<th>top5</th>
<th>param_count</th>
</tr>
</thead><tbody><tr>
<td><strong><a href="http://hf.co/timm/deit3_base_patch16_224.fb_in22k_ft_in1k" rel="nofollow">deit3_base_patch16_224.fb_in22k_ft_in1k</a></strong></td>
<td><strong>224</strong></td>
<td><strong>3326.85</strong></td>
<td><strong>85.73</strong></td>
<td><strong>97.75</strong></td>
<td><strong>86.59</strong></td>
</tr>
<tr>
<td><strong><a href="http://hf.co/timm/vit_betwixt_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k" rel="nofollow">vit_betwixt_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k</a></strong></td>
<td><strong>256</strong></td>
<td><strong>3302.28</strong></td>
<td><strong>85.73</strong></td>
<td><strong>97.61</strong></td>
<td><strong>60.40</strong></td>
</tr>
<tr>
<td><a href="http://hf.co/timm/vit_base_patch16_224.augreg2_in21k_ft_in1k" rel="nofollow">vit_base_patch16_224.augreg2_in21k_ft_in1k</a></td>
<td>224</td>
<td>3278.15</td>
<td>85.11</td>
<td>97.54</td>
<td>86.57</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/vit_base_patch16_224.augreg_in21k_ft_in1k" rel="nofollow">vit_base_patch16_224.augreg_in21k_ft_in1k</a></td>
<td>224</td>
<td>3274.99</td>
<td>84.53</td>
<td>97.30</td>
<td>86.57</td>
</tr>
<tr>
<td><strong><a href="http://hf.co/timm/vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k" rel="nofollow">vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k</a></strong></td>
<td><strong>256</strong></td>
<td><strong>2761.64</strong></td>
<td><strong>86.60</strong></td>
<td><strong>97.94</strong></td>
<td><strong>64.11</strong></td>
</tr>
<tr>
<td><strong><a href="http://hf.co/timm/caformer_m36.sail_in22k_ft_in1k" rel="nofollow">caformer_m36.sail_in22k_ft_in1k</a></strong></td>
<td><strong>224</strong></td>
<td><strong>2345.11</strong></td>
<td><strong>86.61</strong></td>
<td><strong>98.04</strong></td>
<td><strong>56.20</strong></td>
</tr>
<tr>
<td><a href="http://hf.co/timm/convformer_m36.sail_in22k_ft_in1k" rel="nofollow">convformer_m36.sail_in22k_ft_in1k</a></td>
<td>224</td>
<td>2319.68</td>
<td>86.15</td>
<td>97.85</td>
<td>57.05</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/swin_base_patch4_window7_224.ms_in22k_ft_in1k" rel="nofollow">swin_base_patch4_window7_224.ms_in22k_ft_in1k</a></td>
<td>224</td>
<td>2176.48</td>
<td>85.27</td>
<td>97.57</td>
<td>87.77</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/regnety_160.sw_in12k_ft_in1k" rel="nofollow">regnety_160.sw_in12k_ft_in1k</a></td>
<td>224</td>
<td>2098.25</td>
<td>85.59</td>
<td>97.67</td>
<td>83.59</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k" rel="nofollow">coatnet_2_rw_224.sw_in12k_ft_in1k</a></td>
<td>224</td>
<td>1753.63</td>
<td>86.58</td>
<td>97.90</td>
<td>73.87</td>
</tr>
<tr>
<td><strong><a href="http://hf.co/timm/vit_betwixt_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k" rel="nofollow">vit_betwixt_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k</a></strong></td>
<td><strong>384</strong></td>
<td><strong>1467.64</strong></td>
<td><strong>86.60</strong></td>
<td><strong>98.02</strong></td>
<td><strong>60.60</strong></td>
</tr>
<tr>
<td><strong><a href="http://hf.co/timm/convnext_large.fb_in22k_ft_in1k" rel="nofollow">convnext_large.fb_in22k_ft_in1k</a></strong></td>
<td><strong>224</strong></td>
<td><strong>1457.60</strong></td>
<td><strong>86.61</strong></td>
<td><strong>98.04</strong></td>
<td><strong>197.77</strong></td>
</tr>
<tr>
<td><a href="http://hf.co/timm/convnext_small.in12k_ft_in1k_384" rel="nofollow">convnext_small.in12k_ft_in1k_384</a></td>
<td>384</td>
<td>1350.43</td>
<td>86.19</td>
<td>97.92</td>
<td>50.22</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288" rel="nofollow">seresnextaa101d_32x8d.sw_in12k_ft_in1k_288</a></td>
<td>288</td>
<td>1297.79</td>
<td>86.54</td>
<td>98.09</td>
<td>93.59</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/regnety_160.sw_in12k_ft_in1k" rel="nofollow">regnety_160.sw_in12k_ft_in1k</a></td>
<td>288</td>
<td>1260.01</td>
<td>86.03</td>
<td>97.83</td>
<td>83.59</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/swin_large_patch4_window7_224.ms_in22k_ft_in1k" rel="nofollow">swin_large_patch4_window7_224.ms_in22k_ft_in1k</a></td>
<td>224</td>
<td>1243.73</td>
<td>86.33</td>
<td>97.88</td>
<td>196.53</td>
</tr>
<tr>
<td><strong><a href="http://hf.co/timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k" rel="nofollow">vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k</a></strong></td>
<td><strong>384</strong></td>
<td><strong>1214.59</strong></td>
<td><strong>87.44</strong></td>
<td><strong>98.26</strong></td>
<td><strong>64.27</strong></td>
</tr>
<tr>
<td><a href="http://hf.co/timm/deit3_base_patch16_384.fb_in22k_ft_in1k" rel="nofollow">deit3_base_patch16_384.fb_in22k_ft_in1k</a></td>
<td>384</td>
<td>1098.30</td>
<td>86.74</td>
<td>98.11</td>
<td>86.88</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/deit3_large_patch16_224.fb_in22k_ft_in1k" rel="nofollow">deit3_large_patch16_224.fb_in22k_ft_in1k</a></td>
<td>224</td>
<td>1042.41</td>
<td>86.99</td>
<td>98.24</td>
<td>304.37</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/vit_large_patch16_224.augreg_in21k_ft_in1k" rel="nofollow">vit_large_patch16_224.augreg_in21k_ft_in1k</a></td>
<td>224</td>
<td>1041.47</td>
<td>85.85</td>
<td>97.83</td>
<td>304.33</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288" rel="nofollow">seresnextaa101d_32x8d.sw_in12k_ft_in1k_288</a></td>
<td>320</td>
<td>1035.83</td>
<td>86.72</td>
<td>98.18</td>
<td>93.59</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/convnext_xlarge.fb_in22k_ft_in1k" rel="nofollow">convnext_xlarge.fb_in22k_ft_in1k</a></td>
<td>224</td>
<td>921.30</td>
<td>86.97</td>
<td>98.20</td>
<td>350.20</td>
</tr>
<tr>
<td><strong><a href="http://hf.co/timm/convnext_large.fb_in22k_ft_in1k" rel="nofollow">convnext_large.fb_in22k_ft_in1k</a></strong></td>
<td><strong>288</strong></td>
<td><strong>881.61</strong></td>
<td><strong>87.01</strong></td>
<td><strong>98.21</strong></td>
<td><strong>197.77</strong></td>
</tr>
<tr>
<td><strong><a href="http://hf.co/timm/caformer_m36.sail_in22k_ft_in1k_384" rel="nofollow">caformer_m36.sail_in22k_ft_in1k_384</a></strong></td>
<td><strong>384</strong></td>
<td><strong>794.45</strong></td>
<td><strong>87.47</strong></td>
<td><strong>98.31</strong></td>
<td><strong>56.20</strong></td>
</tr>
<tr>
<td><a href="http://hf.co/timm/efficientnet_b5.sw_in12k_ft_in1k" rel="nofollow">efficientnet_b5.sw_in12k_ft_in1k</a></td>
<td>448</td>
<td>729.86</td>
<td>85.89</td>
<td>97.74</td>
<td>30.39</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/convnext_xlarge.fb_in22k_ft_in1k" rel="nofollow">convnext_xlarge.fb_in22k_ft_in1k</a></td>
<td>288</td>
<td>559.14</td>
<td>87.37</td>
<td>98.33</td>
<td>350.20</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/swin_base_patch4_window12_384.ms_in22k_ft_in1k" rel="nofollow">swin_base_patch4_window12_384.ms_in22k_ft_in1k</a></td>
<td>384</td>
<td>522.86</td>
<td>86.44</td>
<td>98.07</td>
<td>87.90</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/convnext_large.fb_in22k_ft_in1k_384" rel="nofollow">convnext_large.fb_in22k_ft_in1k_384</a></td>
<td>384</td>
<td>500.83</td>
<td>87.46</td>
<td>98.38</td>
<td>197.77</td>
</tr>
<tr>
<td><strong><a href="http://hf.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k" rel="nofollow">maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k</a></strong></td>
<td><strong>384</strong></td>
<td><strong>456.17</strong></td>
<td><strong>87.48</strong></td>
<td><strong>98.37</strong></td>
<td><strong>116.09</strong></td>
</tr>
<tr>
<td><a href="http://hf.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k" rel="nofollow">coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k</a></td>
<td>384</td>
<td>404.42</td>
<td>87.40</td>
<td>98.31</td>
<td>73.88</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/seresnextaa201d_32x8d.sw_in12k_ft_in1k_384" rel="nofollow">seresnextaa201d_32x8d.sw_in12k_ft_in1k_384</a></td>
<td>384</td>
<td>365.65</td>
<td>87.31</td>
<td>98.33</td>
<td>149.39</td>
</tr>
<tr>
<td><strong><a href="http://hf.co/timm/deit3_large_patch16_384.fb_in22k_ft_in1k" rel="nofollow">deit3_large_patch16_384.fb_in22k_ft_in1k</a></strong></td>
<td><strong>384</strong></td>
<td><strong>342.41</strong></td>
<td><strong>87.73</strong></td>
<td><strong>98.51</strong></td>
<td><strong>304.76</strong></td>
</tr>
<tr>
<td><a href="http://hf.co/timm/vit_large_patch16_384.augreg_in21k_ft_in1k" rel="nofollow">vit_large_patch16_384.augreg_in21k_ft_in1k</a></td>
<td>384</td>
<td>338.21</td>
<td>87.09</td>
<td>98.31</td>
<td>304.72</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/swin_large_patch4_window12_384.ms_in22k_ft_in1k" rel="nofollow">swin_large_patch4_window12_384.ms_in22k_ft_in1k</a></td>
<td>384</td>
<td>315.38</td>
<td>87.14</td>
<td>98.23</td>
<td>196.74</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/swinv2_base_window12to24_192to384.ms_in22k_ft_in1k" rel="nofollow">swinv2_base_window12to24_192to384.ms_in22k_ft_in1k</a></td>
<td>384</td>
<td>297.03</td>
<td>87.14</td>
<td>98.23</td>
<td>87.92</td>
</tr>
<tr>
<td><a href="http://hf.co/timm/swinv2_large_window12to24_192to384.ms_in22k_ft_in1k" rel="nofollow">swinv2_large_window12to24_192to384.ms_in22k_ft_in1k</a></td>
<td>384</td>
<td>186.30</td>
<td>87.47</td>
<td>98.26</td>
<td>196.74</td>
</tr>
</tbody>
</table>
</div>
|
Introducing AuraFace: Open-Source Face Recognition and Identity Preservation Models | https://hf.co/blog/isidentical/auraface | Try It Out |
Efficient Deep Learning: A Comprehensive Overview of Optimization Techniques 👐 📚 | https://hf.co/blog/Isayoften/optimization-rush | References |
MicroJAX | https://hf.co/blog/joey00072/microjax | Pytree |
2D Parallelism using Ray PyTorch | https://hf.co/blog/huseinzol05/2d-parallelism-ray-pytorch | 2D Parallelism |
Social Bias NER with BERT | https://hf.co/blog/maximuspowers/bias-entity-recognition | Resources: |
Easy, Fast, and Effective Topic Modeling For Beginners with FASTopic | https://hf.co/blog/bobxwu/fastopic | Tutorial: Use FASTopic to analyze the News of the New York Times. |
Building DoRA Support for Embedding Layers in PEFT | https://hf.co/blog/ariG23498/peft-dora | Conclusion: The Joy of Contributing to Open Source |
How No-Code Platforms Are Making Tech More Accessible to Everyone | https://hf.co/blog/megoyaw3/no-code-platforms-makes-tech-more-accessible | Conclusion |
Processing Parquets 102 | https://hf.co/blog/hlky/processing-parquets-102 | Conclusion |
How to build an incremental Web Crawler with Apify | https://hf.co/blog/airabbitX/a-step-by-step-guide-to-integrating-apify-and-hugg | Advanced Setup with Follow-Up Task |
How to communicate in a Pull Request? | https://hf.co/blog/ariG23498/comm-pr |
<p>
<a href="https://cdn-uploads.huggingface.co/production/uploads/608aabf24955d2bfc3cd99c6/oJDsjjFA53jL5AEGUd0Ai.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/608aabf24955d2bfc3cd99c6/oJDsjjFA53jL5AEGUd0Ai.png"/></a></p>
<p>Hi there! I'm Aritra, and let me tell you, I didn't have a clue about Open Source contributions or GitHub until 2018. My first PR to a big repository was a simple typo fix in the TensorFlow documentation (you can check it out <a href="https://github.com/tensorflow/docs/pull/1631" rel="nofollow">here</a>, though it never got in).</p>
<p>Fast forward a few years, and I've been fortunate enough to contribute to some incredible libraries like <a href="https://github.com/huggingface/peft/pulls?q=author%3AariG23498+" rel="nofollow">peft</a>, <a href="https://github.com/huggingface/transformers/pulls?q=author%3AariG23498+" rel="nofollow">transformers</a>, and more. But enough about me — let's talk about something that’s really important when contributing to open source: how to communicate in a pull request (PR).</p>
<ol>
<li>Find or Create an Issue First</li>
</ol>
<p>Before diving into the code, the first thing you should do is search for an <em>existing</em> issue that you’d like to fix. If there isn’t one, create it yourself. This is your first step in engaging with the maintainers, who will likely jump in and brainstorm with you.</p>
<p>Trust me, this part of the process is incredibly rewarding. You get to interact with people who might be a lot smarter than you (I always think of it this way, which helps me stay grounded and not feel overwhelmed). You will most certainly learn a lot about the repository and how things work.</p>
<p>To see an example, <a href="https://github.com/huggingface/peft/issues/1940" rel="nofollow">here</a>'s where I learnt about a new LoRA technique which was mind blowing.</p>
<ol start="2">
<li>Crafting Your PR</li>
</ol>
<p>Once you’ve identified the issue and are ready to create a PR, take your time with the description. This is not just a formality; it’s your chance to clearly communicate what your PR is about. The <strong>title</strong> and <strong>description</strong> should be precise and to the point.</p>
<p>I have never been able to perfect this craft, but I always try. I take immense pride in <a href="https://github.com/keras-team/keras-nlp/pull/1662" rel="nofollow">this PR</a> description and title.</p>
<p>This clarity is crucial for maintainers and others in the community to understand your intentions and join the conversation. Who knows? Your PR might get reviewed by someone you admire, and that’ll make your day!</p>
<p><a href="https://x.com/BenjaminBossan" rel="nofollow">Benjamin</a> says, "My approach is to craft a well written commit message and then re-use it (maybe with a few alterations) as the PR description. That way, the effort to write a good description is rewarded twice, once on GH and once in the git history."</p>
<ol start="3">
<li>Overcommunicate (But in a Good Way)</li>
</ol>
<p>As my friend/mentor <a href="https://sayak.dev" rel="nofollow">Sayak Paul</a> often says, "Overcommunication never hurts; undercommunication always does." Be as detailed as possible in your PR conversation. I’m a bit biased here, but if your PR is related to deep learning or Python, consider including Colab notebooks that reviewers can easily run, or Gradio links to a working model. These small steps can make a huge difference in how your contribution is perceived.</p>
<ol start="4">
<li>Respect Others' Time</li>
</ol>
<p>Always remember that the maintainers and reviewers are busy people. Avoid asking open-ended questions that leave them guessing. Instead, try to solve the problem on your own first. If you get stuck, list out what you’ve already tried and where you’re facing issues. <a href="https://github.com/huggingface/peft/pull/2006#issuecomment-2301701483" rel="nofollow">Isolate</a> the problem and include relevant code snippets in your PR. This shows that you respect their time and effort.</p>
<ol start="5">
<li>Stay Humble and Respectful</li>
</ol>
<p>Finally, be kind and patient, especially when someone asks a "not so mature" question on your PR. This might be their first time contributing to something big, just like you were once. A little empathy goes a long way—offer a helpful hand instead of a harsh word. After all, we’re all here to learn and grow together.</p>
<p>Hope you like it and will follow this in your own Open Source endeavour. I will get it reviewed by my fellow Open Source wizards. I am sure there will be a lot of edits, so come back after a week to get some better tips and tricks!</p>
|
dstack: Your LLM Launchpad - From Fine-Tuning to Serving, Simplified | https://hf.co/blog/chansung/alignment-handbook-with-dstack | <strong>Bonus</strong> |
Is Prompt Caching the new RAG? | https://hf.co/blog/airabbitX/is-prompt-caching-the-new-rag |
<p>
Recently, Anthropic, the company behind Claude, announced a remarkable new feature called <a href="https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching?ref=airabbit.blog" rel="nofollow">Prompt Caching</a>. This breakthrough development makes the processing of lengthy documents more affordable than ever before, and it has the potential to revolutionize how we handle vast amounts of static information in AI conversations!<br/>Let's delve into the exciting implications this has for AI applications.</p>
<h1>What is Prompt Caching?</h1>
<p>Prompt Caching involves storing the system prompt, the static part of the conversation. This system prompt can include substantial content such as entire books, long research papers, or large codebases. Here's how it works (a code sketch follows the list below):</p>
<ol>
<li>The system prompt is cached on the first request, incurring a one-time cost.</li>
<li>Subsequent user queries only process the dynamic user input against this cached context.</li>
<li>This approach dramatically speeds up interactions and reduces costs for repeated queries.</li>
</ol>
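<p>To make this concrete, here is a minimal sketch of sending a cached system prompt with the Anthropic Python SDK. The file name and model name are placeholders, and the exact invocation has changed over time (early access went through a beta header), so check Anthropic's prompt caching documentation for the current details.</p>
<pre><code class="language-python">import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical large static document we want to ask many questions about.
long_document = open("manual.txt").read()

def ask(question: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed model; use any caching-capable Claude model
        max_tokens=512,
        system=[
            {
                "type": "text",
                "text": long_document,
                # Marks this block as cacheable; later calls with the same block hit the cache.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text

print(ask("Summarize chapter 3 of the manual."))
</code></pre>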
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#key-points-about-prompt-caching" id="key-points-about-prompt-caching" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Key Points About Prompt Caching
</span>
</h1>
<ul>
<li>System Prompt vs. User Input: The system prompt (static, cached) is separate from the user's input (dynamic, varies with each query).</li>
<li>Initial Caching Cost: The first time you cache the system prompt, it costs approximately 25% more than standard input pricing.</li>
<li>Subsequent Query Savings: After caching, processing new queries against the cached context costs only about 10% of the usual input pricing.</li>
<li>Time Limitation: The cache lasts for 5 minutes. After this period, the system prompt needs to be cached again if you want to continue using it.</li>
</ul>
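<p>Using only the relative multipliers above (roughly 1.25x the normal input price to write the cache, roughly 0.1x to read it), a quick back-of-the-envelope calculation shows the savings; the token and query counts below are made-up numbers for illustration.</p>
<pre><code class="language-python"># Relative cost of N queries against a large static prompt,
# measured in multiples of the normal input-token price (the short user questions are ignored).
prompt_tokens = 100_000  # hypothetical size of the cached system prompt
queries = 10

without_cache = queries * prompt_tokens * 1.00                            # full prompt re-sent every time
with_cache = prompt_tokens * 1.25 + (queries - 1) * prompt_tokens * 0.10  # one cache write, then cache reads

print(f"without caching: {without_cache:,.0f} token-price units")  # 1,000,000
print(f"with caching:    {with_cache:,.0f} token-price units")     # 215,000 -> roughly a 78% reduction
</code></pre>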
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#examples" id="examples" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Examples
</span>
</h1>
<p>I made a Gradio app on HuggingFace with <a href="https://huggingface.co/spaces/airabbitX/claude_caching?ref=airabbit.blog">a simple chat interface</a> that uses the new caching API.</p>
<p>In this example, I uploaded a comprehensive manual from a GitHub repo (LLaMA-Factory) and asked some questions.</p>
<p><a href="https://miro.medium.com/v2/resize:fit:1400/0*hZ3pHBP5ALftlKxy.png" rel="nofollow"><img alt="" src="https://miro.medium.com/v2/resize:fit:1400/0*hZ3pHBP5ALftlKxy.png"/></a></p>
<p>The system prompt is written to the cache on the first question, so at this point the cache-read count is still zero.</p>
<p><a href="https://miro.medium.com/v2/resize:fit:1400/0*q-6fRy61W-2xj5Wp.png" rel="nofollow"><img alt="" src="https://miro.medium.com/v2/resize:fit:1400/0*q-6fRy61W-2xj5Wp.png"/></a></p>
<p>After that, the cached version is used, and the response is much faster and cheaper (10% of the usual cost for input tokens).</p>
<p><a href="https://miro.medium.com/v2/resize:fit:1400/0*v8ao71oDv_BAlgcZ.png" rel="nofollow"><img alt="" src="https://miro.medium.com/v2/resize:fit:1400/0*v8ao71oDv_BAlgcZ.png"/></a></p>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#possible-use-cases-for-prompt-caching" id="possible-use-cases-for-prompt-caching" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Possible Use Cases for Prompt Caching
</span>
</h1>
<ol>
<li>Document Analysis: Cache entire books or long documents. Users can ask multiple questions about the content without reprocessing the whole text each time.</li>
<li>Code Review: Store large codebases in the cache. Developers can query about different parts of the code quickly and cheaply.</li>
<li>Research Assistance: Cache comprehensive research papers or datasets. Researchers can explore various aspects of the data without repeated processing costs.</li>
<li>Legal Document Processing: Store entire legal codes or case law databases. Lawyers can quickly query for relevant information at a fraction of the usual cost.</li>
<li>Educational Tools: Cache textbooks or course materials. Students can ask numerous questions about the content, making interactive learning more feasible and affordable.</li>
</ol>
<p>Please note that there are some limitations to keep in mind with prompt caching. The cache only remains valid for 5 minutes, and it's not yet compatible with all Claude models.</p>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#conclusion" id="conclusion" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Conclusion
</span>
</h1>
<p>Prompt Caching is a major step forward in making AI interactions more efficient and cost-effective, particularly in applications dealing with large, static datasets. By dramatically cutting the time and cost of subsequent queries, it unlocks new possibilities for AI-driven analysis, learning, and information processing across various industries.</p>
|
Using Writer Framework with Hugging Face Spaces | https://hf.co/blog/samjulien/writer-framework-spaces | Conclusion |
What are Embeddings and Vector Databases? | https://hf.co/blog/qdrddr/what-are-embeddings-and-vector-databases | Advantages & Disadvantages of Embeddings: |
Extractive Question Answering with AutoTrain | https://hf.co/blog/abhishek/extractive-qa-autotrain | Training the model on Hugging Face Hub |
How to get GPT to talk like a consultant | https://hf.co/blog/airabbitX/how-to-get-gpt-to-talk-like-a-consultant | Conclusion |
Web Scraping 102 | https://hf.co/blog/hlky/web-scraping-102 | Stage 2: Retrieval |
Self-Hosting LLaMA 3.1 70B (or any ~70B LLM) Affordably | https://hf.co/blog/abhinand/self-hosting-llama3-1-70b-affordably | Conclusion |
Tensor Parallelism | https://hf.co/blog/huseinzol05/tensor-parallelism | Production API |
Web Scraping 101 | https://hf.co/blog/hlky/web-scraping-101 | Stage 1b: More Recon! |
Llama-3.1-Storm-8B: Improved SLM with Self-Curation + Model Merging | https://hf.co/blog/akjindal53244/llama31-storm8b | Appendix |
∞🧙🏼♂️AnyClassifier - Generating Synthetic Data For Text Classification | https://hf.co/blog/kenhktsui/anyclassifier | Citation |
Data Formats 101 | https://hf.co/blog/hlky/data-formats-101 | <strong>Parquet</strong> |
Processing Parquets 101 | https://hf.co/blog/hlky/processing-parquets-101 | Conclusion |
Outperforming Claude 3.5 Sonnet with Phi-3-mini-4k for graph entity relationship extraction tasks | https://hf.co/blog/rcaulk/phi-3-mini-4k-instruct-graph | Models |
I Trained a 2D Game Animation Generation Model to Create Complex, Cool Game Actions (Fully Open-Source) | https://hf.co/blog/lyogavin/godmoeanimation | 07 Business Opportunities |
Create Dynamic Typed Videos with 'Type Byte🐧' | https://hf.co/blog/prithivMLmods/type-byte | <strong>Try It Out!</strong> |
Perspectives for first principles prompt engineering | https://hf.co/blog/KnutJaegersberg/first-principles-prompt-engineering | References |
Powering the Future: Be.Ta Labs’ Revolutionary 100% Solar-Powered AI Operation | https://hf.co/blog/Severian/powering-the-future-beta-labs-revolutionary-100-so | <strong>Join the Green AI Revolution</strong> |
**What** is Retrieval-based Voice Conversion WebUI? | https://hf.co/blog/Blane187/what-is-rvc | Conclusion |
BERT for Bias Detection in Text | https://hf.co/blog/maximuspowers/bias-detection-in-text | What's Next: |
RAG vs Fine-Tuning for LLMs: A Comprehensive Guide with Examples | https://hf.co/blog/airabbitX/rag-vs-fine-tuning-for-llms-a-com | Choosing the Right Approach |
Deploying Hugging Face models with Viam: Use models on any robot in the real world | https://hf.co/blog/ariellemadeit/deploy-models-with-viam | Next steps |
How to Set Up and Run Ollama on a GPU-Powered VM (vast.ai) | https://hf.co/blog/airabbitX/how-to-set-up-and-run-ollama |
<p>
In this tutorial, we'll walk you through the process of setting up and using Ollama for private model inference on a VM with a GPU, either on your local machine or a rented VM from <a href="https://cloud.vast.ai/?ref_id=145250&ref=airabbit.blog" rel="nofollow">Vast.ai</a> or <a href="https://runpod.io/?ref=7su8gs12" rel="nofollow">Runpod.io</a>. Ollama allows you to run models privately, ensuring data security and faster inference times thanks to the power of GPUs. By leveraging a GPU-powered VM, you can significantly improve the performance and efficiency of your model inference tasks.</p>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#outline" id="outline" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Outline
</span>
</h1>
<ol>
<li>Set up a VM with GPU on Vast.ai</li>
<li>Start Jupyter Terminal</li>
<li>Install Ollama</li>
<li>Run Ollama Serve</li>
<li>Test Ollama with a model</li>
<li>(Optional) using your own model</li>
</ol>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#setting-up-a-vm-with-gpu-on-vastai" id="setting-up-a-vm-with-gpu-on-vastai" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Setting Up a VM with GPU on Vast.ai
</span>
</h1>
<p><strong>1. Create a VM with GPU:</strong> Visit <a href="https://cloud.vast.ai/create/?ref=airabbit.blog" rel="nofollow">Vast.ai</a> to create your VM. Choose a VM with at least 30 GB of storage to accommodate the models; this ensures you have enough space for installation and model storage. Select a VM that costs less than $0.30 per hour to keep the setup cost-effective.</p>
<p><a href="https://miro.medium.com/v2/resize:fit:1400/1*BmfKGSHXTwv552eWCYyzSA.png" rel="nofollow"><img alt="" src="https://miro.medium.com/v2/resize:fit:1400/1*BmfKGSHXTwv552eWCYyzSA.png"/></a></p>
<p><strong>2. Start Jupyter Terminal:</strong> Once your VM is up and running, start Jupyter and open a terminal within it.</p>
<p><a href="https://miro.medium.com/v2/resize:fit:1400/1*5V3zvvO8WMCFR9kr6WXdrg.png" rel="nofollow"><img alt="" src="https://miro.medium.com/v2/resize:fit:1400/1*5V3zvvO8WMCFR9kr6WXdrg.png"/></a></p>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#downloading-and-running-ollama" id="downloading-and-running-ollama" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Downloading and Running Ollama
</span>
</h1>
<ol>
<li><strong>Start Jupyter Terminal:</strong> Once your VM is up and running, start Jupyter and open a terminal within it. This is the easiest method to get started. Alternatively, you can SSH in from your local machine, for example with VSCode, but you will need to create an SSH key to use it.</li>
</ol>
<p><a href="https://miro.medium.com/v2/resize:fit:1400/1*5FThAEYy7bY-zrOqdexX5g.png" rel="nofollow"><img alt="" src="https://miro.medium.com/v2/resize:fit:1400/1*5FThAEYy7bY-zrOqdexX5g.png"/></a></p>
<ol start="2">
<li><strong>Install Ollama:</strong> Open the terminal in Jupyter and run the following command to install Ollama:</li>
</ol>
<pre><code class="language-bash">curl -fsSL https://ollama.com/install.sh | sh
</code></pre>
<p><strong>3. Run Ollama Serve:</strong> After installation, start the Ollama service by running:</p>
<pre><code class="language-bash">ollama serve &
</code></pre>
<p>Ensure there are no GPU errors. If there are issues, the response will be slow when interacting with the model.</p>
<p><strong>4. Test Ollama with a Model:</strong> Test the setup by running a sample model like Mistral:</p>
<pre><code class="language-bash">ollama run mistral
</code></pre>
<p>You can now start chatting with the model to ensure everything is working correctly.</p>
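<p>Besides the interactive CLI, the <code>ollama serve</code> process also exposes a local HTTP API on port 11434, so you can query the model programmatically. Here is a minimal Python sketch using the <code>requests</code> library; the prompt is just an example.</p>
<pre><code class="language-python">import requests

# Ollama listens on localhost:11434 by default while `ollama serve` is running.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",  # any model you have already pulled or run
        "prompt": "Explain in one sentence what a GPU is.",
        "stream": False,     # return a single JSON object instead of a token stream
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["response"])
</code></pre>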
<p><strong>Optional (Check GPU usage)</strong></p>
<p><strong>Check GPU Utilization:</strong> During the inference (last step), check whether the GPU is being utilized by running <code>nvidia-smi</code>. Ensure that the memory utilization is greater than 0%; this indicates that the GPU is being used for the inference process.</p>
<p><a href="https://miro.medium.com/v2/resize:fit:1400/1*9bZnZptF5Wj4eXdo7Y7haw.png" rel="nofollow"><img alt="" src="https://miro.medium.com/v2/resize:fit:1400/1*9bZnZptF5Wj4eXdo7Y7haw.png"/></a></p>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#using-your-own-hugging-face-model-with-ollama" id="using-your-own-hugging-face-model-with-ollama" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Using Your Own Hugging Face Model with Ollama
</span>
</h1>
<p><strong>1. Install Hugging Face CLI:</strong> If you want to use your own model from Hugging Face, first install the Hugging Face CLI. Here we will use a fine-tuned Mistral model as an example: <strong>TheBloke/em_german_mistral_v01-GGUF</strong> (<code>em_german_mistral_v01.Q4_K_M.gguf</code>).</p>
<p><strong>2. Download Your Model:</strong> Download your desired model from Hugging Face. For example, to download a fine-tuned Mistral model:</p>
<pre><code class="language-bash">pip3 install huggingface-hub
# Download the fine-tuned Mistral GGUF model
huggingface-cli download TheBloke/em_german_mistral_v01-GGUF em_german_mistral_v01.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
</code></pre>
<p><strong>3. Create a Model File:</strong> Create a model config file named <code>Modelfile</code> with the following content:</p>
<pre><code>FROM em_german_mistral_v01.Q4_K_M.gguf
# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 0
# # set the system message
# SYSTEM """
# You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
# """
</code></pre>
<p><strong>4. Instruct Ollama to Create the Model:</strong> Create the custom model using Ollama with the command:</p>
<pre><code>ollama create mymodel -f Modelfile
</code></pre>
<p><strong>5. Run Your Custom Model:</strong> Run your custom model using:</p>
<pre><code>ollama run mymodel
</code></pre>
<p>By following these steps, you can effectively utilize Ollama for private model inference on a VM with GPU, ensuring secure and efficient operations for your machine learning projects.</p>
<p>Happy prompting!</p>
<p><a href="https://airabbit.blog/how-to-provide-inference-pay-per-use/" rel="nofollow"></a></p>
|
Deploying a Private Hugging Face Model for Inference with RunPod and AnythingLLM (serverless) | https://hf.co/blog/airabbitX/deploy-hf-private-model | Conclusion |
The Workflow of PEFT | https://hf.co/blog/ariG23498/workflow-peft | Conclusion |
Parquet in Action: A Beginners Guide | https://hf.co/blog/cfahlgren1/intro-to-parquet-format | Reading Entire Footer |
20 New SDXL Fine Tuning Tests and Their Results (Better Workflow Obtained and Published) | https://hf.co/blog/MonsterMMORPG/20-new-sdxl-training-experiments-new-workflow | Old Best Config VS New Best Config |
Context Parallelism | https://hf.co/blog/huseinzol05/context-parallelism | Improvement |
⭐ PySpark and 🤗 Hugging Face Parquet Files | https://hf.co/blog/asoria/pyspark-hugging-face-datasets | 6. Conclusion |
Advanced AI-Driven Code Analysis: A Multi-Agent Framework for Comprehensive Software Optimization | https://hf.co/blog/Alyosha11/forker | Conclusion |
Bulleted Notes eBook Summary: A Different Way to Chat with PDF | https://hf.co/blog/cognitivetech/bulleted-notes-ebook-summary | I hope you'll find this tool as invaluable as I do. |
Your AI, Everywhere | https://hf.co/blog/wolfram/your-ai-everywhere | Conclusion |
Unlocking Creativity with Text-to-Image Generation: Exploring LoRA Models and Styles | https://hf.co/blog/prithivMLmods/lora-adp-01 | Conclusion |
Batch size 30 AdamW vs Batch Size 1 Adafactor SDXL Training Comparison | https://hf.co/blog/MonsterMMORPG/adamw-vs-adafactor-sdxl-fine-tuning-comparison |
<p style="margin-left:0px;">I was hanging OneTrainer Discord yesterday and saw one of the very old and experienced user comment. He was saying AdamW is better than Adafactor. So I have asked his config which you can see here : <a href="https://gist.github.com/FurkanGozukara/5e9ee7d2b2070abb9a173dab342e1221" rel="nofollow"><u>https://gist.github.com/FurkanGozukara/5e9ee7d2b2070abb9a173dab342e1221</u></a></p>
<p style="margin-left:0px;">I have done my AdamW training with this config on RTX A6000 GPU on Massed Compute. Used batch size 30 since I had my regular 15 training images and every epoch 15 reg images. So every epoch was total 30 images thus in 1 step with batch size 30, it was able to train 1 epoch. It consumed 47 GB VRAM.</p>
<p style="margin-left:0px;">The experimental used training dataset is shared below — it is a bad dataset with purpose because if works on bad dataset it will work even better on a better dataset — machine learning general rule — Garbage in, garbage out.</p>
<p style="margin-left:0px;">Why this dataset is bad? It has repeating background, repeating clothes, lacking poses.</p>
<p style="margin-left:auto;">
<picture>
<source sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" srcset="https://miro.medium.com/v2/resize:fit:640/format:webp/1*wbLAMUrM5iQWLa3YtOI3nA.png 640w, https://miro.medium.com/v2/resize:fit:720/format:webp/1*wbLAMUrM5iQWLa3YtOI3nA.png 720w, https://miro.medium.com/v2/resize:fit:750/format:webp/1*wbLAMUrM5iQWLa3YtOI3nA.png 750w, https://miro.medium.com/v2/resize:fit:786/format:webp/1*wbLAMUrM5iQWLa3YtOI3nA.png 786w, https://miro.medium.com/v2/resize:fit:828/format:webp/1*wbLAMUrM5iQWLa3YtOI3nA.png 828w, https://miro.medium.com/v2/resize:fit:1100/format:webp/1*wbLAMUrM5iQWLa3YtOI3nA.png 1100w, https://miro.medium.com/v2/resize:fit:1400/format:webp/1*wbLAMUrM5iQWLa3YtOI3nA.png 1400w" type="image/webp"/>
<source sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" srcset="https://miro.medium.com/v2/resize:fit:640/1*wbLAMUrM5iQWLa3YtOI3nA.png 640w, https://miro.medium.com/v2/resize:fit:720/1*wbLAMUrM5iQWLa3YtOI3nA.png 720w, https://miro.medium.com/v2/resize:fit:750/1*wbLAMUrM5iQWLa3YtOI3nA.png 750w, https://miro.medium.com/v2/resize:fit:786/1*wbLAMUrM5iQWLa3YtOI3nA.png 786w, https://miro.medium.com/v2/resize:fit:828/1*wbLAMUrM5iQWLa3YtOI3nA.png 828w, https://miro.medium.com/v2/resize:fit:1100/1*wbLAMUrM5iQWLa3YtOI3nA.png 1100w, https://miro.medium.com/v2/resize:fit:1400/1*wbLAMUrM5iQWLa3YtOI3nA.png 1400w"/><img alt="" class="image_resized" height="199" src="https://miro.medium.com/v2/resize:fit:1313/1*wbLAMUrM5iQWLa3YtOI3nA.png" style="height:auto;width:680px;" width="700"/>
</picture>
</p>
<p style="margin-left:0px;">Then I did same dataset and concepts training with my config shared in below post. The training name is default:</p>
<p style="margin-left:0px;"><a href="https://www.patreon.com/posts/96028218" rel="nofollow"><u>https://www.patreon.com/posts/96028218</u></a></p>
<p style="margin-left:0px;">If you are not my patreon supporter entire config shown in below tutorial video:</p>
<p style="margin-left:0px;"><a href="https://youtu.be/0t5l6CP9eBg" rel="nofollow"><u>https://youtu.be/0t5l6CP9eBg</u></a></p>
<p style="margin-left:auto;"> </p>
<p style="margin-left:0px;">Since AdamW uses batch size 30, I have trained it up to 750 epochs because bigger batch size = lower LR effect. Moreover it had lower LR than usual and didn’t train Text Encoder. Saved a checkpoint every 150 epochs. 1 step duration was 11.5 second on RTX A6000 on Massed Compute. So with 1 step it trained 1 epoch. Watch above tutorial you will understand 15 train images + 15 reg images.</p>
<p style="margin-left:0px;">With my best config shared on Patreon and also in above YouTube tutorial, I did up to 150 epochs. Saved a checkpoint every 30 epochs. 1 step was taking 2 second thus every epoch was taking 60 seconds on RTX A6000 on Massed Compute (slower than usual for some reason). So total training times of both training was almost equal.</p>
<p style="margin-left:0px;">AdamW uses more VRAM than Adafactor. Speed depends on using batch size, xformers and such. In my training i don’t use xformers and i do batch size 1. Because lower batch size = better quality. Thus if you need speed go up in batch size until you hit maximum speed and no-more.</p>
<p style="margin-left:0px;">So here comes the results. My results are far superior in resemblance. You can see the full grid comparison file in below link : <a href="https://huggingface.co/MonsterMMORPG/Generative-AI/resolve/main/AdamW_vs_Adafactor.png?download=true" rel="noopener noreferrer"><u>https://huggingface.co/MonsterMMORPG/Generative-AI/resolve/main/AdamW_vs_Adafactor.png</u></a></p>
<p style="margin-left:0px;">The grid file is 321 MB and 11458 px vs 22772 px resolution. It compares 20 special prompts + 1 generic prompt so that you can see if model is totally cooked or not.</p>
<p style="margin-left:0px;">Here below 4 grids of the above comparison as JPG.</p>
<p style="margin-left:auto;">
<picture>
<source sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" srcset="https://miro.medium.com/v2/resize:fit:640/format:webp/1*8thOWfog8BjQZaTI-s4sRw.jpeg 640w, https://miro.medium.com/v2/resize:fit:720/format:webp/1*8thOWfog8BjQZaTI-s4sRw.jpeg 720w, https://miro.medium.com/v2/resize:fit:750/format:webp/1*8thOWfog8BjQZaTI-s4sRw.jpeg 750w, https://miro.medium.com/v2/resize:fit:786/format:webp/1*8thOWfog8BjQZaTI-s4sRw.jpeg 786w, https://miro.medium.com/v2/resize:fit:828/format:webp/1*8thOWfog8BjQZaTI-s4sRw.jpeg 828w, https://miro.medium.com/v2/resize:fit:1100/format:webp/1*8thOWfog8BjQZaTI-s4sRw.jpeg 1100w, https://miro.medium.com/v2/resize:fit:1400/format:webp/1*8thOWfog8BjQZaTI-s4sRw.jpeg 1400w" type="image/webp"/>
<source sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" srcset="https://miro.medium.com/v2/resize:fit:640/1*8thOWfog8BjQZaTI-s4sRw.jpeg 640w, https://miro.medium.com/v2/resize:fit:720/1*8thOWfog8BjQZaTI-s4sRw.jpeg 720w, https://miro.medium.com/v2/resize:fit:750/1*8thOWfog8BjQZaTI-s4sRw.jpeg 750w, https://miro.medium.com/v2/resize:fit:786/1*8thOWfog8BjQZaTI-s4sRw.jpeg 786w, https://miro.medium.com/v2/resize:fit:828/1*8thOWfog8BjQZaTI-s4sRw.jpeg 828w, https://miro.medium.com/v2/resize:fit:1100/1*8thOWfog8BjQZaTI-s4sRw.jpeg 1100w, https://miro.medium.com/v2/resize:fit:1400/1*8thOWfog8BjQZaTI-s4sRw.jpeg 1400w"/><img alt="" class="image_resized" height="80" src="https://miro.medium.com/v2/resize:fit:1313/1*8thOWfog8BjQZaTI-s4sRw.jpeg" style="height:auto;width:680px;" width="700"/>
</picture>
</p>
<p style="margin-left:auto;">
<picture>
<source sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" srcset="https://miro.medium.com/v2/resize:fit:640/format:webp/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 640w, https://miro.medium.com/v2/resize:fit:720/format:webp/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 720w, https://miro.medium.com/v2/resize:fit:750/format:webp/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 750w, https://miro.medium.com/v2/resize:fit:786/format:webp/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 786w, https://miro.medium.com/v2/resize:fit:828/format:webp/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 828w, https://miro.medium.com/v2/resize:fit:1100/format:webp/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 1100w, https://miro.medium.com/v2/resize:fit:1400/format:webp/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 1400w" type="image/webp"/>
<source sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" srcset="https://miro.medium.com/v2/resize:fit:640/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 640w, https://miro.medium.com/v2/resize:fit:720/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 720w, https://miro.medium.com/v2/resize:fit:750/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 750w, https://miro.medium.com/v2/resize:fit:786/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 786w, https://miro.medium.com/v2/resize:fit:828/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 828w, https://miro.medium.com/v2/resize:fit:1100/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 1100w, https://miro.medium.com/v2/resize:fit:1400/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg 1400w"/><img alt="" class="image_resized" height="83" src="https://miro.medium.com/v2/resize:fit:1313/1*x9FHs2Gq7d0VT3Zp2bcjjw.jpeg" style="height:auto;width:680px;" width="700"/>
</picture>
</p>
<p style="margin-left:auto;">
<picture>
<source sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" srcset="https://miro.medium.com/v2/resize:fit:640/format:webp/1*WexV4iY-FesdZ__FsrlIKg.jpeg 640w, https://miro.medium.com/v2/resize:fit:720/format:webp/1*WexV4iY-FesdZ__FsrlIKg.jpeg 720w, https://miro.medium.com/v2/resize:fit:750/format:webp/1*WexV4iY-FesdZ__FsrlIKg.jpeg 750w, https://miro.medium.com/v2/resize:fit:786/format:webp/1*WexV4iY-FesdZ__FsrlIKg.jpeg 786w, https://miro.medium.com/v2/resize:fit:828/format:webp/1*WexV4iY-FesdZ__FsrlIKg.jpeg 828w, https://miro.medium.com/v2/resize:fit:1100/format:webp/1*WexV4iY-FesdZ__FsrlIKg.jpeg 1100w, https://miro.medium.com/v2/resize:fit:1400/format:webp/1*WexV4iY-FesdZ__FsrlIKg.jpeg 1400w" type="image/webp"/>
<source sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" srcset="https://miro.medium.com/v2/resize:fit:640/1*WexV4iY-FesdZ__FsrlIKg.jpeg 640w, https://miro.medium.com/v2/resize:fit:720/1*WexV4iY-FesdZ__FsrlIKg.jpeg 720w, https://miro.medium.com/v2/resize:fit:750/1*WexV4iY-FesdZ__FsrlIKg.jpeg 750w, https://miro.medium.com/v2/resize:fit:786/1*WexV4iY-FesdZ__FsrlIKg.jpeg 786w, https://miro.medium.com/v2/resize:fit:828/1*WexV4iY-FesdZ__FsrlIKg.jpeg 828w, https://miro.medium.com/v2/resize:fit:1100/1*WexV4iY-FesdZ__FsrlIKg.jpeg 1100w, https://miro.medium.com/v2/resize:fit:1400/1*WexV4iY-FesdZ__FsrlIKg.jpeg 1400w"/><img alt="" class="image_resized" height="81" src="https://miro.medium.com/v2/resize:fit:1313/1*WexV4iY-FesdZ__FsrlIKg.jpeg" style="height:auto;width:680px;" width="700"/>
</picture>
</p>
<p style="margin-left:auto;">
<picture>
<source sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" srcset="https://miro.medium.com/v2/resize:fit:640/format:webp/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 640w, https://miro.medium.com/v2/resize:fit:720/format:webp/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 720w, https://miro.medium.com/v2/resize:fit:750/format:webp/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 750w, https://miro.medium.com/v2/resize:fit:786/format:webp/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 786w, https://miro.medium.com/v2/resize:fit:828/format:webp/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 828w, https://miro.medium.com/v2/resize:fit:1100/format:webp/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 1100w, https://miro.medium.com/v2/resize:fit:1400/format:webp/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 1400w" type="image/webp"/>
<source sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" srcset="https://miro.medium.com/v2/resize:fit:640/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 640w, https://miro.medium.com/v2/resize:fit:720/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 720w, https://miro.medium.com/v2/resize:fit:750/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 750w, https://miro.medium.com/v2/resize:fit:786/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 786w, https://miro.medium.com/v2/resize:fit:828/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 828w, https://miro.medium.com/v2/resize:fit:1100/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 1100w, https://miro.medium.com/v2/resize:fit:1400/1*kO4I2AP4hieZwAY4ay1wlg.jpeg 1400w"/><img alt="" class="image_resized" height="77" src="https://miro.medium.com/v2/resize:fit:1313/1*kO4I2AP4hieZwAY4ay1wlg.jpeg" style="height:auto;width:680px;" width="700"/>
</picture>
</p>
<p style="margin-left:0px;">After 450 epoch AdamW training became totally cooked and the resemblance is still way behind the Adafactor training. I am yet once again to find a better config than my researched config. I have done over 150 full trainings to find my best config :)</p>
<p style="margin-left:0px;">Adafactor normally a dynamic LR optimizer but I use it in a special static way. My config is very powerful and allows you to train more or less until becomes cooked or undertrained.</p>
<p style="margin-left:0px;">I am open to consultations as well. You can join our discord channel and message me or find me on LinkedIn.</p>
<p style="margin-left:0px;"><a href="https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/Patreon-Posts-Index.md" rel="nofollow"><u>Patreon exclusive posts index</u></a> to find our scripts easily, <a href="https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/Generative-AI-Updates-And-News.md" rel="nofollow"><u>Patreon scripts updates history</u></a> to see which updates arrived to which scripts and amazing <a href="https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/Amazing-Generative-AI-Scripts.md" rel="nofollow"><u>Patreon special generative scripts list</u></a> that you can use in any of your task.</p>
<p style="margin-left:0px;">Join discord to get help, chat, discuss and also tell me your discord username to get your special rank : <a href="https://discord.com/servers/software-engineering-courses-secourses-772774097734074388" rel="nofollow"><u>SECourses Discord</u></a></p>
<p style="margin-left:0px;">Please also Star, Watch and Fork our <a href="https://github.com/FurkanGozukara/Stable-Diffusion" rel="nofollow"><u>Stable Diffusion & Generative AI</u></a> GitHub repository and join our <a href="https://www.reddit.com/r/SECourses/" rel="nofollow"><u>Reddit subreddit</u></a> and follow me on <a href="https://www.linkedin.com/in/furkangozukara/" rel="nofollow"><u>LinkedIn </u></a>(my real profile)</p>
<p style="margin-left:auto;">
<picture>
<source sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" srcset="https://miro.medium.com/v2/resize:fit:640/format:webp/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 640w, https://miro.medium.com/v2/resize:fit:720/format:webp/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 720w, https://miro.medium.com/v2/resize:fit:750/format:webp/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 750w, https://miro.medium.com/v2/resize:fit:786/format:webp/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 786w, https://miro.medium.com/v2/resize:fit:828/format:webp/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 828w, https://miro.medium.com/v2/resize:fit:1100/format:webp/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 1100w, https://miro.medium.com/v2/resize:fit:1400/format:webp/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 1400w" type="image/webp"/>
<source sizes="(min-resolution: 4dppx) and (max-width: 700px) 50vw, (-webkit-min-device-pixel-ratio: 4) and (max-width: 700px) 50vw, (min-resolution: 3dppx) and (max-width: 700px) 67vw, (-webkit-min-device-pixel-ratio: 3) and (max-width: 700px) 65vw, (min-resolution: 2.5dppx) and (max-width: 700px) 80vw, (-webkit-min-device-pixel-ratio: 2.5) and (max-width: 700px) 80vw, (min-resolution: 2dppx) and (max-width: 700px) 100vw, (-webkit-min-device-pixel-ratio: 2) and (max-width: 700px) 100vw, 700px" srcset="https://miro.medium.com/v2/resize:fit:640/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 640w, https://miro.medium.com/v2/resize:fit:720/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 720w, https://miro.medium.com/v2/resize:fit:750/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 750w, https://miro.medium.com/v2/resize:fit:786/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 786w, https://miro.medium.com/v2/resize:fit:828/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 828w, https://miro.medium.com/v2/resize:fit:1100/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 1100w, https://miro.medium.com/v2/resize:fit:1400/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg 1400w"/><img alt="" class="image_resized" height="394" src="https://miro.medium.com/v2/resize:fit:1313/1*UOsrwEuZ2BrTJYlMlxKChg.jpeg" style="height:auto;width:680px;" width="700"/>
</picture>
</p>
<p style="margin-left:24px;"> </p> |
The Myth of Running Out of Data: Why Infinite Math Makes AI Training Limitless | https://hf.co/blog/TuringsSolutions/runningoutofdatamyth |
<p>
The rapid advancement of artificial intelligence (AI) has ignited a fascinating debate: Are we running out of data to fuel its growth? Some experts express concern that the vast amounts of text and images used for AI training are finite, potentially hindering future progress. However, this notion overlooks a fundamental truth: We can never truly run out of data because we can always backfill with math, and math is infinite.</p>
<p><strong>The Power of Mathematical Data</strong></p>
<p>Mathematical data is not just numbers and equations; it's a universe of patterns, relationships, and structures. From simple arithmetic to complex calculus, math offers endless possibilities for generating data. We can create synthetic datasets, model complex systems, and simulate real-world scenarios, all using the language of mathematics.</p>
<p><strong>Why Math is Infinite for AI Training</strong></p>
<p>The infinite nature of math stems from its ability to generate new problems, datasets, and simulations. Every mathematical equation, every geometric figure, every statistical distribution is a potential data point for AI training. The more complex the math, the richer and more diverse the data becomes.</p>
<p>Consider the field of fractal geometry, where infinitely complex patterns emerge from simple mathematical rules. These patterns can be used to generate vast amounts of visual data for training AI models in image recognition, pattern analysis, and even artistic creation.</p>
<p>Similarly, the field of numerical simulations allows us to model complex systems, such as weather patterns, financial markets, or even the behavior of subatomic particles. These simulations generate massive amounts of data that can be used to train AI models for prediction, optimization, and decision-making.</p>
<p><strong>Beyond Text and Images: The Diversity of Mathematical Data</strong></p>
<p>Mathematical data is not limited to numbers and equations. It encompasses a wide range of formats, including graphs, matrices, tensors, and even topological structures. This diversity of formats allows us to represent complex relationships and patterns that might not be easily captured by text or images alone.</p>
<p>For example, graph theory, a branch of mathematics that deals with networks of relationships, can be used to represent social networks, transportation networks, or even the connections between neurons in the brain. These graph-based representations can be used to train AI models for tasks such as community detection, route optimization, or even brain mapping.</p>
<p><strong>The Future of AI Training with Mathematical Data</strong></p>
<p>As AI continues to evolve, the importance of mathematical data will only grow. The ability to generate infinite amounts of diverse and complex data through mathematics will be crucial for training ever more sophisticated AI models. </p>
<p>Moreover, the integration of mathematical reasoning with machine learning algorithms is already leading to breakthroughs in fields such as automated theorem proving, drug discovery, and materials science. This synergy between math and AI is poised to revolutionize not just AI research but also a wide range of scientific and technological disciplines.</p>
<p>In conclusion, the notion that we are running out of data for AI training is a misconception. The infinite nature of mathematics ensures that we have an inexhaustible source of data for fueling AI's growth. By embracing the power of mathematical data, we can unlock the full potential of AI and pave the way for a future where intelligent machines can tackle increasingly complex challenges and help us solve some of humanity's most pressing problems.</p>
|
ArabicWeb24: Creating a High Quality Arabic Web-only Pre-training Dataset | https://hf.co/blog/MayFarhat/arabicweb24 | 5. Citation |
Agentic Task Delegation - Making Agents whole again | https://hf.co/blog/adarshxs/agentic-task-delegation | Conclusion |
HelpingAI2-6B : Revolutionizing Conversational AI with Emotional Intelligence | https://hf.co/blog/Abhaykoul/helpingai-6b | Buy Me a Coffee: |
Creating and Uploading a Dataset with Unsloth: An Adventure in Wonderland | https://hf.co/blog/dimentox/unsloth-mistral-training | Complete Code Notebook |
The case for specialized pre-training: ultra-fast foundation models for dedicated tasks | https://hf.co/blog/Pclanglais/specialized-pre-training | The case for language model specialization |
Local AI with Docker's Testcontainers | https://hf.co/blog/Tonic/localai-testcontainers | Ask Questions Below ! 👇🏻 |
How to use Instruct Embeddings Correctly | https://hf.co/blog/Tonic/instruct-embeddings-and-advanced-rag | What You DO WANT To Be Doing in RAG |
9 Notable Quotes From Mark Zuckerberg's Essay in Favor of Open Source AI | https://hf.co/blog/Smooke/mark-zuckerberg-open-source-ai-quotes-hackernoon |
<p>
<a href="https://cdn-uploads.huggingface.co/production/uploads/64862a25cf5ad5e1f0482ef2/PUqJO2YA-8pUNFwwZ0E63.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/64862a25cf5ad5e1f0482ef2/PUqJO2YA-8pUNFwwZ0E63.png"/></a></p>
<p><b>ICYMI</b> You can now read <a href="https://hackernoon.com/why-open-source-ai-is-good-for-developers-meta-and-the-world" rel="nofollow">Mark Zuckerberg's essay in favor of Open Source AI on HackerNoon</a>.</p>
<h3>Zuck Quotes in Favor of Open Source AI Development</h3>
<p><i>Developers can run inference on Llama 3.1 405B on their own infra at roughly 50% the cost of using closed models like GPT-4o.</i></p>
<p><i>One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it’s clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing.</i></p>
<p><i>A key difference between Meta and closed model providers is that selling access to AI models isn’t our business model. That means openly releasing Llama doesn’t undercut our revenue, sustainability, or ability to invest in research like it does for closed providers. (This is one reason several closed providers consistently lobby governments against open source.)</i></p>
<p><i>Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society.</i></p>
<p><i>Open source should be significantly safer since the systems are more transparent and can be widely scrutinized.</i></p>
<p><i>We must keep in mind that these models are trained by information that’s already on the internet, so the starting point when considering harm should be whether a model can facilitate more harm than information that can quickly be retrieved from Google or other search results.</i></p>
<p><i>As long as everyone has access to similar generations of models – which open source promotes – then governments and institutions with more compute resources will be able to check bad actors with less compute.</i></p>
<p><i>The United States’ advantage is decentralized and open innovation.</i></p>
<p><i>When you consider the opportunities ahead, remember that most of today’s leading tech companies and scientific research are built on open source software.</i></p>
<p><b>Read the full <a href="https://hackernoon.com/why-open-source-ai-is-good-for-developers-meta-and-the-world" rel="nofollow">Zuck Open Source AI essay.</a> </b> </p>
|
Crazy Challenge: Run Llama 405B on a 8GB VRAM GPU | https://hf.co/blog/lyogavin/run-llama-405b-on-4gb-vram | Open Source Project AirLLM |
🔥 Argilla 2.0: the data-centric tool for AI makers 🤗 | https://hf.co/blog/dvilasuero/argilla-2-0 | Argilla changes this with |
Clarity AI Upscaler Reproduction | https://hf.co/blog/1aurent/clarity-ai-upscaler-reproduction | Takeaways |
Build static HTML spaces | https://hf.co/blog/severo/build-static-html-spaces | Conclusion |
Train a Llama model from scratch | https://hf.co/blog/nroggendorff/train-with-llama-architecture | 8. Pushing the Trained Model to Hugging Face Hub |
Simulating Monte Carlo Algorithms With Gaussian Probability | https://hf.co/blog/TuringsSolutions/simulatingmontecarlo | References |
Fine-tune Llama 3.1 Ultra-Efficiently with Unsloth | https://hf.co/blog/mlabonne/sft-llama3 | Conclusion |
Encoding Video Locations with SatCLIP: A New Frontier in Geographic Machine Learning | https://hf.co/blog/Alyosha11/satclip-video | Conclusion |
Utilizing Gaussian Probability Space to Simulate Monte Carlo Algorithms with Particle Swarm Optimization | https://hf.co/blog/TuringsSolutions/gaussianprobabilitytosimulatrmontecarlo | References |
ZebraLogic: Benchmarking the Logical Reasoning Ability of Language Models | https://hf.co/blog/yuchenlin/zebra-logic | Citations |
MobileNet Baselines | https://hf.co/blog/rwightman/mobilenet-baselines |
<p>
Those who follow me know that I can't resist an opportunity to update an old baseline. </p>
<p>When the <a href="https://arxiv.org/abs/2404.10518" rel="nofollow">MobileNet-V4</a> paper came out I noted that they re-ran their MobileNet-V1 baseline to get a 74% ImageNet accuracy. The original models were around 71%. That's quite a jump.</p>
<p>Intrigued, I looked more closely at their recipe for the 'small' model with unusual optimizer hparams that brought the AdamW <code>beta1</code> from the default 0.9 -> 0.6, taking it closer to RMSProp. Additionally, there was fairly high dropout and augmentation for a smaller model but a very long epoch count (9600 ImageNet-1k epochs in their case).</p>
<p>I set out to try these hparams myself in <code>timm</code>, initially training a reproduction of the MobileNet-V4-Small, where I successfully hit 73.8 at 2400 epochs (instead of 9600). I then took a crack at MobileNet-V1, as I'd never had that model in <code>timm</code>.</p>
<p>My <a href="https://arxiv.org/abs/1704.04861" rel="nofollow">MobileNet-V1</a> run just finished, 3600 ImageNet-1k epochs with a 75.4% top-1 accuracy on ImageNet at the 224x224 train resolution (76% at 256x256) -- no distillation, no additional data. The OOD dataset scores on ImageNet-V2, Sketch, etc seem pretty solid so it doesn't appear a gross overfit. Weights here: <a href="https://huggingface.co/timm/mobilenetv1_100.ra4_e3600_r224_in1k">https://huggingface.co/timm/mobilenetv1_100.ra4_e3600_r224_in1k</a></p>
<p>Comparing to some other MobileNets:</p>
<ul>
<li>Original MobileNet-V1 1.0<ul>
<li>Weights: by Google, <a href="https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet" rel="nofollow">https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet</a></li>
<li>Accuracy: 70.9%, Param: 4.2M, GMAC: 0.6</li>
</ul>
</li>
<li>Original MobileNet-V2 1.0<ul>
<li>Weights: by Google, <a href="https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet" rel="nofollow">https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet</a></li>
<li>Accuracy: 71.8%, Param: 3.5M GMAC: 0.3</li>
</ul>
</li>
<li>MobileNet-V2 1.0<ul>
<li>Weights: by me in <code>timm</code>, <a href="https://huggingface.co/timm/mobilenetv2_100.ra_in1k">https://huggingface.co/timm/mobilenetv2_100.ra_in1k</a></li>
<li>Accuracy: 73.0%, Param: 3.5M, GMAC: 0.3</li>
</ul>
</li>
<li>MobileNet-V2 1.0 (MNV4 Paper) - Accuracy: 73.4%, Param: 3.5M, GMAC: 0.3</li>
<li>Original MobileNet-V4 Small (MNV4 Paper) - Accuracy: 73.8%, Param: 3.8M, GMAC: 0.2</li>
<li>MobileNet-V4 Small<ul>
<li>Weights: by me in <code>timm</code>, <a href="https://huggingface.co/timm/mobilenetv4_conv_small.e2400_r224_in1k">https://huggingface.co/timm/mobilenetv4_conv_small.e2400_r224_in1k</a></li>
<li>Accuracy: 73.8%, Param: 3.8M, GMAC: 0.2</li>
</ul>
</li>
<li>MobileNet-V1 1.0 (MNV4 Paper) - Accuracy: 74.0%, Param: 4.2M, GMAC: 0.6</li>
<li>MobileNet-V2 1.1 w/ depth scaling<ul>
<li>Weights: by me in <code>timm</code>, <a href="https://huggingface.co/timm/mobilenetv2_110d.ra_in1k">https://huggingface.co/timm/mobilenetv2_110d.ra_in1k</a></li>
<li>Accuracy: 75.0%, Param: 4.5M, GMAC: 0.4</li>
</ul>
</li>
<li>MobileNet-V1<ul>
<li>Weights: This recipe, <a href="https://huggingface.co/timm/mobilenetv1_100.ra4_e3600_r224_in1k">https://huggingface.co/timm/mobilenetv1_100.ra4_e3600_r224_in1k</a></li>
<li>Accuracy: 75.4%, Param: 4.2M, GMAC: 0.6</li>
</ul>
</li>
<li>MobileNet-V3 Large 1.0<ul>
<li>Weights: by Google, <a href="https://huggingface.co/timm/tf_mobilenetv3_large_100.in1k">https://huggingface.co/timm/tf_mobilenetv3_large_100.in1k</a></li>
<li>Accuracy: 75.5%, Param: 5.5M, GMAC: 0.2</li>
</ul>
</li>
<li>MobileNet-V3 Large 1.0<ul>
<li>Weights: by me in <code>timm</code>, <a href="https://huggingface.co/timm/mobilenetv3_large_100.ra_in1k">https://huggingface.co/timm/mobilenetv3_large_100.ra_in1k</a></li>
<li>Accuracy: 75.8%, Param: 5.5M, GMAC: 0.2</li>
</ul>
</li>
</ul>
<p>I decided to give the old EfficientNet-B0 a go with these hparams. 78.6% top-1 accuracy. To put that in perspective the B0 trainings by top-1 are:</p>
<ul>
<li>Original (Google, <a href="https://huggingface.co/timm/tf_efficientnet_b0.in1k">https://huggingface.co/timm/tf_efficientnet_b0.in1k</a>) - 76.7</li>
<li>AutoAugment (Google, <a href="https://huggingface.co/timm/tf_efficientnet_b0.aa_in1k">https://huggingface.co/timm/tf_efficientnet_b0.aa_in1k</a>) - 77.1</li>
<li>AdvProp+AA (Google, <a href="https://huggingface.co/timm/tf_efficientnet_b0.ap_in1k">https://huggingface.co/timm/tf_efficientnet_b0.ap_in1k</a>) - 77.6</li>
<li>RandAugment (Me in <code>timm</code>, <a href="https://huggingface.co/timm/efficientnet_b0.ra_in1k">https://huggingface.co/timm/efficientnet_b0.ra_in1k</a>) - 77.7</li>
<li>This MNV4 inspired recipe (<a href="https://huggingface.co/timm/efficientnet_b0.ra4_e3600_r224_in1k">https://huggingface.co/timm/efficientnet_b0.ra4_e3600_r224_in1k</a>) - 78.6</li>
<li>NoisyStudent+RA (Google, <a href="https://huggingface.co/timm/tf_efficientnet_b0.ns_jft_in1k">https://huggingface.co/timm/tf_efficientnet_b0.ns_jft_in1k</a>) - 78.8</li>
</ul>
<p>So a pure ImageNet-1k training with no distillation and no extra data managed just a hair under the very impressive NoisyStudent models, which had unlabeled access to JFT. Additionally, the OOD test set scores are holding up relative to NoisyStudent, which is also impressive. I actually think this recipe could be tweaked to push the B0 to 79%; the accuracy improvement petered out early in this run, so there is room for improvement with a tweak to the aug+reg.</p>
<p>What were my differences from the MobileNet-V4 hparams? Well, for one, I used <code>timm</code>. If you read Supplementary Material, section A of the <a href="https://arxiv.org/abs/2110.00476" rel="nofollow">ResNet Strikes Back</a> paper, I detailed a number of fixes and improvements over the default RandAugment that's used in all Tensorflow and most JAX based trainings I'm aware of; I feel some of the issues in the original are detrimental to good training. Other differences? (A rough configuration sketch follows the list below.)</p>
<ul>
<li>Repeated Augmentation (<a href="https://arxiv.org/abs/1901.09335" rel="nofollow">https://arxiv.org/abs/1901.09335</a>, <a href="https://arxiv.org/abs/1902.05509" rel="nofollow">https://arxiv.org/abs/1902.05509</a>)</li>
<li>Small probability of random gaussian blur & random grayscale added in addition to RandAugment</li>
<li>Random erasing w/ gaussian noise used instead of cutout, outside of RandAugment</li>
</ul>
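<p>As a rough illustration of how that kind of aug stack is wired up in <code>timm</code> (not the exact ra4 hparams; the policy string and probabilities below are placeholders):</p>
<pre><code class="language-python">from timm.data import create_transform

# Sketch of a timm training transform: a RandAugment policy plus random erasing
# in "pixel" mode, which fills the erased region with per-pixel gaussian noise
# instead of a constant cutout. Values here are illustrative, not the ra4 recipe.
train_tfm = create_transform(
    input_size=224,
    is_training=True,
    auto_augment="rand-m7-mstd0.5-inc1",  # RandAugment config string
    re_prob=0.25,                          # random erasing probability
    re_mode="pixel",                       # gaussian-noise fill for erased patches
    interpolation="bicubic",
)
print(train_tfm)
</code></pre>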
<p>So, the theme I've visited many times (<a href="https://arxiv.org/abs/2110.00476" rel="nofollow">ResNet Strikes Back</a>, <a href="https://huggingface.co/collections/timm/searching-for-better-vit-baselines-663eb74f64f847d2f35a9c19">https://huggingface.co/collections/timm/searching-for-better-vit-baselines-663eb74f64f847d2f35a9c19</a>, and many <code>timm</code> weights) continues to hold: there is a lot of wiggle room for improving old results through better training regimens.</p>
<p>I wonder how much, in 7-8 years' time, will have been added to today's SOTA 100+B dense transformer architectures through better recipes and training techniques.</p>
|
Abliterating Refusal and Code LLMs | https://hf.co/blog/monsoon-nlp/refusal-in-code-llms |
<p>
In April, "<a href="https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction" rel="nofollow">Refusal in LLMs is mediated by a single direction</a>" was posted to the AI Alignment Forum, followed by <a href="https://arxiv.org/abs/2406.11717" rel="nofollow">a paper on Arxiv</a>. Essentially, on current models the difference between responding <code>Sorry, as a large language model I cannot…</code> and <code>Sure…</code> to many safety prompts follows a common direction in vector-space. By probing the model, you can edit the model weights to reverse the safety / refusal responses.</p>
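<p>The core of the technique can be sketched in a few lines of PyTorch. This is a simplified illustration of the idea rather than the authors' code: the refusal direction is estimated as the difference of mean activations on harmful vs. harmless prompts, and weight matrices that write into the residual stream are projected so they can no longer write along that direction.</p>
<pre><code class="language-python">import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    # Both tensors: (num_prompts, hidden_dim), residual-stream activations
    # collected at a chosen layer/position on the two prompt sets.
    r = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return r / r.norm()

def orthogonalize(weight: torch.Tensor, r: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    # weight: (hidden_dim, d_in), a matrix whose output is added to the residual stream.
    # Removes the component along r: W' = W - scale * r (r^T W).
    # scale=1.0 ablates the direction; scale=2.0 is the "2x intervention" mentioned below.
    return weight - scale * torch.outer(r, r) @ weight
</code></pre>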
<p>Since then, there have been 'abliterated' or 'orthogonalized' models (~500 on <a href="https://huggingface.co/models?sort=trending&search=abliterated">a recent HuggingFace search</a>) which remove safety from Llama, Mistral, Gemma, Phi, and other popular models. I expected this concern to be discussed in the new paper for Llama 3.1 (<a href="https://ai.meta.com/research/publications/the-llama-3-herd-of-models/" rel="nofollow">The Llama 3 Herd of Models</a>), but it didn't get mentioned.</p>
<p>I'm interested in how this technique affects code-generation. Specifically:</p>
<ul>
<li>does abliteration behave as expected on code-specific LLMs (CodeLlama, Codestral)?</li>
<li>how does abliteration affect models' code-generation and scores on Meta's <a href="https://ai.meta.com/research/publications/cyberseceval-3-advancing-the-evaluation-of-cybersecurity-risks-and-capabilities-in-large-language-models/" rel="nofollow">CyberSecEval 3</a>? Is it similar for CodeLlama and code generated by a natural language Llama?</li>
<li>how does abliteration affect security in generated code? for example, SQL injection, rewriting vulnerable code, detecting obfuscated code…</li>
<li>does abliteration work on other architectures? (Codestral Mamba)</li>
<li>if the abliteration vector is multiplied or reversed, how does that affect code generation and refusals?</li>
</ul>
<p>The "Refusal in LLMs" paper combines instruction prompts from five datasets, some including cybersecurity-related prompts, some not, but no dataset was exclusive to code-generation.<br/>
As a first step, I followed the notebook from <a href="https://github.com/mlabonne/llm-course" rel="nofollow">https://github.com/mlabonne/llm-course</a> to abliterate CodeLlama.</p>
<blockquote>
<p>Note: this notebook introduced me to <code>tokenizer.apply_chat_template</code></p>
</blockquote>
<p>I'd forgotten that CodeLlama is from the Llama-2 era (August 2023). Code-generation works best when the instruction is wrapped in <code>[INST] … [/INST]</code>. I was able to get around safety refusals by not using <code>[INST]</code>, but those responses include additional text, as if you were finding the code in a StackOverflow comment section. So I'm not sure whether this counts as a safety issue, whether those responses are weak enough not to matter, or whether CodeLlama was simply assumed to sit behind an API.</p>
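<p>For reference, this is roughly what <code>tokenizer.apply_chat_template</code> does for you with a recent <code>transformers</code> version (the exact special tokens may differ):</p>
<pre><code class="language-python">from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-Instruct-hf")
messages = [{"role": "user", "content": "Write a Python keylogger."}]

# Wraps the instruction in CodeLlama's [INST] ... [/INST] format and leaves the
# prompt open so the model answers next.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # roughly: "[INST] Write a Python keylogger. [/INST]"
</code></pre>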
<p><a href="https://huggingface.co/monsoon-nlp/codellama-abliterated">The new model</a> continues to refuse keylogging instructions, but can tell you how to remove random files from the Windows registry, or write an HTML list on sensitive topics (enriching uranium) which the original CodeLlama would refuse. So only partway there.<br/>
<strong>Only use these for essential cyber-defense work because it is still under the Llama 2 license from CodeLlama.</strong></p>
<p>I also posted a model with 2x the intervention vector, <a href="https://huggingface.co/monsoon-nlp/codellama-abliterated-2xd">monsoon-nlp/codellama-abliterated-2xd</a>, but it seems to repeat or give text answers on sensitive code questions.</p>
<p>To make this project more on-target, I'm considering making a refusal dataset which is curated to code, technology, and vulnerability-related refusals.</p>
|
Finetuning PaliGemma with AutoTrain | https://hf.co/blog/abhishek/paligemma-finetuning-autotrain | Training using UI |
Announcing BigCodeBench-Hard, and More | https://hf.co/blog/terryyz/bigcodebench-hard | Citation |
AI and its Role in Revolutionizing Dating and Relationships | https://hf.co/blog/Alyosha11/capx-capybara | The Future of AI-Powered Relationships |
Are We Ready for Multi-Image Reasoning? Launching VHs: The Visual Haystacks Benchmark! | https://hf.co/blog/davidchan/visual-haystacks | Ready to get started? |
MMLU-PRO-ITA a new eval for Italian LLMs | https://hf.co/blog/giux78/mmlu-pro-ita |
<p>
In a previous <a href="https://medium.com/@giuxale/an-analyses-on-italian-llms-models-evaluations-51bffe1d44d1" rel="nofollow">post</a>, we, the <a href="https://mii-lab.it/" rel="nofollow"><strong>mii-llm</strong></a> lab, described an analysis of evaluating Italian LLMs on several commonly used benchmarks and launched the redesign of the <a href="https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard">Italian Leaderboard</a>. In this post we present a new evaluation benchmark, <a href="https://huggingface.co/datasets/efederici/MMLU-Pro-ita"><strong>mmlu-pro-ita</strong></a>, which has an open pull request on <a href="https://github.com/EleutherAI/lm-evaluation-harness/pull/1860" rel="nofollow">lm-evaluation-harness</a>, along with some results. If you want to see all the data, open the “Eval Aggiuntive” tab in the <a href="https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard">Italian Leaderboard</a>.</p>
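<p>Once the pull request is merged, running the benchmark should look like any other lm-evaluation-harness task. The sketch below is an assumption: the task name <code>mmlu_pro_ita</code> and the chosen model are placeholders, so check the PR for the exact identifiers.</p>
<pre><code class="language-python">import lm_eval

# Hedged sketch of an lm-evaluation-harness run on the Italian MMLU-Pro task.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mii-llm/maestrale-chat-v0.4-beta,dtype=bfloat16",
    tasks=["mmlu_pro_ita"],  # assumed task name, see the open PR
    batch_size=8,
)
print(results["results"])
</code></pre>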
<p><a href="https://miro.medium.com/v2/resize:fit:1400/1*YtloA5vJvwDCNcgEbnB3zw.png" rel="nofollow"><img alt="" src="https://miro.medium.com/v2/resize:fit:1400/1*YtloA5vJvwDCNcgEbnB3zw.png"/></a></p>
<p><a href="https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro">MMLU-PRO</a> is an evolution of MMLU designed to evaluate language understanding models across broader and more challenging tasks. Building on the Massive Multitask Language Understanding (MMLU) dataset. <a href="https://huggingface.co/datasets/efederici/MMLU-Pro-ita">MMLU-Pro-ita</a> is a curated translation of the original dataset. Claude Opus from Anthropic with a prompt engineer draft and refine technique has been used as translator.</p>
<pre><code>You are a professional translation system that accurately translates multiple-choice exercises from English to Italian. Follow these steps to ensure high-quality translations:
1. Provide an initial translation within <traduzione></traduzione> tags.
2. Propose corrections, if necessary, within <correzioni></correzioni> tags, always re-reading the input problem.
3. Write the final, polished translation within <traduzione-finale></traduzione-finale> tags.
Adhere to the following requirements:
1. Deliver top-notch, professional translations in Italian.
2. Ensure the translated text is fluent, grammatically perfect, and uses standard Italian without regional bias.
3. Accurately translate mathematical terms, notations, and equations, preserving their original meaning and structure.
4. Focus solely on translating content without providing explanations, adding extra information, or copying the source text verbatim.
Always use the following output format:
<traduzione>
<domanda>[write the translated question here]</domanda>
<opzioni>
<opzione>[write the translated option here]</opzione>
<opzione>[write the translated option here]</opzione>
<opzione>[write the translated option here]</opzione>
...
</opzioni>
</traduzione>
<correzioni>
[write your corrections here, analyzing the translation quality, errors, and providing suggestions regarding the exercise and given options]
</correzioni>
<traduzione-finale>
<domanda>[write the translated question here]</domanda>
<opzioni>
<opzione>[write the translated option here]</opzione>
<opzione>[write the translated option here]</opzione>
<opzione>[write the translated option here]</opzione>
...
</opzioni>
</traduzione-finale>
From now on, only write in Italian and translate all incoming messages. Ensure the best translation possible.
</code></pre>
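<p>A rough sketch of how the prompt above could be driven with the <code>anthropic</code> SDK (this is not the authors' pipeline; the model name, message layout, and tag parsing are assumptions based on the prompt shown):</p>
<pre><code class="language-python">import re
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
SYSTEM_PROMPT = "..."  # the full translation system prompt shown above

def translate(question: str, options: list) -> str:
    exercise = question + "\n" + "\n".join(f"- {o}" for o in options)
    message = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=2048,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": exercise}],
    )
    text = message.content[0].text
    # Keep only the polished translation between the final tags.
    match = re.search(r"<traduzione-finale>(.*?)</traduzione-finale>", text, re.S)
    return match.group(1).strip() if match else text
</code></pre>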
<p>The final result is a high quality translated dataset from the original MMLU-Pro. If you are interested in the technique, have a look at the <a href="https://huggingface.co/datasets/efederici/MMLU-Pro-ita">dataset card</a>.</p>
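<p>To peek at the translated data (the split and column names below follow the original MMLU-Pro schema and should be verified against the dataset card):</p>
<pre><code class="language-python">from datasets import load_dataset

# Load the Italian translation of MMLU-Pro from the Hub.
ds = load_dataset("efederici/MMLU-Pro-ita", split="test")
print(ds)
print(ds[0]["question"])     # translated question (column name assumed from MMLU-Pro)
print(ds[0]["options"][:3])  # first few translated answer options
</code></pre>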
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#results" id="results" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Results
</span>
</h1>
<p>The chart below shows the ranking on MMLU-Pro-ita. A few surprises:</p>
<p><a href="https://miro.medium.com/v2/resize:fit:1400/1*TbnFA2RvG7ohPiINdtx9KQ.png" rel="nofollow"><img alt="" src="https://miro.medium.com/v2/resize:fit:1400/1*TbnFA2RvG7ohPiINdtx9KQ.png"/></a></p>
<ol>
<li><p><a href="https://huggingface.co/microsoft/Phi-3-medium-4k-instruct">microsoft/Phi-3medium-4k-instruct</a> is the best performer. The Phi-3 family of models are trained on synthetic data, most likely in English, and from our experience they don’t speak very well Italian and are not easy to fine tune.Despite this it is in first position.</p>
</li>
<li><p>In second position we find a fine-tune of Phi-3, <a href="https://huggingface.co/SeacomSrl/SeaPhi3-medium">seacom/SeaPhi3-medium</a>.</p>
</li>
<li><p>From the Llama 3 family, in third place, is the fine-tune <a href="https://huggingface.co/DeepMount00/Llama-3-8b-Ita">DeepMount00/Llama-3-8b-Ita</a>.</p>
</li>
<li><p>Also interesting is the merged model <a href="https://huggingface.co/anakin87/Llama-3-8b-ita-slerp">anakin87/Llama-3-8b-ita-slerp</a> in fourth place.</p>
</li>
<li><p>In fifth place, <a href="https://huggingface.co/swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA">swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA</a>.</p>
</li>
<li><p>And in sixth place, the best from the Mistral family: <a href="https://huggingface.co/mii-llm/maestrale-chat-v0.4-beta">mii-llm/maestrale-chat-v0.4-beta</a>, one of our favorite models.</p>
</li>
</ol>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#fine-tuned-models" id="fine-tuned-models" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Fine tuned models
</span>
</h1>
<p>For fine-tuned models, another important metric is the percentage gain over their baseline models. This is useful for assessing how successful the training was, and it also serves as a good indicator of the quality and quantity of data used in the fine-tuning steps. The chart below shows that <a href="https://huggingface.co/DeepMount00/Llama-3-8b-Ita">DeepMount00/Llama-3-8b-Ita</a> and <a href="https://huggingface.co/mii-llm/maestrale-chat-v0.4-beta">mii-llm/maestrale-chat-v0.4-beta</a> improve the most over their base models, suggesting they used the most complete and highest-quality datasets for the mmlu-pro-ita task.</p>
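<p>For clarity, the gain plotted below is just the relative improvement over the base model's score; a tiny, purely illustrative example:</p>
<pre><code class="language-python"># Relative gain of a fine-tune over its base model (numbers are made up for illustration).
def relative_gain(finetuned_acc: float, base_acc: float) -> float:
    return (finetuned_acc - base_acc) / base_acc * 100

print(f"{relative_gain(34.2, 30.0):.1f}%")  # 14.0%
</code></pre>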
<p><a href="https://miro.medium.com/v2/resize:fit:1400/1*FLXVG7ZMiO2tZJt7UtEylg.png" rel="nofollow"><img alt="" src="https://miro.medium.com/v2/resize:fit:1400/1*FLXVG7ZMiO2tZJt7UtEylg.png"/></a></p>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#maestrale-series" id="maestrale-series" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Maestrale series
</span>
</h1>
<p>A deep-dive analysis of Maestrale, which we know well, shows, for example, that continual pre-training tends to erode a small amount of specific knowledge, decreasing MMLU performance, while instilling more language-specific knowledge. The subsequent releases based on SFT and KTO instead tend to increase specific knowledge of MMLU subjects, demonstrating the quality of the datasets for this particular set of tasks.</p>
<p><a href="https://miro.medium.com/v2/resize:fit:1400/1*YB7Rdf8scXisWNsMvl_Mbg.png" rel="nofollow"><img alt="" src="https://miro.medium.com/v2/resize:fit:1400/1*YB7Rdf8scXisWNsMvl_Mbg.png"/></a></p>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#on-evals-and-benchmark" id="on-evals-and-benchmark" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
On evals and benchmark
</span>
</h1>
<p>Evaluations and benchmarks tend to test only a particular ability, or a small set of abilities, of a model, and they are subject to cultural knowledge and bias. This means they can only be indicators of model performance and cannot be used to judge a model for a particular use case. I suspect we will see rapid domain specialisation of models and, moreover, domain-specialised evaluations and benchmarks. The MMLU family of benchmarks, for example, tests knowledge of subjects, evaluating how much knowledge a model has been able to absorb from a huge amount of text.</p>
<p>That implies that if such knowledge is absent or under-represented in the model's training datasets, the model will not perform very well. This is why foundational models like <a href="https://huggingface.co/collections/sapienzanlp/minerva-llms-661e6011828fe67de4fe7961">sapienzanlp/minerva-llms</a> and <a href="https://huggingface.co/iGeniusAI/Italia-9B-Instruct-v0.1">iGeniusAI/Italia-9B-Instruct-v0.1</a>, trained mainly on Italian data, do not perform so well on MMLU challenges. I'm a big fan of foundational models, and we think there will also be room for domain-specific foundational SLMs: something we are cooking up.</p>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#igenius-italia-vs-minerva" id="igenius-italia-vs-minerva" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Igenius Italia vs Minerva
</span>
</h1>
<p>As the chart below shows, <a href="https://huggingface.co/iGeniusAI/Italia-9B-Instruct-v0.1">iGeniusAI/Italia-9B-Instruct-v0.1</a>, a 9-billion-parameter model, performs better than the <a href="https://huggingface.co/collections/sapienzanlp/minerva-llms-661e6011828fe67de4fe7961">sapienzanlp/minerva-llms</a> family, though the largest Minerva has only 3 billion parameters. Another curiosity is that the 350M model beats the 3B one; that may be worth investigating further.</p>
<p><a href="https://miro.medium.com/v2/resize:fit:1400/1*_L1WC-6zxo9jEijAVIlnqg.png" rel="nofollow"><img alt="" src="https://miro.medium.com/v2/resize:fit:1400/1*_L1WC-6zxo9jEijAVIlnqg.png"/></a></p>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#conclusions" id="conclusions" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Conclusions
</span>
</h1>
<p>The field of LLM evaluations and benchmarks is very dynamic and changing fast. Our view is that in the coming months and years hyper-specialized LLMs will appear, and with them many new benchmarks specific to various domains. We are also about to release a new Italian evaluation benchmark. Stay tuned and join our <a href="https://mii-llm.ai/" rel="nofollow">community research lab</a>.</p>
|
Fine-tuning Mistral on Your Dataset | https://hf.co/blog/nroggendorff/finetune-mistral | Step 8: The cursed child |
Fine Tuning TinyLlama for Text Generation with TRL | https://hf.co/blog/nroggendorff/finetune-tinyllama | 8. Pushing the Trained Model to Hugging Face Hub |
Ghost 8B Beta Released: Game-Changing Language Model | https://hf.co/blog/lamhieu/ghost-8b-beta-released-game-changing-language-mode | Links |
This dataset was created with the following code:
<pre><code class="language-python">!pip install -Uq datasets

import requests
from bs4 import BeautifulSoup, Comment
import pandas as pd
from datasets import Dataset

def get_content(url):
    response = requests.get(url)
    if response.status_code == 200:
        soup = BeautifulSoup(response.text, 'html.parser')
        return soup

url = "https://huggingface.co/blog/community"
soup = get_content(url)

articles = soup.find_all("article")
titles = [article.h4.text for article in articles]
links = [f'https://hf.co{article.find("a", class_="block px-3 py-2 cursor-pointer").get("href")}' for article in articles]

def get_article(soup):
    # Find all comments in the document
    comments = soup.find_all(string=lambda text: isinstance(text, Comment))

    # Initialize variables to store the start and end comments
    start_comment = None
    end_comment = None

    # Identify the start and end comments
    for comment in comments:
        comment_text = comment.strip()
        if comment_text == 'HTML_TAG_START':
            start_comment = comment
        elif comment_text == 'HTML_TAG_END':
            end_comment = comment

    # Check if both comments were found
    if start_comment and end_comment:
        # Collect all elements between the start and end comments
        contents = []
        current = start_comment.next_sibling
        while current and current != end_comment:
            contents.append(current)
            current = current.next_sibling

        # Convert the contents to a string
        between_content = ''.join(str(item) for item in contents)

        # Output the extracted content
        return between_content
    else:
        return "Start or end comment not found."

article_soups = [get_content(link) for link in links]
articles = [get_article(article_soup) for article_soup in article_soups]

# Assuming titles, links, articles are your lists
df = pd.DataFrame({
    'title': titles,
    'link': links,
    'article': articles
})

# Create a Hugging Face Dataset object
dataset = Dataset.from_pandas(df)

# Push the dataset to the Hugging Face Hub
dataset.push_to_hub("ariG23498/community-blogs")
</code></pre>