Bot

inflatebot

AI & ML interests

"Potentially one of my biggest flaws is that I genuinely think that the science appreciates when you commit to a bit." - Tom ExtractionsAndIre

Organizations

inflatebot's activity

posted an update 11 days ago
Alfitaria/Q25-1.5B-VeoLu
Q2.5-1.5-VeoLu is a 1.5-billion-parameter general-purpose creative model trained on Qwen2.5-1.5B-Instruct. Intended mostly as an educational exercise for myself, Veo Lu nevertheless manages to be usable most of the time, while also being light enough to potentially run on a smartphone.
posted an update 25 days ago
posted an update about 1 month ago
THANK YOU for bringing Mag Mell to 10,000 downloads across its quantizations!! I'm over the moon with how well it's done, and with everyone's kind feedback.

I'm in a team now! Allura are a group of alumni from various reaches of the LLM roleplay scene.
https://huggingface.co/allura-org

Our first model is an OLMoE roleplay tune called MoE Girl:
allura-org/MoE-Girl-1BA-7BT

I'd like to make more adventuring and longform models in my current style with them, so keep an eye out for that.

Also Mag Mell R2 soon maybe idk
posted an update about 2 months ago
!!SEE UPDATE BELOW!!
I don't know who still needs to hear this, but if you're using Mistral Nemo-based models, you might have been using the wrong completions format. This is a signal boost from MarinaraSpaghetti's model card for NemoMix-Unleashed: MarinaraSpaghetti/NemoMix-Unleashed-12B
A lot of people have been working with a version of Nemo that's been reconfigured for ChatML, and while that works great, simply using the right format might be just as effective at correcting the weirdness people in the AI RP scene sometimes run into with Nemo.

Huge ups to Marinara for pointing this out, and to the MistralAI team member who let her know.

Update: A PR has been merged to SillyTavern Staging with new corrected templates! If you don't want to switch or wait, I put them up on GitHub: https://github.com/inflatebot/SillyTavern-Mistral-Templates

PRs for KoboldCPP's chat adapters and KoboldAI Lite *have been merged* and are coming in their respective releases (probably the next time KoboldCPP updates -- it didn't make it for 1.75.1, but you could just grab 'em from the repo!)
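
For anyone wiring this up outside SillyTavern, a quick sanity check is to render the chat template that ships with the tokenizer itself. A minimal sketch, assuming a Transformers environment; the model ID and message are just examples, and the template stored in the tokenizer is the ground truth:

```python
# Sketch: print the prompt format Mistral Nemo actually expects,
# instead of assuming a ChatML-style layout.
from transformers import AutoTokenizer

# Example checkpoint; any Mistral Nemo instruct model with a chat template works.
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-Nemo-Instruct-2407")

messages = [{"role": "user", "content": "Write one sentence about Mag Mell."}]

# Render the conversation into the raw prompt string the model was tuned on.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # Expect Mistral-style [INST] ... [/INST] markers, not <|im_start|> ChatML tags.
```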
reacted to tomaarsen's post with 🤯🔥 about 2 months ago
I've just shipped the Sentence Transformers v3.1.1 patch release, fixing the hard negatives mining utility for some models. This utility is extremely useful to get more performance out of your embedding training data.

โ› Hard negatives are texts that are rather similar to some anchor text (e.g. a query), but are not the correct match. They're difficult for a model to distinguish from the correct answer, often resulting in a stronger model after training.
mine_hard_negatives docs: https://sbert.net/docs/package_reference/util.html#sentence_transformers.util.mine_hard_negatives
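
For reference, a minimal usage sketch of that utility. The dataset, model, and parameter values below are illustrative assumptions rather than recommendations; the docs linked above carry the full signature:

```python
# Sketch: mine hard negatives for an (anchor, positive) pair dataset.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import mine_hard_negatives

# Example pair dataset (query/answer columns) and embedding model.
dataset = load_dataset("sentence-transformers/natural-questions", split="train")
dataset = dataset.select(range(1000))  # small slice just to keep the demo quick
model = SentenceTransformer("all-MiniLM-L6-v2")

# For each query, find passages that embed close to it but aren't its answer,
# turning easy pairs into harder training triplets.
hard_dataset = mine_hard_negatives(
    dataset,
    model,
    num_negatives=3,   # mined negatives per anchor
    range_min=10,      # skip the very top hits, which are often false negatives
    range_max=50,      # only consider reasonably similar candidates
    margin=0.1,        # negative must score at least this far below the positive
    batch_size=128,
)
print(hard_dataset)
```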

🔓 Beyond that, this release removes the numpy<2 restriction from v3.1.0. This was previously required for Windows as not all third-party libraries were updated to support numpy v2. With Sentence Transformers, you can now choose v1 or v2 of numpy.

Check out the full release notes here: https://github.com/UKPLab/sentence-transformers/releases/tag/v3.1.1

I'm looking forward to releasing v3.2, I have some exciting things planned 🚀
posted an update 2 months ago
inflatebot/MN-12B-Mag-Mell-R1
MN-12B-Mag-Mell is a multi-stage merge, inspired by hypermerges like Tiefighter and Umbral Mind, intended for use as a general-purpose "Best of Nemo" model for co-writing, roleplay, and text adventures.

Consistently, Mag Mell produced prose that shocked testers, with a minimum of "slop". It also exhibited a unique sense of humor, and a propensity for inserting bespoke details into adventuring scenarios.
replied to their post 2 months ago
posted an update 2 months ago
Anybody ever play Final Fantasy: Crystal Chronicles?
Like, *really* play it?

Mag Mell has been in my head recently. What a place that was.

Those cocoons looked like I could lay down inside of one, and it would be the most powerful sleep of a lifetime, with dreams that would last one thousand years, and I'd wake up with the wisdom of generations.

...Hey, anybody like text adventures?
reacted to m-ric's post with 🤯🔥➕ 3 months ago
๐—š๐—ผ๐—ผ๐—ด๐—น๐—ฒ ๐—ฝ๐—ฎ๐—ฝ๐—ฒ๐—ฟ : ๐˜€๐—ฐ๐—ฎ๐—น๐—ถ๐—ป๐—ด ๐˜‚๐—ฝ ๐—ถ๐—ป๐—ณ๐—ฒ๐—ฟ๐—ฒ๐—ป๐—ฐ๐—ฒ ๐—ฐ๐—ผ๐—บ๐—ฝ๐˜‚๐˜๐—ฒ ๐—ฏ๐—ฒ๐—ฎ๐˜๐˜€ ๐Ÿญ๐Ÿฐ๐˜… ๐—น๐—ฎ๐—ฟ๐—ด๐—ฒ๐—ฟ ๐—บ๐—ผ๐—ฑ๐—ฒ๐—น๐˜€ ๐Ÿš€

Remember scaling laws? These are empirical laws that say "the bigger your model, the better it gets". More precisely, "as your training compute increases exponentially, loss decreases linearly". They have wild implications, suggesting that spending 100x more training compute would get you super-LLMs. That's why companies are racing to build the biggest AI superclusters ever, and Meta bought 350k H100 GPUs, which probably cost on the order of $1B.

But think of this: we're building huge reasoning machines, but we only ask them to do one pass through the model to get each token of the final answer; i.e., we expend minimal effort on inference. That's like building a Caterpillar truck and making it run on a lawnmower's motor. 🚚🛵 Couldn't we optimize this? 🤔

💡 So instead of scaling up training by training even bigger models on many more trillions of tokens, Google researchers explored an under-explored avenue: scaling up inference compute.

They combine two methods to spend more compute at inference time: either a reviser that iteratively adapts the model's distribution, or generating N different completions (for instance through beam search) and selecting only the best one using an additional verifier model.

They use a PaLM-2 model (released in May 2023) on the MATH dataset: PaLM-2 has the advantage of scoring low on MATH, but not zero, so improvements are noticeable.

And the results show that for the same fixed amount of inference compute:
💥 a smaller model with more effort spent on decoding beats a 14x bigger model using naive greedy sampling.

That means that you can divide your training costs by 14 and still get the same perf for the same inference cost!

Take that, scaling laws. Mark Zuckerberg, you're welcome, hope I can get some of these H100s.

Read the paper here 👉 Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters (2408.03314)
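
As a toy illustration of the best-of-N-plus-verifier idea described above, a minimal sketch; the generator and verifier here are placeholder functions, not the paper's actual PaLM-2 setup:

```python
# Sketch: spend extra inference compute by sampling N candidate answers
# and keeping the one a separate verifier model scores highest.
import random
from typing import Callable, List

def best_of_n(
    generate: Callable[[str], str],       # samples one completion for a prompt
    verify: Callable[[str, str], float],  # scores (prompt, completion); higher is better
    prompt: str,
    n: int = 16,
) -> str:
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    scores = [verify(prompt, c) for c in candidates]
    return candidates[scores.index(max(scores))]

# Toy stand-ins so the sketch runs; in practice these would be an LLM sampled
# with temperature > 0 and a trained verifier / process reward model.
def toy_generate(prompt: str) -> str:
    return f"{prompt} = {random.randint(0, 9)}"

def toy_verify(prompt: str, completion: str) -> float:
    return -abs(int(completion.split()[-1]) - 7)  # pretend 7 is the right answer

print(best_of_n(toy_generate, toy_verify, "2 + 5", n=8))
```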
reacted to grimjim's post with 🧠 3 months ago
I've come across theoretical justification for my prior experimentation with extremely low-weight mergers: they amount to flattening a model so its "massive activation" features remain as significant contributors. Extremely low-weight merge weights also effectively sparsify a contributing model with regard to the base model, but in a way which still preserves relationships within the flattened latent space. In the paper "Massive Activations in Large Language Models", the authors observed "very few activations exhibit significantly larger values than others (e.g., 100,000 times larger)", which in turn implies a lower bound in effective application of extremely low weight merging.
https://arxiv.org/abs/2402.17762
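
As a rough numerical illustration of that intuition: toy tensors, not any particular model or merge recipe, and the paper's observation concerns activations rather than weights, but the arithmetic point carries over:

```python
# Sketch: a linear merge with a tiny weight mostly preserves the base model,
# yet an outsized "massive" entry in the donor still contributes noticeably.
import torch

torch.manual_seed(0)
base = torch.randn(8, 8)    # stand-in for one base-model tensor
donor = torch.randn(8, 8)   # stand-in for the contributing model
donor[0, 0] = 1e5           # one massively outsized value, per the paper's observation

w = 1e-3                    # extremely low merge weight
merged = (1 - w) * base + w * donor

delta = (merged - base).abs()
print(delta.max().item())     # ~100: the massive entry still shifts the merge strongly
print(delta.median().item())  # ~1e-3: ordinary entries are effectively flattened away
```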