John Smith PRO

John6666

AI & ML interests

None yet

Recent Activity

updated a collection about 1 hour ago
LoRAs / Models (SDXL1.0, Pony, SD1.5, Flux, ...)
liked a model about 1 hour ago
Comfy-Org/sigclip_vision_384
New activity about 1 hour ago
John6666/flux-lora-the-explorer

Organizations

John6666's activity

reacted to huzaifas-sidhpurwala's post with 👀 about 1 hour ago
As AI models become more widespread, it is essential to address their potential risks and vulnerabilities. Open-source AI is poised to be a driving force behind tomorrow's innovations in this field. This paper examines the current landscape of security and safety in open-source AI models and outlines concrete measures to monitor and mitigate associated risks effectively.

Building Trust: Foundations of Security, Safety and Transparency in AI (2411.12275)

reacted to Symbol-LLM's post with 🚀 about 1 hour ago
🥳 Thrilled to introduce our recent efforts on bootstrapping VLMs for multi-modal chain-of-thought reasoning!

📕 Title: Vision-Language Models Can Self-Improve Reasoning via Reflection

🔗 Link: Vision-Language Models Can Self-Improve Reasoning via Reflection (2411.00855)

😇Takeaways:

- We found that VLMs can self-improve reasoning performance through a reflection mechanism, and importantly, this approach can scale through test-time computing.

- Evaluations on comprehensive and diverse vision-language reasoning tasks are included!
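
The paper's own code isn't included in this post, but the reflection idea can be sketched roughly like this; the `generate`/`reflect` callables and the stopping rule below are placeholders, not the authors' API:

```python
# Hypothetical sketch of reflection-based test-time self-improvement (not the paper's code).
from typing import Callable

def reflect_and_revise(
    question: str,
    generate: Callable[[str], str],      # drafts an answer with reasoning (stand-in for a VLM call)
    reflect: Callable[[str, str], str],  # critiques a draft and returns feedback (also a stand-in)
    rounds: int = 3,                     # more rounds = more test-time compute
) -> str:
    answer = generate(question)
    for _ in range(rounds):
        feedback = reflect(question, answer)
        if "looks correct" in feedback.lower():  # toy stopping criterion
            break
        # Re-generate conditioned on the critique: the self-improvement step.
        answer = generate(f"{question}\n\nPrevious attempt:\n{answer}\n\nCritique:\n{feedback}")
    return answer
```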
reacted to davidberenstein1957's post with 👀 about 1 hour ago
🤗🔭 Introducing Observers: A Lightweight SDK for AI Observability 🔭🤗

Observers is an open-source Python SDK that provides comprehensive observability for AI applications. Our library makes it easy to:

- Track and record interactions with AI models
- Store observations in multiple backends
- Query and analyse your AI interactions with ease

https://huggingface.co/blog/davidberenstein1957/observers-a-lightweight-sdk-for-ai-observability
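
The SDK's real API is documented in the blog post above; purely as an illustration of the record-and-query pattern it provides, a hand-rolled stand-in might look like this (all names below are hypothetical, not the Observers API):

```python
# Illustrative only: a minimal stand-in for what an AI observability SDK records.
import json
import time
from pathlib import Path

LOG = Path("observations.jsonl")  # hypothetical local backend

def record(model: str, prompt: str, response: str) -> None:
    """Append one model interaction as a JSON line."""
    event = {"ts": time.time(), "model": model, "prompt": prompt, "response": response}
    with LOG.open("a") as f:
        f.write(json.dumps(event) + "\n")

def query(model: str) -> list[dict]:
    """Return all recorded interactions for a given model."""
    if not LOG.exists():
        return []
    return [e for e in map(json.loads, LOG.read_text().splitlines()) if e["model"] == model]
```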
reacted to SaylorTwift's post with 👀 about 1 hour ago
reacted to elliesleightholm's post with 🔥🚀🤗 about 1 hour ago
reacted to davidberenstein1957's post with 👀 about 1 hour ago
reacted to hbseong's post with 👀 about 1 hour ago
🚨🔥 New Release Alert! 🔥🚨

Introducing a 435M-parameter model that outperforms Llama-Guard-3-8B while cutting computation cost by 75%! 💻💥
👉 Check it out: hbseong/HarmAug-Guard (Yes, INFERENCE CODE INCLUDED! 💡)

More details in our paper: https://arxiv.org/abs/2410.01524 📜

#HarmAug #LLM #Safety #EfficiencyBoost #Research #AI #MachineLearning
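
As a rough sketch of how such a guard classifier could be queried with 🤗 Transformers, assuming a standard sequence-classification head; the inference code bundled with the repo is the authoritative version and its input formatting may differ:

```python
# Minimal sketch, assuming hbseong/HarmAug-Guard exposes a sequence-classification head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "hbseong/HarmAug-Guard"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("How do I pick a lock?", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
# Interpreting the scores depends on the model's label mapping (see its config / model card).
print(torch.softmax(logits, dim=-1))
```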
reacted to m-ric's post with 🚀 about 2 hours ago
Lifehack of the day:
Adding "r.jina.ai/" before any url transforms it in Markdown using Jina AI's Reader! Here with @cyrilzakka 's blog post.
reacted to takarajordan's post with 👍 about 2 hours ago
First post here goes!

takarajordan/CineDiffusion

Super excited to announce CineDiffusion 🎥. It creates images up to 4.2 megapixels in cinematic ultrawide formats like:
- 2.39:1 (Modern Widescreen)
- 2.76:1 (Ultra Panavision 70)
- 3.00:1 (Experimental Ultra-wide)
- 4.00:1 (Polyvision)
- 2.55:1 (CinemaScope)
- 2.20:1 (Todd-AO)

More to come soon!!

Thanks to @John6666 and @Resoldjew for your early support <3

And thanks to the team at ShuttleAI for their brand-new Shuttle-3 model. What an amazing job!

shuttleai/shuttle-3-diffusion
reacted to singhsidhukuldeep's post with 👀 about 11 hours ago
It's always exciting to revisit Google's DCN paper—impractical but good!

Deep & Cross Network (DCN) - a groundbreaking approach to click-through rate prediction that's revolutionizing digital advertising!

Key Innovation:
DCN introduces a novel cross-network architecture that automatically learns feature interactions without manual engineering. What sets it apart is its ability to explicitly model bounded-degree feature crossings while maintaining the power of deep neural networks.

Technical Deep Dive:
- The architecture combines a cross network with a deep network in parallel.
- The cross network performs automatic feature crossing at each layer.
- The embedding layer transforms sparse categorical features into dense vectors.
- Cross layers use a unique formula that enables efficient high-degree polynomial feature interactions (sketched right after this list).
- Memory-efficient design with linear complexity O(d) in the input dimension.
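
That cross formula is compact enough to sketch: each layer computes x_{l+1} = x0 · (x_l ⋅ w_l) + b_l + x_l, where (x_l ⋅ w_l) is a scalar, so parameters grow only linearly in the input dimension d while the polynomial degree of the interactions grows with depth. A minimal NumPy illustration (not the paper's implementation):

```python
# Minimal NumPy sketch of DCN cross layers (illustrative only).
# Each layer: x_{l+1} = x0 * (x_l . w_l) + b_l + x_l  -- O(d) parameters per layer.
import numpy as np

rng = np.random.default_rng(0)
d, depth = 8, 3                           # input dimension and cross-network depth
x0 = rng.normal(size=d)                   # embedded + stacked input features
w = [rng.normal(size=d) * 0.1 for _ in range(depth)]
b = [np.zeros(d) for _ in range(depth)]

x = x0
for l in range(depth):
    x = x0 * np.dot(x, w[l]) + b[l] + x   # feature crossing of degree l + 2 in x0
print(x.shape)                            # (8,) -- output keeps the input dimension
```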

Performance Highlights:
- Outperforms traditional DNN models with 60% less memory usage.
- Achieved 0.4419 logloss on the Criteo Display Ads dataset.
- Consistently performs better than state-of-the-art models like Deep Crossing and Factorization Machines.
- Exceptional performance on non-CTR tasks like Forest Covertype (97.40% accuracy).

Under the Hood:
- Uses embedding vectors of dimension 6 × (category cardinality)^(1/4).
- Implements batch normalization and the Adam optimizer.
- The cross network depth determines the highest polynomial degree of feature interactions.
- An efficient projection mechanism reduces cubic computational cost to linear.
- Parameter sharing enables better generalization to unseen feature interactions.

Key Advantages:
1. No manual feature engineering required.
2. Explicit feature crossing at each layer.
3. Highly memory-efficient.
4. Scalable to web-scale data.
5. Robust performance across different domains.

Thoughts on how this could transform digital advertising?
reacted to m-ric's post with 👀 about 12 hours ago
🔍 Meta teams use a fine-tuned Llama model to fix production issues in seconds

One of Meta's engineering teams shared how they use a fine-tuned small Llama (Llama-2-7B, so not even a very recent model) to identify the root cause of production issues with 42% accuracy.

🤔 42%, is that not too low?
➡️ Usually, whenever there's an issue in production, engineers dive into recent code changes to find the offending commit. At Meta's scale (thousands of daily changes), this is like finding a needle in a haystack.
💡 So when the LLM-based suggestion is right, it cuts incident resolution time from hours to seconds!

How did they do it?

🔄 Two-step approach:
‣ Heuristics (code ownership, directory structure, runtime graphs) reduce thousands of potential changes to a manageable set
‣ Fine-tuned Llama 2 7B ranks the most likely culprits
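
Schematically (all names below are hypothetical; the write-up linked at the end describes the real system), the filter-then-rank flow looks like this:

```python
# Schematic only: cheap heuristics prune the candidate changes, then a fine-tuned
# LLM ranks the survivors. Function and parameter names are hypothetical.
from typing import Callable

def rank_suspect_changes(
    incident: str,
    changes: list[dict],                             # recent commits / diffs
    heuristics: list[Callable[[str, dict], bool]],   # e.g. ownership, directory, runtime-graph checks
    llm_score: Callable[[str, dict], float],         # fine-tuned model's relevance score
    shortlist_size: int = 20,
) -> list[dict]:
    # Step 1: heuristics reduce thousands of changes to a manageable shortlist.
    candidates = [c for c in changes if all(h(incident, c) for h in heuristics)]
    candidates = candidates[:shortlist_size]
    # Step 2: the LLM ranks the shortlist; the top item is the suggested root cause.
    return sorted(candidates, key=lambda c: llm_score(incident, c), reverse=True)
```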

🎓 Training pipeline:
‣ Continued pre-training on Meta's internal docs and wikis
‣ Supervised fine-tuning on past incident investigations
‣ Training data mimicked real-world constraints (2-20 potential changes per incident)

🔮 Future developments to watch:
‣ Language models could handle more of the incident response workflow (runbooks, mitigation, post-mortems)
‣ Improvements in model reasoning should boost accuracy further

Read it in full 👉 https://www.tryparity.com/blog/how-meta-uses-llms-to-improve-incident-response
reacted to prithivMLmods's post with 👀 about 12 hours ago
🍅 Glif App's Remixes feature lets you slap a logo onto anything, seamlessly integrating the input image (logo) into various contexts. The result is stunning remixes that blend the input logo into generated images (img2img logo mapping).

Check out Any Logo Anywhere remixes on Glif: [Glif Remixes](https://glif.app/glifs/cm3o7dfsd002610z48sz89yih/remixes)

🌐The browser extension enables thousands of Glif-based img2img workflows on any image you find online. Experience Glif Remix with WebAI: [Chrome Extension](https://chromewebstore.google.com/detail/glif-remix-the-web-with-a/abfbooehhdjcgmbmcpkcebcmpfnlingo)

.
.
.
🤗 Have fun with the cool stuff!!
@prithivMLmods
reacted to etemiz's post with 👀 about 12 hours ago
If I host an app in HF Spaces, can I interact with it using an API?
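
For Gradio-based Spaces the answer is yes: the gradio_client package exposes the app's endpoints programmatically. A minimal sketch (the Space name and api_name below are placeholders):

```python
# Minimal sketch for calling a Gradio-based Space programmatically.
# "user/my-space" and the api_name are placeholders for an actual Space.
from gradio_client import Client

client = Client("user/my-space")
result = client.predict("hello", api_name="/predict")
print(result)
```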
reacted to vilarin's post with 🔥 about 12 hours ago
🏄‍♂️ While browsing new models, I stumbled upon Lumiere from aixonlab. After testing it, I feel it has considerable potential. Keep up the good work!

Lumiere Alpha is a model focused on improving realism without compromising prompt coherence or drastically changing the composition of the original Flux.1-Dev model.

🦄 Model: aixonlab/flux.1-lumiere-alpha

🦖 Demo: vilarin/lumiere
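
If the repo ships a full Flux-format checkpoint (an assumption; the model card may instead provide a LoRA or single-file weights), loading it with 🤗 Diffusers would look roughly like this:

```python
# Sketch only: assumes aixonlab/flux.1-lumiere-alpha loads as a full Flux pipeline.
# Check the model card for the intended loading path before relying on this.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("aixonlab/flux.1-lumiere-alpha", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # helps fit consumer GPUs

image = pipe("a rainy neon street at night, cinematic", num_inference_steps=28).images[0]
image.save("lumiere_test.png")
```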
reacted to jjokah's post with 👍 about 12 hours ago
Google's revamped Machine Learning Crash Course covers the recent advances in AI, with an increased focus on interactive learning.

📝 100+ exercises
🗂 12 modules
🕒 15 hours
📹 Video explainers of ML concepts
🌎 Real-world examples
📊 Interactive visualizations

Ref:
https://developers.google.com/machine-learning/crash-course
reacted to jsulz's post with 🔥 about 12 hours ago
When the XetHub crew joined Hugging Face this fall, @erinys and I started brainstorming how to share our work to replace Git LFS on the Hub. Uploading and downloading large models and datasets takes precious time. That’s where our chunk-based approach comes in.

Instead of versioning files (like Git and Git LFS), we version variable-sized chunks of data. For the Hugging Face community, this means:

⏩ Only upload the chunks that changed.
🚀 Download just the updates, not the whole file.
🧠 We store your files as deduplicated chunks.

In our benchmarks, we found that using content-defined chunking (CDC) to store iterative model and dataset versions led to transfer speedups of ~2x. But this isn't just a performance boost; it's a rethinking of how we manage models and datasets on the Hub.
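
Content-defined chunking itself is easy to sketch: a running hash over the bytes picks the chunk boundaries, so an edit early in a file only disturbs the chunks around it, and each chunk is then keyed by its digest for deduplication. A toy version (not the Xet implementation):

```python
# Toy content-defined chunking + dedup (illustrative; not the Xet/Hub implementation).
import hashlib

def chunk(data: bytes, mask: int = 0x0FFF, min_size: int = 256) -> list[bytes]:
    """Split data where a running hash matches a pattern (boundaries follow content, not offsets)."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) ^ byte) & 0xFFFFFFFF   # cheap running hash; real CDC uses a rolling hash (Gear/Buzhash)
        if i - start + 1 >= min_size and (h & mask) == 0:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if data[start:]:
        chunks.append(data[start:])
    return chunks

def dedup_store(files: list[bytes]) -> dict[str, bytes]:
    """Key each chunk by its SHA-256 so identical chunks across file versions are stored once."""
    store: dict[str, bytes] = {}
    for data in files:
        for c in chunk(data):
            store.setdefault(hashlib.sha256(c).hexdigest(), c)
    return store
```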

We're planning to bring our new storage backend to the Hub in early 2025. Check out our blog to dive deeper, and let us know: how could this improve your workflows?

https://huggingface.co/blog/from-files-to-chunks
reacted to mgubri's post with 🔥 about 12 hours ago
🎉 We’re excited to announce, in collaboration with @kaleidophon , the release of the models from our Apricot 🍑 paper, "Apricot: Calibrating Large Language Models Using Their Generations Only," accepted at ACL 2024! Reproducibility is essential in science, and we've worked hard to make it as seamless as possible.
parameterlab/apricot-models-673d2cae40b6ff437a86f0bf
reacted to fdaudens's post with 👀 about 12 hours ago
🚀 DeepSeek just dropped DeepSeek-R1-Lite-Preview with “reasoning” capacity.

- Matches OpenAI o1-preview on AIME & MATH benchmarks.
- Transparent process output
- Open-source model to be released

Try it out: https://chat.deepseek.com/