Aritra Roy Gosthipaty

ariG23498

AI & ML interests

Deep Representation Learning

Recent Activity

liked a dataset 11 minutes ago
pixparse/cc3m-wds
liked a Space about 1 hour ago
huggingface-projects/llama-3.2-vision-11B
liked a model about 1 hour ago
Xkev/Llama-3.2V-11B-cot

Articles

Organizations

ariG23498's activity

reacted to thomwolf's post with 🔥 about 17 hours ago
posted an update 9 days ago
reacted to m-ric's post with 🚀 28 days ago
🌟🌎 Cohere releases Aya 8B & 32B: SOTA multilingual models for 23 languages!

How did they manage to beat top contenders while also adding 23 languages?

๐Ÿ”„ ๐—ง๐—ฟ๐—ฎ๐—ถ๐—ป ๐—ผ๐—ป ๐˜€๐˜†๐—ป๐˜๐—ต๐—ฒ๐˜๐—ถ๐—ฐ ๐—ฑ๐—ฎ๐˜๐—ฎ:
โ€ข Synthetic data has been said to cause model-collapse after too much training
โ€ข Cohere has introduced "data arbitrage" to prevent this by strategically sampling from a pool of several teacher models instead of one single teacher
โ€ข First train a model pool for each different groups of languages, and employ an internal Reward Model named "Arbiter" to evaluate and select the optimal generation. Then only the best generation is kept as the final completion for each prompt
โžก๏ธ This process is particularly effective for multilingual setting, where no single teacher model performs in all languages : here "Multilingual Arbitrage" singlehandedly improves win rates of the 8B model vs Gemma-2-9B by 10 points!

🧩 Use model merging: Rather than struggling to find the right mix of data to train a single model for multilingual use, just train language-specific models, then merge them!
• Maximize diversity between merged checkpoints by training each on different language families.
• Experimented with fancier techniques (SLERP, TIES, DARE-TIES) but found weighted averaging to be the most consistent (sketched below)!
➡️ Merging gave 3x more gains at the larger 32B scale vs the 8B scale, consistent with literature findings that merging is more effective at scale

⚡️ Great performance: Automatic evaluations on the Arena-Hard-Auto dataset:
➡️ Aya Expanse 8B beats models from its weight class such as Gemma 2 9B, Llama 3.1 8B, and the recent Ministral 8B, with win rates ranging from 60.4% to 70.6%
➡️ Aya Expanse 32B outperforms Gemma 2 27B, Mixtral 8x22B, and Llama 3.1 70B (2x its size)
• ⚠️ But this performance eval comes from only one benchmark! Let's wait for Open LLM Leaderboard evals.

🔒 CC-BY-NC license

Blog post here: https://huggingface.co/blog/aya-expanse
posted an update 28 days ago
reacted to reach-vb's post with 🔥 about 1 month ago
Multimodal Ichigo Llama 3.1 - Real Time Voice AI 🔥

> WhisperSpeech X Llama 3.1 8B
> Trained on 50K hours of speech (7 languages)
> Continually trained on 45hrs 10x A1000s
> MLS -> WhisperVQ tokens -> Llama 3.1
> Instruction tuned on 1.89M samples
> 70% speech, 20% transcription, 10% text
> Apache 2.0 licensed ⚡

Architecture:
> WhisperSpeech/ VQ for Semantic Tokens
> Llama 3.1 8B Instruct for Text backbone
> Early fusion (Chameleon); a toy sketch follows below
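A toy sketch of what early fusion means here: audio is quantized into discrete semantic tokens that live in the same sequence as text tokens, so a single decoder consumes both. Vocabulary sizes and token counts below are made up for illustration; this is not the Ichigo code:

```python
import torch

text_vocab_size = 128_256        # illustrative text vocabulary size
audio_codebook_size = 512        # illustrative WhisperVQ-style codebook size

# Audio tokens get their own ID range appended after the text vocabulary,
# so a single embedding table covers both modalities.
audio_token_ids = torch.randint(0, audio_codebook_size, (20,)) + text_vocab_size
text_token_ids = torch.randint(0, text_vocab_size, (10,))

# Early fusion: one interleaved sequence, consumed by the same decoder.
input_ids = torch.cat([audio_token_ids, text_token_ids]).unsqueeze(0)
print(input_ids.shape)           # torch.Size([1, 30])
```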

I'm super bullish on HomeBrew/Jan and early-fusion, audio-and-text, multimodal models!

(P.S. Play with the demo on Hugging Face: jan-hq/Ichigo-llama3.1-s-instruct)
reacted to merve's post with 🤗 3 months ago
Amazing leaderboard by @rwightman: compare all the image backbones on various metrics against model performance

Below is an example for top-k accuracy against inferred samples per second
timm/leaderboard
reacted to joaogante's post with 🤗 3 months ago
New sampling strategy dropped in 🤗 transformers -- Min P sampling 🔥

Are you tired of top_k arbitrarily discarding high-quality continuations? Or of top_p forgetting to exclude low-probability tokens, derailing your generation? Try out the new min_p flag in generate, fresh from a PR merged today! 🥬

Min P is a dynamic token filter -- as opposed to Top K, which keeps the K most likely tokens, and Top P, which keeps the most likely tokens up to a fixed cumulative probability, both of which are static filters. Min P takes a base probability (defined in the min_p flag) and multiplies it by the probability of the most likely token in the next-token distribution. All tokens less likely than the resulting value are filtered out. What happens with this strategy?
👉 A high-probability token is present -> aggressive filter (we don't want to miss that high-probability case and risk derailing generation)
👉 No high-probability token is present -> relaxed filter (there are many continuation possibilities that the model finds plausible); a toy example of the rule follows below
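A toy illustration of that rule (the numbers are made up; the real filter ships as a logits processor inside transformers):

```python
import torch

min_p = 0.1
probs = torch.tensor([0.60, 0.20, 0.12, 0.05, 0.03])  # hypothetical next-token distribution

threshold = min_p * probs.max()   # dynamic cutoff: 0.1 * 0.60 = 0.06
keep = probs >= threshold         # tokens below the cutoff are filtered out
print(keep)                       # tensor([ True,  True,  True, False, False])
```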

You should set min_p to a low value, between 0.05 and 0.1. It behaves particularly well for creative text generation when paired with temperature > 1.
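A minimal usage sketch (the model choice is arbitrary, picked only because it is small; the copy-pasteable example from the post itself is at the pastebin link below):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # any causal LM works; gpt2 is just small and quick to download
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    min_p=0.08,                    # dynamic filter: low values (0.05-0.1) recommended
    temperature=1.2,               # min_p pairs well with temperature > 1
    max_new_tokens=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```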

Kudos to @kalomaze and @menhguin for creating this technique 🔥 Read their discussion in the original issue for benchmarks (https://github.com/huggingface/transformers/issues/27670)

Copy-pasteable version of the example in the image below here: https://pastebin.com/VqXNtuxd

Have fun experimenting! 😎
reacted to merve's post with 😎 3 months ago
posted an update 3 months ago
reacted to rishiraj's post with 🤗 11 months ago