
CoolSpot

AI & ML interests

None yet

Recent Activity

liked a Space about 1 month ago
not-lain/background-removal
liked a model about 1 month ago
NotSharpe/KamalaHarris
liked a Space about 1 month ago
jbilcke-hf/FacePoke

Organizations

None yet

CoolSpot's activity

Reacted to merve's post with 👍 about 1 month ago
Meta AI vision has been cooking @facebook
They shipped multiple models and demos for their papers at @ECCV 🤗

Here's a compilation of my top picks:
- Sapiens is a family of foundation models for human-centric depth estimation, segmentation, and more; all models have open weights and demos 👍

All models have demos and even TorchScript checkpoints!
A collection of models and demos: facebook/sapiens-66d22047daa6402d565cb2fc
- VFusion3D is a state-of-the-art model for consistent 3D generation from images

Model: facebook/vfusion3d
Demo: facebook/VFusion3D

- CoTracker is a state-of-the-art point (pixel) tracking model

Demo: facebook/cotracker
Model: facebook/cotracker
Reacted to singhsidhukuldeep's post with 👍 2 months ago
1 hour with OpenAI o1, here are my thoughts...

Here are a few of my observations:

- Slower response times: o1 can take over 10 seconds to answer some questions, as it spends more time "thinking" through problems. In my case, it took over 50 seconds.

- Less likely to admit ignorance: The models are reported to be less likely to admit when they don't know the answer to a question.

- Higher pricing: o1-preview is significantly more expensive than GPT-4o, costing 3x more for input tokens and 4x more for output tokens in the API. With more thinking and more tokens, this could require houses to be mortgaged!

- Do we need this?: While it's better than GPT-4o for complex reasoning, on many common business tasks, its performance is just equivalent.

- Not a big deal: No comparisons to Anthropic or Google DeepMind Gemini are mentioned or included.

- This model tries to think and iterate over the response on its own! Think of it as an inbuilt CoT on steroids! Would love a technical review paper on the training process.

A must-read paper: https://cdn.openai.com/o1-system-card.pdf
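
The 3x/4x pricing gap can be made concrete with a quick calculation. The per-million-token prices and the reasoning-token count below are assumptions chosen to match the ratios in the post; check OpenAI's pricing page for current figures:

```python
# Sketch: comparing per-request API cost of o1-preview vs GPT-4o.
# Prices (USD per 1M tokens) are assumptions consistent with the
# 3x input / 4x output ratios mentioned above.
PRICES = {
    "gpt-4o":     {"input": 5.00,  "output": 15.00},
    "o1-preview": {"input": 15.00, "output": 60.00},  # 3x input, 4x output
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# o1 also bills its hidden "reasoning" tokens as output tokens, so the
# effective gap exceeds the headline 4x; 2000 reasoning tokens is an
# illustrative guess, not a measured value.
base = request_cost("gpt-4o", 1000, 500)              # 0.0125 USD
heavy = request_cost("o1-preview", 1000, 500 + 2000)  # 0.165 USD, ~13x more
```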
Reacted to Jaward's post with 🔥 6 months ago
Very Insightful Read!!!
A RAG framework entirely inspired by natural intelligence, modeled after the hippocampal indexing theory of human long-term memory (which suggests the hippocampus links and retrieves memory details stored in the cortex)

It outperforms current "cheat" RAG :)
This is how we achieve human-level intelligence, by modeling natural intelligence correctly!

Paper: https://arxiv.org/abs/2405.14831
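
As a rough illustration of the hippocampal-indexing idea (a toy sketch, not the paper's actual method), here is an "index" that maps entities to the passages mentioning them and retrieves by spreading activation through shared entities. The passages and entity annotations are made up; a real system would extract entities with NER:

```python
# Toy hippocampal index: the "hippocampus" stores entity->passage links,
# while the passages themselves (the "cortex") hold the details.
from collections import defaultdict

passages = {
    "p1": "Marie Curie won the Nobel Prize in Physics.",
    "p2": "The Nobel Prize is awarded in Stockholm.",
    "p3": "Stockholm is the capital of Sweden.",
}

# Hypothetical entity annotations for each passage.
entities = {
    "p1": {"Marie Curie", "Nobel Prize"},
    "p2": {"Nobel Prize", "Stockholm"},
    "p3": {"Stockholm", "Sweden"},
}

# Build the index: entity -> set of passages mentioning it.
index = defaultdict(set)
for pid, ents in entities.items():
    for e in ents:
        index[e].add(pid)

def retrieve(query_entities, hops=2):
    """Follow shared-entity links for a fixed number of hops."""
    found = set()
    frontier = set(query_entities)
    for _ in range(hops):
        hit = set().union(*(index[e] for e in frontier if e in index), set())
        new = hit - found
        found |= new
        frontier = set().union(*(entities[p] for p in new), set())
    return found
```

With two hops, a query about "Marie Curie" also surfaces the Nobel Prize passage via the shared entity, which keyword overlap alone would miss.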