The Mistral API? The model name is probably different there. I used mistral-large-2 but had to use the name mistral-large-latest. The team will help you via chat.
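Not from your post, but here's a minimal sketch of what worked for me, assuming the public Mistral chat-completions endpoint and a key in the MISTRAL_API_KEY environment variable; the prompt is just a placeholder:

```python
# Minimal sketch: the API wants the alias "mistral-large-latest",
# not a versioned name like "mistral-large-2".
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-large-latest",  # alias the API accepts
        "messages": [{"role": "user", "content": "Say hello."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```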
Thank you for your response. I used the model "Mistral-7B-Instruct-v0.2" five months ago, and I still see it listed among the models on Hugging Face. However, I'm not sure why I received this message:
HfHubHTTPError: 429 Client Error: Too Many Requests for URL: https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.2 (Request ID: 9JPVxkP75lo***WNibrD). Rate limit reached. You have reached the PRO hourly usage limit. Use Inference Endpoints (dedicated) to scale your endpoint.
https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
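For context, the call that produced that 429 was roughly of this shape; this is only a sketch with a placeholder token and prompt, not my actual code, assuming the serverless Inference API via huggingface_hub:

```python
from huggingface_hub import InferenceClient
from huggingface_hub.utils import HfHubHTTPError

client = InferenceClient(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    token="hf_xxx",  # placeholder PRO account token
)

try:
    out = client.text_generation("[INST] Hello! [/INST]", max_new_tokens=64)
    print(out)
except HfHubHTTPError as err:
    # A 429 means the PRO hourly quota on the shared serverless API is used up;
    # wait for the window to reset or move to a dedicated Inference Endpoint.
    print("Rate limited:", err)
```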
It's a gated model, so it can be used for free from your own program only if you provide an email address and accept the terms on the model page; a Pro subscription alone doesn't unlock it if you haven't done that (sketch below)...
If you meant using it via HuggingChat or something similar, then you'd be right, as I think that was Pro-only.
https://huggingface.co/chat/
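On the gated part: access from code works only with a token from an account that has accepted the model's terms (with an email) on the model page. A rough sketch, with a placeholder token:

```python
from huggingface_hub import hf_hub_download

# Succeeds only after the account behind the token has accepted the gate
# on the model page; otherwise it fails with a 403 / gated-repo error.
path = hf_hub_download(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",
    filename="config.json",
    token="hf_xxx",  # placeholder token
)
print(path)
```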
If you want to use a GGUF-quantized version through a Pro-only Zero GPU Space, you can use one of the Spaces below, or duplicate one and modify it for your own use (a duplication sketch follows the links).
https://huggingface.co/spaces/CaioXapelaum/GGUF-Playground
https://huggingface.co/spaces/John6666/text2tag-llm
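If you'd rather duplicate one of them from code instead of the web UI, something like this should work (a sketch, assuming the duplicate_space helper in huggingface_hub and a placeholder write token):

```python
from huggingface_hub import duplicate_space

# Copy the playground Space into your own account so you can modify it.
repo = duplicate_space(
    "CaioXapelaum/GGUF-Playground",
    private=True,
    token="hf_xxx",  # placeholder write-scoped token
)
print(repo)  # URL of the new Space
```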
Thanks, but my email is already associated with my Pro account in the settings. Where should I apply for the scaling, or is there something else I need to do?
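In case it helps clarify what I mean by "scaling": is it this kind of dedicated Inference Endpoint, created from https://ui.endpoints.huggingface.co/ or via huggingface_hub as in the sketch below? The instance values here are placeholders I haven't run, not a known-good configuration:

```python
from huggingface_hub import create_inference_endpoint

# Placeholder values throughout; real vendor/region/instance options
# come from the Inference Endpoints catalog.
endpoint = create_inference_endpoint(
    "mistral-7b-instruct",
    repository="mistralai/Mistral-7B-Instruct-v0.2",
    framework="pytorch",
    task="text-generation",
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x1",           # placeholder size
    instance_type="nvidia-a10g",  # placeholder GPU type
)
endpoint.wait()  # blocks until the endpoint is running
print(endpoint.url)
```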