Inference Providers
Active filters:
vllm
| Model | Task | Params | Downloads | Likes |
|---|---|---|---|---|
| RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8 | Text Generation | 8B | 175k | 42 |
| RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8-dynamic | Text Generation | 8B | 39.7k | 5 |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-FP8-dynamic | Text Generation | 71B | 1.43k | 7 |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-FP8 | Text Generation | 71B | 6.36k | 50 |
| RedHatAI/Meta-Llama-3.1-405B-Instruct-FP8 | Text Generation | 406B | 2.16k | 31 |
| RedHatAI/Meta-Llama-3.1-405B-Instruct-FP8-dynamic | Text Generation | 406B | 5.36k | 15 |
| RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a16 | Text Generation | 3B | 2.19k | 10 |
| RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a8 | Text Generation | 8B | 78.9k | 17 |
| mgoin/Nemotron-4-340B-Base-hf | Text Generation | 341B | 3 | 1 |
| mgoin/Nemotron-4-340B-Base-hf-FP8 | Text Generation | 341B | 94 | 2 |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w8a16 | Text Generation | 19B | 624 | 5 |
| mgoin/Nemotron-4-340B-Instruct-hf | Text Generation | 341B | 7 | 4 |
| mgoin/Nemotron-4-340B-Instruct-hf-FP8 | Text Generation | 341B | 60 | 3 |
| FlorianJc/ghost-8b-beta-vllm-fp8 | Text Generation | 8B | 5 | |
| FlorianJc/Meta-Llama-3.1-8B-Instruct-vllm-fp8 | Text Generation | 8B | 9 | |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w8a8 | Text Generation | 71B | 18k | 21 |
| RedHatAI/Meta-Llama-3.1-8B-FP8 | Text Generation | 8B | 7.29k | 8 |
| RedHatAI/Meta-Llama-3.1-70B-FP8 | Text Generation | 71B | 1.3k | 2 |
| RedHatAI/Meta-Llama-3.1-8B-quantized.w8a16 | Text Generation | 3B | 9 | 1 |
| RedHatAI/Meta-Llama-3.1-8B-quantized.w8a8 | Text Generation | 8B | 1.53k | 4 |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 | Text Generation | 11B | 2.55k | 32 |
| RedHatAI/starcoder2-15b-FP8 | Text Generation | 16B | 18 | |
| RedHatAI/starcoder2-7b-FP8 | Text Generation | 7B | 19 | |
| RedHatAI/starcoder2-3b-FP8 | Text Generation | 3B | 18 | |
| RedHatAI/Meta-Llama-3.1-405B-FP8 | Text Generation | 410B | 33 | |
| bprice9/Palmyra-Medical-70B-FP8 | Text Generation | 71B | 8 | 1 |
| RedHatAI/gemma-2-2b-it-FP8 | | 3B | 1.24k | 1 |
| RedHatAI/Meta-Llama-3.1-405B-Instruct-quantized.w4a16 | Text Generation | 58B | 124 | 12 |
| RedHatAI/gemma-2-9b-it-quantized.w8a16 | Text Generation | 4B | 1.15k | 1 |
| RedHatAI/gemma-2-2b-it-quantized.w8a16 | Text Generation | 2B | 27 | 1 |