<!DOCTYPE html>
<html>
  <head>
    <title>functionary-7b-v1-GGUF (Q5_K)</title>
  </head>
  <body>
    <h1>functionary-7b-v1-GGUF (Q5_K)</h1>
    <p>
      This space serves the GGUF model through an OpenAI-compatible API,
      powered by the
      <a href="https://github.com/abetlen/llama-cpp-python">llama-cpp-python</a>
      package and hosted in Hugging Face Docker Spaces. Comprehensive API
      documentation is included to make integration straightforward.
    </p>
    <ul>
      <li>
        The API endpoint:
        <a href="https://limcheekin-functionary-7b-v1-gguf.hf.space/v1"
          >https://limcheekin-functionary-7b-v1-gguf.hf.space/v1</a
        >
      </li>
      <li>
        The API doc:
        <a href="https://limcheekin-functionary-7b-v1-gguf.hf.space/docs"
          >https://limcheekin-functionary-7b-v1-gguf.hf.space/docs</a
        >
      </li>
    </ul>
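    <p>
      Because the endpoint is OpenAI-compatible, it can be called like any
      chat-completions API. The sketch below uses only the Python standard
      library; the model name shown is an assumption, so check the API doc
      above for the actual value accepted by the server.
    </p>

```python
# Minimal sketch of a chat-completions request to the OpenAI-compatible
# endpoint, using only the Python standard library.
import json
import urllib.request

BASE_URL = "https://limcheekin-functionary-7b-v1-gguf.hf.space/v1"

payload = {
    "model": "functionary-7b-v1",  # assumed model id; see the /docs page
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment to send the request (requires network access):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```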
    <p>
      If you find this space useful, please consider starring it. Stars
      support the application for a community GPU grant, which would improve
      the performance and availability of this space.
    </p>
  </body>
</html>