---
title: FrontEndasPromptEngineeringTest
emoji: 💻
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
models:
  - stabilityai/stablelm-2-zephyr-1_6b
---

Example of running llama.cpp (and, by extension, any simple C++ binary) from Python without pip package dependency issues.
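A minimal sketch of the idea: instead of installing Python bindings via pip, invoke the compiled llama.cpp binary with `subprocess`. The binary path, model filename, and flag values below are assumptions for illustration; adjust them to your build and model.

```python
import subprocess
from pathlib import Path

# Hypothetical paths: point these at your own llama.cpp build and GGUF model.
LLAMA_BIN = Path("./llama.cpp/build/bin/llama-cli")
MODEL_PATH = Path("./models/stablelm-2-zephyr-1_6b.gguf")


def build_command(prompt: str, n_tokens: int = 128) -> list[str]:
    """Assemble the CLI invocation as an argument list (no shell quoting issues)."""
    return [
        str(LLAMA_BIN),
        "-m", str(MODEL_PATH),
        "-p", prompt,
        "-n", str(n_tokens),
    ]


def run_inference(prompt: str) -> str:
    """Run the binary as a subprocess and return its stdout.

    No pip packages are needed; the only dependency is the compiled binary.
    """
    result = subprocess.run(
        build_command(prompt),
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


if __name__ == "__main__":
    print(run_inference("Hello, world"))
```

Because the model runs in a separate process, Python's package environment never conflicts with the C++ build; the same pattern works for any command-line tool.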

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference