|
--- |
|
base_model: STEM-AI-mtl/phi-2-electrical-engineering |
|
datasets: |
|
- STEM-AI-mtl/Electrical-engineering |
|
- garage-bAInd/Open-Platypus |
|
inference: false |
|
language: |
|
- en |
|
license: other |
|
license_link: LICENSE |
|
license_name: stem.ai.mtl |
|
model_creator: mod |
|
model_name: Phi 2 Electrical Engineering |
|
model_type: phi-msft |
|
prompt_template: '{prompt} |
|
' |
|
quantized_by: TheBloke |
|
llamafile_converted_by: Shaikat
|
tags: |
|
- phi-2 |
|
- electrical engineering |
|
- Microsoft |
|
--- |
|
This repo contains the llamafile version of the Phi 2 Electrical Engineering model. The llamafile was created from the GGUF format of the original model's quantized version.
|
|
|
# About llamafile |
|
|
|
llamafile is a new framework that collapses all the complexity of large language models (LLMs) down to a single-file executable (called a "llamafile") that runs locally on most computers, with no installation. The first release of llamafile is a product of Mozilla's innovation group and was developed by [Justine Tunney](https://justine.lol/). In short, per the [introductory post](https://hacks.mozilla.org/2023/11/introducing-llamafile/):
|
|
|
**llamafile lets you turn large language model (LLM) weights into executables.** |
|
|
|
**Say you have a set of LLM weights in the form of a 4GB file (in the commonly-used GGUF format). With llamafile you can transform that 4GB file into a binary that runs on six OSes without needing to be installed.** |
|
|
|
Basically, llamafile lets anyone distribute and run LLMs with a single file. |
|
|
|
In this [GitHub comment](https://github.com/Mozilla-Ocho/llamafile/issues/242#issuecomment-1930700064) you can find how I created a llamafile version from the GGUF version of another model; an outline of the process is sketched below.
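The following is a minimal sketch of that process, based on the upstream llamafile documentation. The filenames here are illustrative assumptions, and it assumes the llamafile release binary and its bundled zipalign tool are available in the working directory:

```
# Start from the llamafile release binary as the base executable
cp llamafile phi-2-electrical-engineering.llamafile

# Write a .args file listing the default arguments, one per line;
# -m names the GGUF file that gets embedded in the next step
cat > .args <<EOF
-m
phi-2-electrical-engineering.Q5_K_M.gguf
EOF

# Embed the GGUF weights and the .args file into the executable
zipalign -j0 phi-2-electrical-engineering.llamafile \
  phi-2-electrical-engineering.Q5_K_M.gguf .args
```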
|
|
|
# Model Description |
|
|
|
## [From the original model card](https://huggingface.co/STEM-AI-mtl/phi-2-electrical-engineering)
|
|
|
These are the adapters from the LoRA fine-tuning of the phi-2 model from Microsoft. It was trained on the STEM-AI-mtl/Electrical-engineering dataset combined with garage-bAInd/Open-Platypus.
|
|
|
**Developed by: STEM.AI** |
|
|
|
Model type: **Q&A and code generation** |
|
|
|
Language(s) (NLP): **English** |
|
|
|
Finetuned from model: **microsoft/phi-2**
|
|
|
Direct Use: **Q&A related to electrical engineering, and Kicad software. Creation of Python code in general, and for Kicad's scripting console.** |
|
|
|
**Refer to microsoft/phi-2 model card for recommended prompt format.** |
|
|
|
## GGUF version |
|
|
|
The original model was quantized by the great [TheBloke](https://huggingface.co/TheBloke/phi-2-electrical-engineering-GGUF).
|
|
|
|
|
As per TheBloke's model card, the recommended file is phi-2-electrical-engineering.Q5_K_M.gguf, quantized with the Q5_K_M method; this GGUF file requires a maximum of 4.50 GB of RAM.
|
|
|
# How to run |
|
|
|
To run the llamafile version on Windows, just rename the file by appending .exe to the end, then run it as you would any other .exe file. For example:
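A minimal sketch in the Windows Command Prompt, assuming the downloaded file is named phi-2-electrical-engineering.llamafile (the actual filename in this repo may differ):

```
rem Append .exe so Windows treats the llamafile as an executable
ren phi-2-electrical-engineering.llamafile phi-2-electrical-engineering.llamafile.exe

rem Launch it like any other program
.\phi-2-electrical-engineering.llamafile.exe
```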
|
|
|
To run it on Linux, execute it like any other binary; just make sure the llamafile is executable first by running chmod. For example:
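Again assuming the illustrative filename from above:

```
# Mark the downloaded file as executable, then run it
chmod +x phi-2-electrical-engineering.llamafile
./phi-2-electrical-engineering.llamafile
```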
|
|
|
That's it.
|
|
|
# Prompt format |
|
|
|
``` |
|
Instruction: <prompt> (without the <>) |
|
|
|
Response: |
|
``` |
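For example, to pass a prompt in this format directly on the command line (a sketch assuming the Linux setup above; -p and -e are the standard llama.cpp prompt flags that llamafile passes through):

```
# -e unescapes the \n sequences; -p supplies the formatted prompt
./phi-2-electrical-engineering.llamafile -e -p "Instruction: What is Ohm's law?\n\nResponse:"
```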
|
|
|
Thank you for using this model. |
|
|
|
|