
L3-MS-Astoria-70b

Now that the cute anime girl has your attention.

Creator: SteelSkull

About L3-MS-Astoria-70b:

L3 = Llama-3

MS = Model Stock

This is my first foray into 70b models, so it's more or less an experiment. Please let me know your thoughts on the model and where there can be improvements.
L3-MS-Astoria-70b combines the strengths of multiple models to deliver a well-rounded, capable assistant. It is aimed at performing general tasks, storytelling, roleplay, and more mature content.
The model stock merging method is used to help the merged model remain focused, tailored, and high-quality.
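
For intuition, here is a rough sketch (my own illustration, not mergekit's actual code) of what model stock does per weight tensor, assuming the interpolation formula t = k*cos(theta) / (1 + (k-1)*cos(theta)) from the Model Stock paper: average the fine-tuned weights, then pull that average back toward the base model by a ratio t derived from the angle between the fine-tunes' weight deltas.

import torch

def model_stock_layer(base, finetuned):
    # Deltas of each fine-tuned weight tensor from the base anchor.
    deltas = [w - base for w in finetuned]
    k = len(deltas)
    # Average pairwise cosine similarity approximates cos(theta)
    # between the fine-tunes' deltas.
    cos_sum, pairs = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            cos_sum += torch.cosine_similarity(
                deltas[i].flatten(), deltas[j].flatten(), dim=0
            ).item()
            pairs += 1
    cos_theta = cos_sum / max(pairs, 1)
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = torch.stack(finetuned).mean(dim=0)
    # Closer angles between fine-tunes => larger t => trust the average more.
    return t * w_avg + (1 - t) * base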

Quants:

(Thanks to @Mradermacher! Please send them likes and follows!)

L3-MS-Astoria-70b-GGUF (GGUFs)
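
If you are running the GGUF quants locally, here is a minimal sketch with llama-cpp-python (the quant filename below is hypothetical; use whichever quant level you downloaded):

from llama_cpp import Llama

# n_gpu_layers=-1 offloads all layers to GPU if your VRAM allows it.
llm = Llama(
    model_path="L3-MS-Astoria-70b.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,
    n_gpu_layers=-1,
)
out = llm("Tell me a short story about a lighthouse keeper.", max_tokens=256)
print(out["choices"][0]["text"])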

Config:

MODEL_NAME = "L3-MS-Astoria-70b"
yaml_config = """
base_model: failspy/llama-3-70B-Instruct-abliterated  
merge_method: model_stock
dtype: bfloat16
models:
  - model: migtissera/Tess-2.0-Llama-3-70B-v0.2
  - model: abacusai/Llama-3-Giraffe-70B
  - model: NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt
"""

Source Model Details:

migtissera/Tess-2.0-Llama-3-70B-v0.2:
Tess, short for Tesoro (Treasure in Italian), is a general-purpose Large Language Model series. Tess-2.0-Llama-3-70B-v0.2 was trained on the meta-llama/Meta-Llama-3-70B base. The change between v0.1 and this version, v0.2, is that v0.2 has undergone an additional step of uncensoring.

abacusai/Llama-3-Giraffe-70B:
A general model trained on 1B tokens, with context extended up to 128k.

NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt:
Llama 3 trained on NeverSleep's RP datasets. NeverSleep tried to strike a balance between ERP and RP: not too horny, but just enough.
They also added some non-RP data to make the model less dumb overall. It should work out to roughly a 40%/60% ratio of non-RP to RP+ERP data.

Base model failspy/llama-3-70B-Instruct-abliterated:
This is meta-llama/Llama-3-70B-Instruct with orthogonalized bfloat16 safetensor weights, generated with the methodology described in the preview paper/blog post 'Refusal in LLMs is mediated by a single direction', which I encourage you to read to understand more.
TL;DR: this model has had certain weights manipulated to "inhibit" its ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request; it may still lecture you about ethics/safety, etc. It is otherwise tuned the same as the original 70B Instruct model, just with the strongest refusal direction orthogonalized out.
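
To illustrate the idea (a sketch of the technique from the blog post, not failspy's exact code), orthogonalizing a weight matrix against a single refusal direction looks roughly like this, assuming a direction vector d in the residual stream:

import torch

def orthogonalize_weight(W, d):
    # W: (d_model, d_in) matrix that writes into the residual stream.
    # d: (d_model,) refusal direction, found by contrasting activations
    #    on harmful vs. harmless prompts, per the blog post.
    d = d / d.norm()  # unit-normalize the direction
    # W' = W - d (d^T W): no input can now produce output along d.
    return W - torch.outer(d, d) @ W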

I've had a few people ask about donations, so here's a link:
