---
library_name: transformers
pipeline_tag: text-generation
language:
  - bg
  - ca
  - code
  - cs
  - cy
  - da
  - de
  - el
  - en
  - es
  - et
  - eu
  - fi
  - fr
  - ga
  - gl
  - hr
  - hu
  - it
  - lt
  - lv
  - mt
  - nl
  - nn
  - "no"
  - oc
  - pl
  - pt
  - ro
  - ru
  - sh
  - sk
  - sl
  - sr
  - sv
  - uk
datasets:
  - oscar-corpus/colossal-oscar-1.0
  - HuggingFaceFW/fineweb-edu
  - joelniklaus/eurlex_resources
  - joelniklaus/legal-mc4
  - projecte-aina/CATalog
  - UFRGS/brwac
  - community-datasets/hrwac
  - danish-foundation-models/danish-gigaword
  - HiTZ/euscrawl
  - PleIAs/French-PD-Newspapers
  - PleIAs/French-PD-Books
  - AI-team-UoA/greek_legal_code
  - HiTZ/latxa-corpus-v1.1
  - allenai/peS2o
  - pile-of-law/pile-of-law
  - PORTULAN/parlamento-pt
  - hoskinson-center/proof-pile
  - togethercomputer/RedPajama-Data-1T
  - bigcode/starcoderdata
  - bjoernp/tagesschau-2018-2023
  - EleutherAI/the_pile_deduplicated
tags:
  - llama-cpp
  - ALIA-40b
  - gguf
  - Q4_0
  - 40b
  - ALIA
  - 4-bit
  - BSC-LT
  - code
  - math
  - chat
  - roleplay
  - text-generation
  - safetensors
  - nlp
base_model: BSC-LT/ALIA-40b
---

# roleplaiapp/ALIA-40b-Q4_0-GGUF

- **Repo:** roleplaiapp/ALIA-40b-Q4_0-GGUF
- **Original Model:** ALIA-40b
- **Organization:** BSC-LT
- **Quantized File:** `alia-40b-q4_0.gguf`
- **Quantization:** GGUF
- **Quantization Method:** Q4_0
- **Use Imatrix:** False
- **Split Model:** False
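
The quantized file listed above can be fetched directly with the `huggingface_hub` client. A minimal sketch, using only the repo id and filename given above; everything else is illustrative:

```python
from huggingface_hub import hf_hub_download

# Download the single Q4_0 GGUF file from this repo into the local
# Hugging Face cache and return its local path.
gguf_path = hf_hub_download(
    repo_id="roleplaiapp/ALIA-40b-Q4_0-GGUF",
    filename="alia-40b-q4_0.gguf",
)
print(gguf_path)
```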

## Overview

This is a GGUF Q4_0 quantized version of ALIA-40b.
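
As a GGUF file, it can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using the llama-cpp-python bindings, assuming the file has been downloaded as shown above; the context size, GPU-offload setting, and prompt are illustrative only:

```python
from llama_cpp import Llama

# Load the Q4_0 GGUF; n_gpu_layers=-1 offloads all layers to the GPU when one is available.
llm = Llama(
    model_path="alia-40b-q4_0.gguf",  # or the path returned by hf_hub_download above
    n_ctx=4096,
    n_gpu_layers=-1,
)

# Plain text completion (no chat template applied in this sketch).
out = llm("La intel·ligència artificial és", max_tokens=64)
print(out["choices"][0]["text"])
```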

## Quantization By

I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai