
Quantization made by Richard Erkhov.


Nidum-Limitless-Gemma-2B - GGUF

Original model description:

license: apache-2.0
tags:
  - legal
  - chemistry
  - medical
  - text-generation-inference
  - art
  - finance
pipeline_tag: text-generation

Nidum-Limitless-Gemma-2B LLM

Welcome to the repository for Nidum-Limitless-Gemma-2B, an advanced language model that provides unrestricted and versatile responses across a wide range of topics. Unlike conventional models, Nidum-Limitless-Gemma-2B is designed to handle any type of question and deliver comprehensive answers without content restrictions.

Key Features:

  • Unrestricted Responses: Address any query with detailed, unrestricted responses, providing a broad spectrum of information and insights.
  • Versatility: Capable of engaging with a diverse range of topics, from complex scientific questions to casual conversation.
  • Advanced Understanding: Leverages a vast knowledge base to deliver contextually relevant and accurate outputs across various domains.
  • Customizability: Adaptable to specific user needs and preferences for different types of interactions.

Use Cases:

  • Open-Ended Q&A
  • Creative Writing and Ideation
  • Research Assistance
  • Educational and Informational Queries
  • Casual Conversations and Entertainment

How to Use:

To get started with Nidum-Limitless-Gemma-2B, you can use the following sample code for testing:

import torch
from transformers import pipeline

# Load the model as a chat-style text-generation pipeline.
pipe = pipeline(
    "text-generation",
    model="nidum/Nidum-Limitless-Gemma-2B",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" on a Mac, or "cpu" if no GPU is available
)

# Chat-style input: a list of {"role", "content"} messages.
messages = [
    {"role": "user", "content": "who are you"},
]

outputs = pipe(messages, max_new_tokens=256)

# The pipeline returns the full conversation; the last message is the
# model's reply.
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)

Release Date:

Nidum-Limitless-Gemma-2B is now officially available. Explore its capabilities and experience the freedom of unrestricted responses.

Contributing:

We welcome contributions to enhance the model or expand its functionalities. Details on how to contribute will be available in the coming updates.

Quantized Model Versions

To accommodate different hardware configurations and performance needs, Nidum-Limitless-Gemma-2B-GGUF is available in multiple quantized versions:

Model Version | Description
Nidum-Limitless-Gemma-2B-Q2_K.gguf | Optimized for minimal memory usage with lower precision. Suitable for resource-constrained environments.
Nidum-Limitless-Gemma-2B-Q4_K_M.gguf | Balances performance and precision, offering faster inference with moderate memory usage.
Nidum-Limitless-Gemma-2B-Q8_0.gguf | Provides higher precision with increased memory usage, suitable for tasks requiring more accuracy.
Nidum-Limitless-Gemma-2B-F16.gguf | Full 16-bit floating-point precision for maximum accuracy, ideal for high-end GPUs.

It is available here: https://huggingface.co/nidum/Nidum-Limitless-Gemma-2B-GGUF
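As a rough guide to the memory trade-offs in the table above, file size can be estimated from the parameter count (2.51B) and the bits stored per weight. The bits-per-weight figures below are approximations we chose for illustration; real GGUF quantization schemes such as Q2_K and Q4_K_M mix block scales and differently quantized layers, so actual file sizes will differ somewhat.

```python
# Back-of-envelope GGUF file-size estimate: params * bits_per_weight / 8 bytes.
PARAMS = 2.51e9  # Nidum-Limitless-Gemma-2B parameter count

# Approximate effective bits per weight (illustrative values, not exact).
approx_bits_per_weight = {
    "Q2_K": 2.6,
    "Q4_K_M": 4.8,
    "Q8_0": 8.5,
    "F16": 16.0,
}

def estimated_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Estimated file size in gigabytes (1 GB = 1e9 bytes)."""
    return params * bits_per_weight / 8 / 1e9

for name, bpw in approx_bits_per_weight.items():
    print(f"{name:>7}: ~{estimated_size_gb(bpw):.1f} GB")
```

This is only a sizing heuristic for picking a quantization level; actual memory use at inference time also includes the KV cache and runtime overhead.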

Contact:

For any inquiries or further information, please contact us at info@nidum.ai.


Dive into limitless possibilities with Nidum-Limitless-Gemma-2B!

Special thanks to @cognitivecomputations for the inspiration and for scouting the best datasets we could round up to make a rockstar model for you.

Format: GGUF · Model size: 2.51B params · Architecture: gemma
