Regex-AI-Llama-3.2-1B

This model is a fine-tuned version of meta-llama/Llama-3.2-1B-Instruct specifically trained on the phongo/RegEx dataset for generating regular expressions. It aims to provide accurate and efficient regex solutions based on natural language descriptions of the desired pattern.

Model Description

  • Architecture: The model is built on Llama-3.2-1B-Instruct, a 1B-parameter language model developed by Meta, and further specialized for regex generation through fine-tuning.
  • Training Data: The model was trained on the phongo/RegEx dataset, which contains pairs of natural language descriptions and corresponding regular expressions (an illustrative pair is sketched after this list).
  • Fine-tuning: The base Llama model was fine-tuned on the regex dataset using a supervised learning approach. Specific training details (e.g., hyperparameters, training duration) are not available; standard supervised fine-tuning practices are assumed.
  • Intended Use: This model is intended to assist users in generating regular expressions. It is particularly helpful for users who may be less familiar with regex syntax or need help translating a complex textual description into a working regex pattern.
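
For illustration only, a training pair of the kind described above might look like the following; the actual phongo/RegEx schema and field names are not documented here and are assumed.

# Hypothetical description/regex pair; field names are illustrative, not taken
# from the phongo/RegEx dataset card.
example_pair = {
    "description": "Match a US ZIP code, with an optional 4-digit extension",
    "regex": r"^\d{5}(-\d{4})?$",
}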

Intended uses & limitations

This model is intended for generating regular expressions from natural language descriptions. While it strives for accuracy, it can occasionally produce incorrect or suboptimal patterns, so always test the generated regex thoroughly. Users are responsible for how the generated regexes are used.

Limitations:

  • Complexity: The model may struggle with extremely complex or nuanced regex patterns.
  • Ambiguity: Ambiguous natural language descriptions can lead to inaccurate regexes. Be as precise and clear as possible in your prompts.
  • Edge Cases: The model might not cover all possible edge cases in regex syntax.
  • Security: Always validate and sanitize generated regexes before using them in production environments to prevent potential security vulnerabilities such as ReDoS (catastrophic backtracking) attacks; see the sketch after this list.
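
Because generated patterns can be syntactically invalid or prone to catastrophic backtracking, it helps to wrap them in a basic check before use. Below is a minimal sketch of such a check in Python; the test pattern, input string, and one-second limit are illustrative assumptions, not part of this model card. It compiles the pattern first, then evaluates it in a separate process so a runaway match can be terminated.

import re
import multiprocessing

def _run_match(pattern, text, queue):
    # Runs in a child process so a runaway match can be killed from outside.
    queue.put(re.search(pattern, text) is not None)

def safe_match(pattern, text, time_limit=1.0):
    """Return True/False for a match, or None if the pattern is invalid or times out."""
    try:
        re.compile(pattern)  # reject syntactically invalid patterns up front
    except re.error:
        return None
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_run_match, args=(pattern, text, queue))
    proc.start()
    proc.join(time_limit)
    if proc.is_alive():  # still running after the limit: likely catastrophic backtracking
        proc.terminate()
        proc.join()
        return None
    return queue.get()

if __name__ == "__main__":
    # Illustrative check of a hypothetical model-generated pattern against a known input.
    print(safe_match(r"\[.+?\]\(.+?\)", "[website](www.example.com)"))  # True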

How to use

You can use this model with the following code:

from llama_cpp import Llama

# Download a quantized GGUF build of the model from the Hugging Face Hub
llm = Llama.from_pretrained(
    repo_id="mishl/Regex-AI-Llama-3.2-1B",
    filename="unsloth.Q4_K_M.gguf",  # or "unsloth.F16.gguf" for the 16-bit build
)

# Describe the pattern you need in plain language
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Create regex for masked links like this [website](www.example.com)"
        }
    ]
)

print(response["choices"][0]["message"]["content"])
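
The Q4_K_M file is a 4-bit quantization that trades some output fidelity for a smaller download and lower memory use; the F16 file keeps the full 16-bit weights. The reply itself is a chat message rather than a bare pattern, so the regex usually has to be extracted from the returned text before it can be used. A minimal sketch of that step follows, assuming the pattern sits on the last non-empty line of the reply, possibly wrapped in backticks; the model's actual output format is not documented, so adjust as needed.

import re

reply = response["choices"][0]["message"]["content"]

# Assumption: the pattern is on the last non-empty line, possibly wrapped in
# backticks or a markdown fence. Adjust if the model formats replies differently.
candidate = [line for line in reply.splitlines() if line.strip()][-1].strip("` ")

pattern = re.compile(candidate)  # raises re.error if the extracted text is not a valid regex
print(pattern.search("See [website](www.example.com) for details"))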