---
language:
- en
license: apache-2.0
tags:
- n8n
- workflow
- code-generation
- qwen2.5
- lora
- workflow-automation
- typescript
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
pipeline_tag: text-generation
model-index:
- name: n8n-workflow-generator
  results:
  - task:
      type: text-generation
      name: Workflow Generation
    metrics:
    - type: accuracy
      value: 91.8
      name: Overall Test Score
---
# n8n Workflow Generator v1.0

A fine-tuned Qwen2.5-Coder-1.5B model that generates n8n workflows using a TypeScript DSL.
## Performance (Comprehensive Testing)

Overall Score: 91.8% (24 diverse test cases)
Detailed Results by Category:

| Category | Score | Tests |
|---|---|---|
| Simple Webhook | 92.2% | 3 |
| Conditional Routing | 93.3% | 3 |
| Scheduled Tasks | 95.6% | 3 |
| Form Processing | 93.3% | 2 |
| Multi-Service Integration | 83.3% | 3 |
| Data Processing | 93.3% | 3 |
| Error Handling | 88.9% | 3 |
| Complex Multi-Step | 91.7% | 2 |
| Manual & Email Triggers | 96.7% | 2 |
Test Score Breakdown:
- Basic Checks: 98% (syntax, structure, node types)
- Structural Checks: 83% (connections, flow logic)
- N8N-Specific: 97% (valid nodes, DSL conventions)
Grade Distribution:
- A (Excellent): 83% of test cases
- B (Good): 13% of test cases
- D (Poor): 4% of test cases
## What It Does
Converts natural language descriptions into production-ready n8n workflows:

Input: "Create a webhook that sends data to Slack"

Output:

```typescript
const workflow = new Workflow('Webhook to Slack');

const webhook = workflow.add('n8n-nodes-base.webhook', {
  path: '/data',
  method: 'POST'
});

const slack = workflow.add('n8n-nodes-base.slack', {
  channel: '#general',
  text: '={{ $json.message }}'
});

webhook.to(slack);
```
## Quick Start

### Option 1: Using the LoRA Adapter (Recommended)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto"
)

# Load fine-tuned adapter
model = PeftModel.from_pretrained(base_model, "Nishan30/n8n-workflow-generator")
tokenizer = AutoTokenizer.from_pretrained("Nishan30/n8n-workflow-generator")

# System prompt
system_prompt = """You are an expert n8n workflow generator. Given a user's request, you generate clean, functional TypeScript code using the @n8n-generator/core DSL.

Your output should:
- Only contain the code, no explanations
- Use the Workflow class from @n8n-generator/core
- Use workflow.add() to create nodes
- Use .to() or workflow.connect() for connections
- Be ready to compile directly to n8n JSON"""

# Generate
user_request = "Create a webhook that sends data to Slack"
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_request}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.3,
    do_sample=True,
    top_p=0.9,
    repetition_penalty=1.1
)

result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
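Note that `outputs[0]` contains the prompt tokens followed by the newly generated tokens, so the decoded string repeats the chat template. If you want only the generated DSL code, a minimal sketch (reusing the `inputs` and `outputs` variables from the example above) is to slice off the prompt first:

```python
# Keep only the tokens generated after the prompt, then decode just the workflow code
generated_tokens = outputs[0][inputs["input_ids"].shape[1]:]
workflow_code = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(workflow_code)
```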
### Option 2: Using the Transformers Pipeline
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Nishan30/n8n-workflow-generator",
    device_map="auto"
)

prompt = "Create a scheduled workflow that fetches data daily and sends to Slack"
result = generator(prompt, max_new_tokens=512, temperature=0.3, do_sample=True)
print(result[0]['generated_text'])
```
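The model was fine-tuned with the system prompt shown in Option 1, so the pipeline generally produces better DSL when given chat-style messages rather than a bare prompt. With a recent transformers release that accepts chat messages in the text-generation pipeline, a minimal sketch (reusing `system_prompt` from Option 1) looks like:

```python
messages = [
    {"role": "system", "content": system_prompt},  # system prompt from Option 1
    {"role": "user", "content": "Create a scheduled workflow that fetches data daily and sends to Slack"}
]

# The pipeline applies the model's chat template before generating
result = generator(messages, max_new_tokens=512, temperature=0.3, do_sample=True)
print(result[0]["generated_text"][-1]["content"])  # last message is the assistant's reply
```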
## Supported Workflow Patterns

### Triggers
- `webhook` - HTTP endpoints
- `scheduleTrigger` - Cron-based scheduling
- `manualTrigger` - Manual execution
- `formTrigger` - Form submissions
- `emailTrigger` - Email-based triggers
### Actions & Integrations
- `slack`, `discord`, `telegram` - Messaging
- `gmail`, `email` - Email sending
- `httpRequest` - API calls
- `googleSheets`, `airtable`, `notion` - Databases
- And more...
### Data Processing
- `if`, `switch` - Conditional logic
- `set`, `filter`, `merge` - Data transformation
- `code` - Custom JavaScript/Python
- `stopAndError` - Error handling
## Training Details

### Dataset
- Total Examples: 2,736 workflows
- Training Set: 2,462 examples
- Validation Set: 274 examples
- Pattern Coverage: 7 major workflow patterns
- Quality: Curated from production n8n workflows
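The dataset's on-disk format is not documented here. Purely as an illustration, one chat-style training record could combine the system prompt, a user request, and the target DSL shown elsewhere in this card; the `record` layout below is a hypothetical JSONL-style schema, not the actual one:

```python
# Hypothetical layout of one training example; the real dataset schema is not published here
record = {
    "messages": [
        {"role": "system", "content": system_prompt},  # system prompt from the Quick Start section
        {"role": "user", "content": "Create a webhook that sends data to Slack"},
        {"role": "assistant", "content": "const workflow = new Workflow('Webhook to Slack');\n..."},
    ]
}
```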
### Training Configuration
- Base Model: Qwen/Qwen2.5-Coder-1.5B-Instruct
- Method: LoRA (Low-Rank Adaptation)
- LoRA Rank: 16
- LoRA Alpha: 16
- Learning Rate: 2e-4
- Batch Size: 2 (effective: 8 with gradient accumulation)
- Epochs: 10
- Hardware: NVIDIA Tesla T4 GPU (16GB)
- Framework: Transformers + Unsloth
### Optimization
- 4-bit quantization for memory efficiency
- Gradient checkpointing
- Flash Attention 2
- Early stopping based on validation loss
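For readers who want to reproduce a comparable setup, the sketch below wires the hyperparameters and optimizations listed above into an Unsloth + TRL training loop. It is a minimal illustration, not the actual training script: `max_seq_length`, `target_modules`, the pre-formatted `train_dataset` (with a rendered `text` field), and `output_dir` are assumptions not stated in this card, and evaluation/early stopping are omitted for brevity.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit (memory optimization listed above)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-Coder-1.5B-Instruct",
    max_seq_length=2048,   # assumption: sequence length is not stated in this card
    load_in_4bit=True,
)

# Attach LoRA adapters with the rank/alpha from Training Configuration
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption
    use_gradient_checkpointing="unsloth",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,        # 2,462 chat-formatted examples, prepared elsewhere
    dataset_text_field="text",          # assumption: examples rendered to a single text field
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,  # effective batch size 8
        learning_rate=2e-4,
        num_train_epochs=10,
        fp16=True,                      # the T4 does not support bfloat16
        output_dir="outputs",
    ),
)
trainer.train()
```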
## Example Workflows

### 1. Simple Webhook to Slack
User: "Create a webhook that posts to Slack"
Model: [Generates complete TypeScript DSL code]

### 2. Scheduled Data Sync
User: "Daily workflow that fetches API data and stores in database"
Model: [Generates schedule trigger + HTTP request + database storage]

### 3. Form Processing
User: "Contact form that validates and sends email"
Model: [Generates form trigger + validation + email sending]

### 4. Conditional Routing
User: "Route high-priority items to #urgent, others to #general"
Model: [Generates webhook + if condition + dual Slack outputs]
## Try It Online
Hugging Face Space: [Coming Soon]
## Benchmark Comparison
| Model | Size | Accuracy | Speed | Use Case |
|---|---|---|---|---|
| n8n-workflow-generator | 1.5B | 91.8% | Fast | Production-ready |
| GPT-3.5 (baseline) | 175B | ~85% | Slow | General purpose |
| CodeLlama-7B | 7B | ~88% | Medium | Code generation |
## Advanced Usage

### Custom System Prompt
```python
custom_prompt = """You are a workflow expert. Generate n8n workflows with:
- Error handling for all HTTP requests
- Descriptive node names
- Production-ready configurations
"""
```
### Batch Generation
```python
requests = [
    "webhook to slack",
    "daily email report",
    "form to database"
]

for req in requests:
    workflow = generate_workflow(req)  # see the generate_workflow() sketch below
    print(workflow)
```
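`generate_workflow()` is not shown elsewhere on this card; a minimal sketch of such a helper, wrapping the model and tokenizer loaded in the Quick Start example, could look like this:

```python
def generate_workflow(request: str) -> str:
    """Generate n8n DSL code for a single natural-language request (hypothetical helper)."""
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": request},
    ]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.3, do_sample=True, top_p=0.9)
    # Decode only the newly generated tokens (skip the prompt)
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```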
### Integration with n8n
```python
import json
from n8n_generator import compile_to_json

# Generate DSL (generate_workflow() is the hypothetical helper sketched above)
dsl_code = generate_workflow("Create a webhook that sends data to Slack")

# Compile to n8n JSON
workflow_json = compile_to_json(dsl_code)

# Import to n8n
# POST to http://your-n8n-instance/api/v1/workflows
```
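The final import step goes through n8n's REST API. A minimal sketch using `requests` follows; the instance URL is a placeholder, the API key is read from a hypothetical `N8N_API_KEY` environment variable, and your n8n version's API and auth setup may differ:

```python
import os
import requests

# n8n's public REST API authenticates with an API key header
response = requests.post(
    "http://your-n8n-instance/api/v1/workflows",
    headers={"X-N8N-API-KEY": os.environ["N8N_API_KEY"]},
    json=workflow_json,  # compiled workflow from compile_to_json()
    timeout=30,
)
response.raise_for_status()
print("Imported workflow id:", response.json().get("id"))
```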
## Limitations
- Complex Logic: May struggle with very complex multi-branch workflows (>10 nodes)
- Custom Nodes: Only supports built-in n8n nodes
- Edge Cases: Occasionally generates invalid node names (~8% of cases)
Mitigation: Add a post-processing validation layer (see documentation); a minimal example is sketched below.
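As an illustration of such a validation layer (a sketch, not the validator referenced in the documentation), the snippet below extracts every node type used in a generated DSL string and flags anything outside a whitelist of known n8n node names; `KNOWN_NODE_TYPES` is a small assumed subset you would extend:

```python
import re

# Assumed subset of valid node types; extend with everything your n8n instance provides
KNOWN_NODE_TYPES = {
    "n8n-nodes-base.webhook",
    "n8n-nodes-base.scheduleTrigger",
    "n8n-nodes-base.httpRequest",
    "n8n-nodes-base.slack",
    "n8n-nodes-base.if",
}

def find_invalid_nodes(dsl_code: str) -> list[str]:
    """Return node types referenced via workflow.add() that are not in the whitelist."""
    used = re.findall(r"workflow\.add\('([^']+)'", dsl_code)
    return [node for node in used if node not in KNOWN_NODE_TYPES]

invalid = find_invalid_nodes(dsl_code)
if invalid:
    print("Unknown node types, regenerate or fix manually:", invalid)
```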
## Roadmap
- v1.1: Expand to 7B model for better accuracy (target: 95%+)
- v1.2: Add support for custom n8n nodes
- v1.3: Multi-language support (Python, JavaScript execution nodes)
- v2.0: Fine-tune on user feedback data
## License
Apache 2.0
## Acknowledgments
Built with:
- Qwen2.5-Coder by Alibaba Cloud
- Hugging Face Transformers
- PEFT for LoRA
- Unsloth for training optimization
## Contact
- Issues: GitHub Issues
- Discussions: Hugging Face Discussions
Star this model if you find it useful!
Last updated: December 2024