AnalogSeeker: An Open-source Foundation Language Model for Analog Circuit Design
This model, `AnalogSeeker_2025_07_10_3`, is a fine-tuned version of Qwen2.5-32B-Instruct. It was presented in the paper AnalogSeeker: An Open-source Foundation Language Model for Analog Circuit Design.
- Project Page: https://huggingface.co/analogllm/analogseeker
- GitHub Repository: https://github.com/analogllm/AnalogSeeker
Model description
AnalogSeeker is an open-source foundation language model specifically developed for analog circuit design. Its primary objective is to integrate specialized domain knowledge and provide design assistance in this complex field. To address the inherent scarcity of data in analog circuit design, AnalogSeeker employs a unique corpus collection strategy: high-quality, accessible textbooks across relevant subfields are systematically curated and cleaned into a textual domain corpus.
The model introduces a granular domain knowledge distillation method in which the raw, unlabeled domain corpus is decomposed into typical, granular learning nodes. A multi-agent framework then distills the implicit knowledge embedded in the unstructured text into question-answer pairs with detailed reasoning processes, yielding a fine-grained, learnable dataset for fine-tuning. AnalogSeeker also explores and shares training methods, establishing a fine-tuning-centric training paradigm and implementing a neighborhood self-constrained supervised fine-tuning algorithm that improves training outcomes by constraining the perturbation magnitude between the model's output distributions.
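The exact formulation of the neighborhood self-constrained fine-tuning objective is not reproduced in this card. As a rough illustration of the general idea of constraining how far the fine-tuned output distribution is allowed to drift, here is a minimal sketch that adds a KL penalty against a frozen reference copy of the model to the standard SFT loss. The function name, the `beta` weight, and the choice of KL direction are assumptions for this example, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def constrained_sft_loss(model, ref_model, input_ids, attention_mask, labels, beta=0.1):
    """Illustrative only: standard SFT cross-entropy plus a KL penalty that limits
    the drift of the fine-tuned output distribution away from a frozen reference
    model. This is NOT the paper's exact algorithm."""
    out = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
    with torch.no_grad():
        ref_logits = ref_model(input_ids=input_ids, attention_mask=attention_mask).logits

    log_p = F.log_softmax(out.logits, dim=-1)    # fine-tuned distribution
    log_q = F.log_softmax(ref_logits, dim=-1)    # frozen reference distribution
    kl_per_token = (log_p.exp() * (log_p - log_q)).sum(dim=-1)

    mask = (labels != -100).float()              # only penalize supervised positions
    kl = (kl_per_token * mask).sum() / mask.sum().clamp(min=1.0)
    return out.loss + beta * kl
```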
Intended uses & limitations
Intended Uses: AnalogSeeker is intended for research use in the field of analog circuit design. It aims to:
- Integrate domain knowledge for analog circuits.
- Provide design assistance and answer domain-specific questions.
- Support tasks such as operational amplifier design.
- Serve as a foundation for further research and development in analog circuit LLMs.
Limitations: While AnalogSeeker demonstrates strong performance on analog circuit knowledge evaluation benchmarks, it is specialized for this domain. Its applicability and performance in other, unrelated domains may be limited. Users should be aware that, like all language models, it may occasionally generate incorrect or nonsensical information, especially for highly novel or unrepresented concepts within its training data.
Training and evaluation data
Training Data: The model was trained on a meticulously collected corpus based on the domain knowledge framework of analog circuits. This corpus consists of high-quality, accessible textbooks across relevant subfields, systematically curated and cleaned. A granular domain knowledge distillation method was applied, where raw text was decomposed into learning nodes, and a multi-agent framework distilled implicit knowledge into question-answer data pairs with detailed reasoning for fine-tuning.
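The distilled dataset's schema is not published in this card, so the record below is purely hypothetical; it only illustrates the kind of learning-node-to-QA structure the description implies, and all field names are invented for this example.

```python
# Hypothetical example of a distilled training record; field names are invented.
distilled_record = {
    "learning_node": "Two-stage op-amp: Miller compensation",
    "question": "Why does adding a Miller compensation capacitor improve the "
                "phase margin of a two-stage operational amplifier?",
    "reasoning": "The compensation capacitor splits the poles: the dominant pole "
                 "moves to a lower frequency while the output pole is pushed "
                 "higher, so the loop gain crosses 0 dB well before the second pole.",
    "answer": "It performs pole splitting, pushing the non-dominant pole beyond "
              "the unity-gain frequency and thereby increasing the phase margin.",
}
```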
Evaluation Data and Performance: AnalogSeeker was evaluated on AMSBench-TQA, an analog circuit knowledge evaluation benchmark, where it achieved 85.04% accuracy, a 15.67-percentage-point improvement over the original Qwen2.5-32B-Instruct and competitive performance with mainstream commercial models.
Sample Usage
You can use this model with the Hugging Face `transformers` library:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "analogllm/AnalogSeeker_2025_07_10_3"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

# Example chat interaction (Qwen2.5 Instruct format)
messages = [
    {"role": "user", "content": "What is the primary function of a common-emitter amplifier in analog circuits?"}
]

# Apply the chat template and prepare inputs
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors='pt').to(model.device)

# Configure generation parameters
generation_config = GenerationConfig(
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    repetition_penalty=1.05,
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|im_end|>")]  # Ensure it stops correctly
)

# Generate response
outputs = model.generate(
    inputs=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    generation_config=generation_config
)

# Decode and print the response (skipping the prompt tokens)
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)

# Another example: design assistance
messages_design = [
    {"role": "user", "content": "Explain the key considerations for designing a stable feedback amplifier."}
]
text_design = tokenizer.apply_chat_template(
    messages_design,
    tokenize=False,
    add_generation_prompt=True
)
inputs_design = tokenizer(text_design, return_tensors='pt').to(model.device)
outputs_design = model.generate(
    inputs=inputs_design.input_ids,
    attention_mask=inputs_design.attention_mask,
    generation_config=generation_config
)
response_design = tokenizer.decode(outputs_design[0][inputs_design.input_ids.shape[1]:], skip_special_tokens=True)
print(response_design)
```
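For interactive use, responses can also be streamed token by token with `transformers`' `TextStreamer`; this snippet simply reuses the `tokenizer`, `model`, `inputs`, and `generation_config` defined above.

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the prompt itself.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    inputs=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    generation_config=generation_config,
    streamer=streamer,
)
```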
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
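The original training script is not included in this card. The sketch below only shows how the hyperparameters listed above would map onto Hugging Face `TrainingArguments`; the output directory and the use of bf16 are assumptions.

```python
from transformers import TrainingArguments

# Approximate mapping of the listed hyperparameters; not the original script.
training_args = TrainingArguments(
    output_dir="analogseeker-sft",     # assumed
    learning_rate=2e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,     # 1 x 8 GPUs x 8 accumulation = 64 effective batch
    num_train_epochs=1.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    bf16=True,                         # assumed, matching the bfloat16 inference dtype
)
```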
Training results
```json
{
  "epoch": 1.0,
  "num_input_tokens_seen": 113180672,
  "total_flos": 759612479373312.0,
  "train_loss": 1.1406613362056237,
  "train_runtime": 17617.7573,
  "train_samples_per_second": 0.784,
  "train_steps_per_second": 0.012
}
```
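As an informal sanity check (derived from the numbers above, not stated in the paper), the effective batch size and approximate step and sample counts can be recovered from the hyperparameters and runtime:

```python
# Back-of-envelope check of the reported training statistics.
effective_batch = 1 * 8 * 8            # per-device batch x GPUs x grad accumulation = 64

runtime_s = 17617.7573
approx_steps = 0.012 * runtime_s       # ~211 optimizer steps
approx_samples = 0.784 * runtime_s     # ~13.8k training samples in one epoch
print(effective_batch, round(approx_steps), round(approx_samples))
```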
Framework versions
- Transformers 4.52.4
- PyTorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
Citation
If you find AnalogSeeker useful in your research, please consider citing the original paper:
```bibtex
@article{analogseeker2025,
  title={AnalogSeeker: An Open-source Foundation Language Model for Analog Circuit Design},
  author={AnalogSeeker Team},
  journal={arXiv preprint arXiv:2508.10409},
  year={2025},
  url={https://huggingface.co/papers/2508.10409},
}
```