AI Lab - Sofia University


AI & ML interests

None defined yet.

Recent Activity

melaniab updated a dataset about 1 month ago: sofia-uni/toxic-data-bg
melaniab updated a dataset about 1 month ago: sofia-uni/toxic-onto-bg
melaniab updated a dataset about 2 months ago: sofia-uni/toxic-onto-bg

sofia-uni's activity

s-emanuilov posted an update about 21 hours ago
Just released a small collection of models for query expansion to improve the retrieval stage in search systems.

The collection includes:
— Fine-tuned Qwen2.5 and Llama3.2 models
— GGUF quantized versions
— Dataset for training query expanders

Could be useful to someone:
s-emanuilov/query-expansion-678f2742c37d702adfe445e8
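For illustration, here's a minimal sketch of how one of these models could slot into a retrieval pipeline. The repo name and prompt format below are placeholders, not the actual ones; check the collection for the real model IDs and prompt templates:

```python
# Minimal sketch of query expansion at retrieval time. The repo name below
# is a placeholder - pick an actual model from the s-emanuilov/query-expansion
# collection, and adjust the prompt to match its training format.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "s-emanuilov/query-expansion-model"  # hypothetical repo name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

query = "open source vector database"
prompt = f"Expand this search query with related terms: {query}"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
expansions = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

# Retrieve with the original query plus the generated expansions,
# then merge or re-rank the two result lists.
print(expansions)
```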
s-emanuilov posted an update 6 days ago
A new benchmark (DPAB-α) has been released that evaluates LLM function calling in both Pythonic and JSON approaches.

It shows that Pythonic function calling often outperforms traditional JSON-based methods, especially for complex multi-step tasks.

Key findings from benchmarks:
— Claude 3.5 Sonnet leads with 87% on Pythonic vs 45% on JSON
— Smaller models show impressive results (Dria-Agent-α-3B: 72% Pythonic)
— Even larger models like DeepSeek V3 (685B) show significant gaps (63% Pythonic vs 33% JSON)

If you're building or using LLM agents, these results suggest that how you implement function calling can materially affect performance; it may be worth reconsidering JSON-only approaches.

The benchmark: https://github.com/firstbatchxyz/function-calling-eval
Blog post: https://huggingface.co/blog/andthattoo/dpab-a
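
To make the distinction concrete, here's a hand-written illustration (not taken from the benchmark repo) of the same two-step task in both styles; the tool names are hypothetical:

```python
# Illustrative contrast between the two function-calling styles.

# JSON style: the model emits one structured call at a time, so chaining
# the two steps requires a second round trip through the model.
json_style = {
    "name": "get_weather",
    "arguments": {"city": "Sofia"},
}

# Pythonic style: the model emits executable code, so the intermediate
# result can feed the next call within a single generation.
pythonic_style = """
forecast = get_weather(city="Sofia")
send_email(to="team@example.com", body=f"Tomorrow: {forecast}")
"""
```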
s-emanuilov posted an update 9 days ago
New paper from Salesforce AI Research. The authors found that joint training on continual pre-training (CPT) and instruction-tuning (IT) data with a 50/50 split achieves better results than sequential training. Their 8B-parameter model outperformed larger 70B models on financial tasks.

Down-sampling CPT data to match IT data size improved performance on CFA Challenge exams from 34.44% to 55.56%, while maintaining strong general knowledge capabilities as shown by comparable or better performance on general knowledge benchmarks like AI2-ARC and MMLU.

Technical implementation involved two-stage training: Group 1 utilized 3.84B tokens from web and basic texts, followed by Group 2, which used 1.66B tokens from domain-specific books. Their preference alignment method used generative reward models to identify and correct reasoning errors rather than just rating full solutions.
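
As a rough sketch of that recipe (the file names are placeholders, and the paper's exact sampling procedure may differ), the 50/50 mix could look like this with the `datasets` library:

```python
# Rough sketch of the 50/50 joint-training mix described above, using the
# Hugging Face `datasets` library; dataset files are placeholders.
from datasets import load_dataset, concatenate_datasets

cpt = load_dataset("json", data_files="cpt_corpus.jsonl", split="train")  # pre-training text
it = load_dataset("json", data_files="it_corpus.jsonl", split="train")   # instruction pairs

# Down-sample the (much larger) CPT corpus to the size of the IT set,
# which is the change reported to lift CFA Challenge accuracy.
cpt_small = cpt.shuffle(seed=42).select(range(len(it)))

# Train jointly on the shuffled 50/50 mixture instead of sequential stages.
mixed = concatenate_datasets([cpt_small, it]).shuffle(seed=42)
```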

Evaluation on 91,872 samples across 31 tasks showed their Llama-Fin model achieving 91.13% accuracy on sentiment analysis (FPB) and 95.32% on FiQA SA, exceeding GPT-4's performance of 82.16% and 68.51%, respectively, on these benchmarks.

It could be useful for many financial companies looking to build AI pipelines.

Interesting read, but neither the model nor the GitHub repo is accessible yet. The key insight for AI builders: with small models, it is entirely possible to outperform much bigger ones.

https://arxiv.org/abs/2501.04961
s-emanuilov posted an update 20 days ago
Hey HF community! 👋

Excited to share Monkt - a tool I built to solve the eternal headache of processing documents for ML/AI pipelines.

What it does: Converts PDFs, Word, PowerPoint, Excel, Web pages or raw HTML into clean Markdown or structured JSON.

Great for:
✔ LLM training dataset preparation;
✔ Knowledge base construction;
✔ Research paper processing;
✔ Technical documentation management.

It has API access for integration into ML pipelines.
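
A purely hypothetical sketch of what such an integration could look like; the endpoint, auth scheme, and response shape below are guesses, so consult the Monkt docs for the real API:

```python
# Hypothetical sketch of API-based document conversion. Monkt's actual
# endpoint, authentication, and response format may differ - check the docs.
import requests

resp = requests.post(
    "https://api.monkt.com/v1/convert",           # placeholder endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    files={"file": open("paper.pdf", "rb")},
    data={"output_format": "markdown"},           # or "json" for structured output
    timeout=120,
)
resp.raise_for_status()
print(resp.text)
```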

Check it out at https://monkt.com/ if you want to save time on document processing infrastructure.

Looking forward to your feedback!
melaniab updated a Space 3 months ago