---
title: Optillm
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 4.36.1
app_file: app.py
pinned: false
license: apache-2.0
---
References
- Chain-of-Thought Reasoning Without Prompting
- Re-Reading Improves Reasoning in Large Language Models
- In-Context Principle Learning from Mistakes
- Planning In Natural Language Improves LLM Search For Code Generation
- Self-Consistency Improves Chain of Thought Reasoning in Language Models
- Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers
- Mixture-of-Agents Enhances Large Language Model Capabilities
- Prover-Verifier Games improve legibility of LLM outputs
- Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning
- Unsupervised Evaluation of Code LLMs with Round-Trip Correctness
- Patched MOA: optimizing inference for diverse software development tasks
- Patched RTC: evaluating LLMs for diverse software development tasks