Update app.py
app.py
CHANGED
@@ -73,7 +73,7 @@ with demo:
     """<div style="text-align: center;"><h1> ⭐ Multilingual <span style='color: #e6b800;'>Code</span> Models <span style='color: #e6b800;'>Evaluation</span></h1></div>\
     <br>\
     <p>Inspired from the <a href="https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard">🤗 Open LLM Leaderboard</a> and <a href="https://huggingface.co/spaces/optimum/llm-perf-leaderboard">🤗 Open LLM-Perf Leaderboard 🏋️</a>, we compare performance of base multilingual code generation models on <a href="https://huggingface.co/datasets/openai_humaneval">HumanEval</a> benchmark and <a href="https://huggingface.co/datasets/nuprl/MultiPL-E">MultiPL-E</a>. We also measure throughput and provide\
-    information about the models. We only compare pre-trained multilingual code models, that people can start from as base models for their trainings.</p>"""
+    information about the models. We only compare open pre-trained multilingual code models, that people can start from as base models for their trainings.</p>"""
     )

     with gr.Tabs(elem_classes="tab-buttons") as tabs:
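For context, the edited string sits inside a Gradio Blocks layout: the hunk header shows `with demo:` and the string is the sole argument of a call whose name falls outside the hunk (only the closing `)` is visible). Below is a minimal sketch of the surrounding structure, assuming the header is rendered with `gr.HTML`; the call name and the tab label are assumptions, not taken from the diff:

import gradio as gr

demo = gr.Blocks()
with demo:
    # Assumed call: the hunk shows only the string and the closing
    # parenthesis, so gr.HTML is a guess (gr.Markdown would also work).
    gr.HTML(
        """<div style="text-align: center;"><h1> ⭐ Multilingual <span style='color: #e6b800;'>Code</span> Models <span style='color: #e6b800;'>Evaluation</span></h1></div>\
        <br>\
        <p>Intro text as in the diff above.</p>"""
    )

    # Context line from the hunk: the leaderboard content lives in tabs.
    with gr.Tabs(elem_classes="tab-buttons") as tabs:
        with gr.TabItem("Leaderboard"):  # hypothetical tab name
            ...

demo.launch()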