<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>miniG Benchmarks</title>
<style>
body {
background-color: #111;
font-family: Arial, sans-serif;
color: #fff;
display: flex;
justify-content: center;
align-items: center;
flex-direction: column;
height: 100vh;
margin: 0;
}
h1 {
font-size: 36px;
margin-bottom: 10px;
}
h2 {
font-size: 18px;
font-weight: normal;
margin-bottom: 30px;
color: #ccc;
}
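/* border-collapse: separate (with zero spacing) is deliberate:
   border-radius has no effect under border-collapse: collapse, and
   overflow: hidden clips square cell backgrounds to the rounded corners. */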
table {
width: 90%;
border-collapse: separate;
border-spacing: 0;
background-color: #1b1b1b;
border-radius: 12px;
overflow: hidden;
margin: 20px 0;
table-layout: fixed;
}
th, td {
text-align: center;
padding: 12px;
border: 1px solid #333;
vertical-align: middle;
}
th {
background-color: #222;
font-weight: bold;
font-size: 14px;
}
td {
background-color: #1b1b1b;
font-size: 14px;
overflow-wrap: break-word;
}
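/* The three .highlight-* classes outline one column in blue:
   side borders on every cell in the column, plus a top border on its
   header cell and a bottom border on its last cell to close the box. */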
.highlight-column {
border-left: 3px solid #0066ff;
border-right: 3px solid #0066ff;
}
.highlight-header {
border-top: 3px solid #0066ff;
border-top-left-radius: 12px;
border-top-right-radius: 12px;
}
.highlight-footer {
border-bottom: 3px solid #0066ff;
border-bottom-left-radius: 12px;
border-bottom-right-radius: 12px;
}
.bold {
font-weight: 900; /* Extra bold */
}
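/* Round the table's four corner cells so their backgrounds follow the
   12px radius rather than showing as square corners under the clip. */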
tr:first-child th:first-child {
border-top-left-radius: 12px;
}
tr:first-child th:last-child {
border-top-right-radius: 12px;
}
tr:last-child td:first-child {
border-bottom-left-radius: 12px;
}
tr:last-child td:last-child {
border-bottom-right-radius: 12px;
}
.footnote {
font-size: 12px;
color: #888;
text-align: left;
max-width: 90%;
margin-top: 20px;
}
</style>
</head>
<body>
<h1>田忌赛马 (Tian Ji's Horse Racing)</h1>
<h2>Goodhart's Law on Benchmarks</h2>
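<!-- Every cell in the miniG column carries .highlight-column; the header
     cell adds .highlight-header and the last cell .highlight-footer to
     close the blue outline around the column. -->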
<table>
<tr>
<th>Capability</th>
<th>Description</th>
<th class="highlight-column highlight-header">miniG</th>
<th>Gemini-Flash</th>
<th>GLM-4-9B-Chat</th>
<th>Llama 3.1 8B Instruct</th>
</tr>
<tr>
<td class="bold">MMLU</td>
<td>Multiple-choice questions across 57 subjects<br>(incl. STEM, humanities, and others)</td>
<td class="highlight-column bold">85.45</td>
<td>78.9</td>
<td>72.4</td>
<td>69.4</td>
</tr>
<tr>
<td class="bold">IFEval</td>
<td>Evaluation of instruction-following<br>using verifiable prompts</td>
<td class="highlight-column">74.22</td>
<td>-</td>
<td>69</td>
<td class="bold">80.4</td>
</tr>
<tr>
<td class="bold">GSM8K</td>
<td>Grade-school math word problems<br>(shot count noted per model)</td>
<td class="highlight-column">75.89 (5-shot)</td>
<td class="bold">86.2 (11-shot)</td>
<td>79.6</td>
<td>84.5 (8-shot CoT)</td>
</tr>
<tr>
<td class="bold">HumanEval</td>
<td>Python code generation on a held-out dataset<br>(0-shot)</td>
<td class="highlight-column bold">79.88</td>
<td>74.3</td>
<td>71.8</td>
<td>72.6</td>
</tr>
<tr>
<td class="bold">GPQA</td>
<td>Graduate-level, "Google-proof" questions<br>in biology, physics, and chemistry</td>
<td class="highlight-column">37.37</td>
<td class="bold">39.5</td>
<td>34.3 (base)</td>
<td>34.2</td>
</tr>
<tr>
<td class="bold">Context Window</td>
<td>Maximum context length<br>the model can handle</td>
<td class="highlight-column bold">1M</td>
<td class="bold">1M</td>
<td>128K</td>
<td>128K</td>
</tr>
<tr>
<td class="bold">Input</td>
<td>Supported input modalities</td>
<td class="highlight-column highlight-footer">Text, image<br>(single model)</td>
<td>Text, image, audio, video</td>
<td>Text only</td>
<td>Text only</td>
</tr>
</table>
<div class="footnote">
1. miniG is a 14B-parameter model initialized from the weights of the 9B-parameter glm-4-9b-chat-1m model. It was further pre-trained on a curated corpus of 20B tokens while retaining its long-context capabilities, then fine-tuned on a dataset of 120M+ conversation entries synthesized from that corpus via cross-page clustering (similar in spirit to RAG). Additionally, miniG underwent two-stage multimodal training for single-image input, with the second stage reinitializing 5B parameters of a Vision Transformer from glm-4v-9b for Locked-Image Tuning.<br>
2. miniG's outputs are formatted similarly to Gemini 1.5 Flash's, but the model was not trained on data generated by the Gemini models.
</div>
</body>
</html>