Use HF papers #2
by qgallouedec (HF staff) - opened

app/src/index.html  +1 -1
CHANGED
@@ -55,7 +55,7 @@
<h2 id="1591384e-bcac-809f-b7ce-d414b4c0df4e" class="">Strategies for test-time compute scaling</h2><p id="1591384e-bcac-8021-a784-d3340af0adb4" class="">There are two main strategies for scaling test-time compute:</p><ul id="1591384e-bcac-8060-8262-d07e6f3cb300" class="bulleted-list"><li style="list-style-type:disc"><strong>Self-Refinement: </strong>Models iteratively refine their own outputs or “thoughts” by identifying and correcting errors in subsequent iterations. While effective on some tasks, this strategy usually requires models to have built-in mechanisms for self-refinement, which can limit its applicability.</li></ul><ul id="1591384e-bcac-8020-a30a-e82811cb36cc" class="bulleted-list"><li style="list-style-type:disc"><strong>Search Against a Verifier: </strong>This approach focuses on generating multiple candidate answers and using verifier to select the best one. A verifier can be anything from a hard-coded heuristic to a learned reward model, but for the purposes of this blog post we will focus on learned verifiers. It includes techniques such as Best-of-N sampling and tree search. Search strategies are more flexible and can adapt to the difficulty of the problem, although their performance is constrained by the quality of the verifier.</li></ul><p id="1591384e-bcac-801d-82e3-d2dc50cf2b24" class="">In this blog post, we’ll concentrate on search-based methods as they represent a practical and scalable solution for test-time compute optimization. In particular, we’ll examine the three strategies illustrated below:</p><figure id="15b1384e-bcac-80df-a57e-e08ddb80ec8c" class="image"><a href="https://huggingface.co/datasets/HuggingFaceH4/blogpost-images/resolve/main/search-strategies.png"><img style="width:700px" src="https://huggingface.co/datasets/HuggingFaceH4/blogpost-images/resolve/main/search-strategies.png"/></a></figure><ul id="15b1384e-bcac-807b-a75f-e0b421bd6ee3" class="bulleted-list"><li style="list-style-type:disc"><strong>Best-of-N: </strong>Generate multiple responses per problem and assign scores to each candidate answer, typically using a reward model. Then select the answer with the highest reward (or a weighted variant discussed later). This approach emphasizes answer quality over frequency.</li></ul><ul id="15b1384e-bcac-800c-af0a-d4e4df5a003a" class="bulleted-list"><li style="list-style-type:disc"><strong>Beam search: </strong>A systematic search method that explores the solution space, often combined with a <em><a href="https://huggingface.co/papers/2211.14275">process reward model (PRM)</a></em> to optimise both the sampling and evaluation of intermediate steps in problem-solving. Unlike conventional reward models that produce a single score on the final answer, PRMs provide a <em>sequence </em>of scores, one for each step of the reasoning process. This ability to provide fine-grained feedback makes PRMs a natural fit for search methods with LLMs.</li></ul><ul id="15b1384e-bcac-80c9-91e2-e013fee74ec6" class="bulleted-list"><li style="list-style-type:disc"><strong>Diverse verifier tree search (DVTS): </strong>An extension of beam search we developed that splits the initial beams into independent subtrees, which are then expanded greedily using a PRM. This method improves solution diversity and overall performance, particularly with larger test-time compute budgets.</li></ul><p id="15a1384e-bcac-803c-bc89-ed15f18eafdc" class="">With an understanding of the key search strategies, let’s move on to how we evaluated them in practice.</p>
-
<h2 id="15a1384e-bcac-80e2-8506-e130c2c69407" class="">Experimental setup</h2><figure id="15c1384e-bcac-8096-9f01-f2c0d388ed1d" class="image"><a href="https://huggingface.co/datasets/HuggingFaceH4/blogpost-images/resolve/main/system.png"><img style="width:750px" src="https://huggingface.co/datasets/HuggingFaceH4/blogpost-images/resolve/main/system.png"/></a></figure><p id="15a1384e-bcac-802a-b133-e279ff948bd0" class="">As illustrated in the diagram above, our experimental setup involves a pipeline with the following steps:</p><ol type="1" id="15c1384e-bcac-806e-b372-df95fdad14c8" class="numbered-list" start="1"><li>We begin by feeding a math problem to an LLM, which generates <style>@import url('https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.16.9/katex.min.css')</style><span data-token-index="0" contenteditable="false" class="notion-text-equation-token" style="user-select:all;-webkit-user-select:all;-moz-user-select:all"><span></span><span><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>N</mi></mrow><annotation encoding="application/x-tex">N</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.10903em;">N</span></span></span></span></span><span></span></span> <em><strong>partial solutions</strong></em>, e.g. an intermediate step in a derivation.</li></ol><ol type="1" id="15c1384e-bcac-8045-9a74-d1c2ca472aa3" class="numbered-list" start="2"><li>Each step is scored by a PRM, which estimates the probability of each step to eventually reach the correct final answer. <ol type="a" id="15c1384e-bcac-80c5-84ea-f8f6c00b8e65" class="numbered-list" start="1"><li>The steps and PRM scores are then used by a given search strategy to select which partial solutions should be further explored to generate the next round of intermediate steps.</li></ol></li></ol><ol type="1" id="15c1384e-bcac-803a-9772-ce27e91553c0" class="numbered-list" start="3"><li>Once the search strategy terminates, the final candidate solutions are ranked by the PRM to produce the final answer.</li></ol><p id="15c1384e-bcac-80af-a31e-fd3330874674" class="">To compare various search strategies, we used the following open models and datasets:</p><ul id="15a1384e-bcac-80d3-afcf-e5741e06845d" class="bulleted-list"><li style="list-style-type:disc"><strong>Model:</strong> We used <code>meta-llama/Llama-3.2-1B-Instruct</code> as our primary model for scaling test-time compute. With 1B parameters, its lightweight nature enables fast iterations, and its unsaturated performance on math benchmarks makes it an ideal choice for highlighting the benefits of scaling.</li></ul><ul id="15a1384e-bcac-807f-81fb-cd43b2273acf" class="bulleted-list"><li style="list-style-type:disc"><strong>Process reward model: </strong>To guide our search strategies, we used <code>RLHFlow/Llama3.1-8B-PRM-Deepseek-Data</code>, an 8B reward model that has been trained using <em>process supervision</em>. Process supervision is a training approach where models receive feedback on each step of their reasoning process, not just the final outcome. 
We picked this model since it belongs to the same model family as our policy and gave better results than other PRMs like <a href="https://huggingface.co/peiyi9979/math-shepherd-mistral-7b-prm">Math-Shepherd</a> we tested in this weight class.</li></ul><ul id="15a1384e-bcac-80da-8c95-d6ca809696a8" class="bulleted-list"><li style="list-style-type:disc"><strong>Dataset: </strong>We evaluated on the<a href="https://huggingface.co/datasets/HuggingFaceH4/MATH-500"> MATH-500 subset</a> of the <a href="https://
+
<h2 id="15a1384e-bcac-80e2-8506-e130c2c69407" class="">Experimental setup</h2><figure id="15c1384e-bcac-8096-9f01-f2c0d388ed1d" class="image"><a href="https://huggingface.co/datasets/HuggingFaceH4/blogpost-images/resolve/main/system.png"><img style="width:750px" src="https://huggingface.co/datasets/HuggingFaceH4/blogpost-images/resolve/main/system.png"/></a></figure><p id="15a1384e-bcac-802a-b133-e279ff948bd0" class="">As illustrated in the diagram above, our experimental setup involves a pipeline with the following steps:</p><ol type="1" id="15c1384e-bcac-806e-b372-df95fdad14c8" class="numbered-list" start="1"><li>We begin by feeding a math problem to an LLM, which generates <style>@import url('https://cdnjs.cloudflare.com/ajax/libs/KaTeX/0.16.9/katex.min.css')</style><span data-token-index="0" contenteditable="false" class="notion-text-equation-token" style="user-select:all;-webkit-user-select:all;-moz-user-select:all"><span></span><span><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>N</mi></mrow><annotation encoding="application/x-tex">N</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.10903em;">N</span></span></span></span></span><span></span></span> <em><strong>partial solutions</strong></em>, e.g. an intermediate step in a derivation.</li></ol><ol type="1" id="15c1384e-bcac-8045-9a74-d1c2ca472aa3" class="numbered-list" start="2"><li>Each step is scored by a PRM, which estimates the probability of each step to eventually reach the correct final answer. <ol type="a" id="15c1384e-bcac-80c5-84ea-f8f6c00b8e65" class="numbered-list" start="1"><li>The steps and PRM scores are then used by a given search strategy to select which partial solutions should be further explored to generate the next round of intermediate steps.</li></ol></li></ol><ol type="1" id="15c1384e-bcac-803a-9772-ce27e91553c0" class="numbered-list" start="3"><li>Once the search strategy terminates, the final candidate solutions are ranked by the PRM to produce the final answer.</li></ol><p id="15c1384e-bcac-80af-a31e-fd3330874674" class="">To compare various search strategies, we used the following open models and datasets:</p><ul id="15a1384e-bcac-80d3-afcf-e5741e06845d" class="bulleted-list"><li style="list-style-type:disc"><strong>Model:</strong> We used <code>meta-llama/Llama-3.2-1B-Instruct</code> as our primary model for scaling test-time compute. With 1B parameters, its lightweight nature enables fast iterations, and its unsaturated performance on math benchmarks makes it an ideal choice for highlighting the benefits of scaling.</li></ul><ul id="15a1384e-bcac-807f-81fb-cd43b2273acf" class="bulleted-list"><li style="list-style-type:disc"><strong>Process reward model: </strong>To guide our search strategies, we used <code>RLHFlow/Llama3.1-8B-PRM-Deepseek-Data</code>, an 8B reward model that has been trained using <em>process supervision</em>. Process supervision is a training approach where models receive feedback on each step of their reasoning process, not just the final outcome. 
We picked this model since it belongs to the same model family as our policy and gave better results than other PRMs like <a href="https://huggingface.co/peiyi9979/math-shepherd-mistral-7b-prm">Math-Shepherd</a> we tested in this weight class.</li></ul><ul id="15a1384e-bcac-80da-8c95-d6ca809696a8" class="bulleted-list"><li style="list-style-type:disc"><strong>Dataset: </strong>We evaluated on the<a href="https://huggingface.co/datasets/HuggingFaceH4/MATH-500"> MATH-500 subset</a> of the <a href="https://huggingface.co/papers/2103.03874">MATH benchmark</a>, a dataset released by OpenAI as part of their <a href="https://huggingface.co/papers/2305.20050">research</a> on process supervision. These math problems span seven subjects and are challenging for both humans and most LLMs. Take a look at the dataset viewer below to get a taste for the problem difficulty!
<iframe
src="https://huggingface.co/datasets/HuggingFaceH4/MATH-500/embed/viewer/default/test"
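The three-step pipeline in the "Experimental setup" section (sample partial solutions, score them with a PRM, expand the most promising ones, then rank the finished candidates) can likewise be summarised as a small beam-search-style loop. This is a minimal sketch under simplifying assumptions: `expand` and `prm_score` are hypothetical helpers standing in for the policy model and the PRM, and the completion check simply looks for a boxed final answer.

```python
# Minimal sketch of a PRM-guided search loop; hypothetical helpers, not the post's code.
from typing import Callable, List

def is_complete(steps: List[str]) -> bool:
    # Assumption: a solution is finished once a step contains a boxed final answer.
    return bool(steps) and "\\boxed" in steps[-1]

def prm_guided_search(
    problem: str,
    expand: Callable[[str, List[str], int], List[str]],  # sample k candidate next steps
    prm_score: Callable[[str, List[str]], float],        # estimate of reaching a correct answer
    beam_width: int = 4,
    n_samples: int = 16,
    max_steps: int = 10,
) -> List[str]:
    # Step 1: sample N candidate first steps (partial solutions) for the problem.
    candidates = [[step] for step in expand(problem, [], n_samples)]
    for _ in range(max_steps):
        # Step 2: score every partial solution with the PRM and keep the best ones.
        candidates.sort(key=lambda s: prm_score(problem, s), reverse=True)
        kept = candidates[:beam_width]
        finished = [s for s in kept if is_complete(s)]
        active = [s for s in kept if not is_complete(s)]
        if not active:
            candidates = finished
            break
        # Step 2a: extend each unfinished partial solution with sampled next steps.
        candidates = finished + [
            s + [next_step]
            for s in active
            for next_step in expand(problem, s, n_samples // beam_width)
        ]
    # Step 3: rank the final candidates with the PRM to produce the answer.
    return max(candidates, key=lambda s: prm_score(problem, s))
```

DVTS, as described in the section, would instead split the initial candidates into independent subtrees and expand each one greedily with the PRM to keep the final set of solutions diverse.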