| doc_id | contents | metadata |
|---|---|---|
8242b4b9-14be-44fe-9928-3b828d8769de | # Codemind: A Framework To Challenge Large Language Models For Code Reasoning
## 3.3. Evaluating Code Reasoning
We measure a model's code-reasoning performance on a given program with the _Correct Reasoning Score_ (_CRS_), which is 1 if the model reasons about the code correctly and 0 otherwise. We also introduce the _Correct Reasoning Rate_ (_CRR_), a collective metric that measures how well a given LLM reasons across multiple programs in a benchmark. We calculate CRR for a set of $m$ programs in benchmark $P$ as:
$$CRR(P)=\frac{\sum\limits_{i=1}^{m}[CRS(p_{i}\in P)=1]}{m}$$ | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
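As an illustration, a minimal sketch of the CRR computation described above; the per-program CRS values are assumed to come from an upstream evaluation harness:

```python
def crr(crs_scores):
    """Correct Reasoning Rate: fraction of programs with CRS == 1.

    crs_scores: list of 0/1 Correct Reasoning Scores, one per program
    in the benchmark (assumed to be produced elsewhere).
    """
    if not crs_scores:
        raise ValueError("benchmark must contain at least one program")
    return sum(1 for s in crs_scores if s == 1) / len(crs_scores)

# Example: 3 of 4 programs reasoned about correctly -> CRR = 0.75
print(crr([1, 0, 1, 1]))
```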
54899ab7-060e-4cbf-8b50-76f765f2dc92 | # Codemind: A Framework To Challenge Large Language Models For Code Reasoning
## 4. Experimental Setup
Our study includes nine LLMs and 5395 programs in Java and Python programming languages from five programming datasets. We explain the details of LLMs and program selection below.
Subject LLMs. We chose nine pre-trained or instruction-tuned models, covering both general-purpose and Code LLMs. Our choice was constrained by computing resources, so we selected models with fewer than 20B parameters that outperform the rest on programming tasks. Our subject LLMs are GPT-4 (OpenAI, 2023b), GPT-3.5 (OpenAI, 2023a), Llama 2 (13B) (Touvron et al., 2023), Mistral (Jiang et al., 2023), CodeLlama (13B, instruction-tuned) (Roziere et al., 2023), StarCoder (15.5B) (Li et al., 2023), WizardCoder (15B, instruction-tuned) (Xu et al., 2023), MagicCoder (7B, instruction-tuned) (Wei et al., 2023), and DeepSeekCoder (6.7B) (Bi et al., 2024). We followed the best practices and customized the prompt templates for each model
(all prompts are publicly available for further investigation (CodeMind, 2024)). Except for the GPT models, we set the temperature to zero to ensure the reproducibility of the results. Our code is open source, so users can apply CodeMind to other models and temperatures.
Subject Programs. Our criteria for selecting subject programs were the existence of test data (inputs and corresponding expected output) and implementations of the same program in multiple programming languages (to investigate its impact on code reasoning). From several existing benchmarks (Wang et al., 2022; Athiwaratkun et al., 2022; Chen et al., 2021; Liu et al., 2023; Gu et al., 2024; Zheng et al., 2023; Cassano et al., 2022; Jimenez et al., 2023; Du et al., 2023; Odena et al., 2021; Puri et al., 2021; Ahmad et al.,
2021), we chose the programs in HumanEval (Chen et al., 2021), MBPP (Odena et al., 2021), CodeNet (Puri et al., 2021), Avatar (Ahmad et al., 2021), and CruxEval (Gu et al | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0654ca4b-a4f6-4792-ba03-395507df8ee3 | # Codemind: A Framework To Challenge Large Language Models For Code Reasoning
## 4. Experimental Setup
2022; Chen et al., 2021; Liu et al., 2023; Gu et al., 2024; Zheng et al., 2023; Cassano et al., 2022; Jimenez et al., 2023; Du et al., 2023; Odena et al., 2021; Puri et al., 2021; Ahmad et al.,
2021), we chose the programs in HumanEval (Chen et al., 2021), MBPP (Odena et al., 2021), CodeNet (Puri et al., 2021), Avatar (Ahmad et al., 2021), and CruxEval (Gu et al., 2024). We chose the Java and Python versions of the programs as they are among the most widely used programming languages. HumanEval and MBPP are well-known benchmarks for code synthesis. CodeNet and Avatar are code translation benchmarks. CRUXEval is a benchmark of relatively simple Python programs generated by CodeLlama (34B) to evaluate the input- and output-prediction abilities of LLMs. Figure 2 shows the complexity distribution of the programs in terms of *Cyclomatic Complexity*, CC (Gill & Kemerer,
1991), and *Lines of Code (LoC)*. CC measures the number of independent execution paths in the program control flow graph (CFG). The metric is computed for a class as $CC = E - N + 2P$, where $E$ and $N$ are the number of edges and nodes in the CFG, respectively, and $P$ is the number of methods in the class. In general, a higher CC indicates a more complex program. For code reasoning tasks, the model has to reason about which execution path to take for a given input in order to predict the output, so a higher number of independent paths makes it unlikely for the model to succeed by chance. CC may be correlated with the number of lines in a program, but more lines do not necessarily imply a higher CC. For example, a program with 10 lines and no conditional or loop constructs has only one execution path, while a program with 8 lines and two nested conditional statements has 3 or 4 execution paths, depending on the conditional predicates. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
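To make the formula concrete, here is a minimal sketch that computes CC from edge and node counts; the example CFG of a single method with one if/else branch is an illustrative assumption, not taken from the paper:

```python
def cyclomatic_complexity(num_edges, num_nodes, num_methods):
    """CC = E - N + 2P for a class-level control flow graph."""
    return num_edges - num_nodes + 2 * num_methods

# A single method whose CFG has one if/else: 4 nodes (entry, then-branch,
# else-branch, exit) and 4 edges gives CC = 4 - 4 + 2*1 = 2, i.e., two
# independent execution paths.
print(cyclomatic_complexity(num_edges=4, num_nodes=4, num_methods=1))
```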
77978a46-bcd8-4359-aeaa-2ec36ae671eb | # Codemind: A Framework To Challenge Large Language Models For Code Reasoning
## 5. Llm Evaluation On Ier
To evaluate the performance of LLMs on the IER task, we prompted them under two settings: direct answering and CoT. For direct answering, we prompted each model to predict the output for given inputs. Under the CoT setup, we first instruct the models to simulate the execution step by step by predicting the output value after execution of
Dataset, Programming Language | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
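As an illustration of the two settings, a minimal sketch of how such prompts could be assembled; the exact wording and the in-context example are assumptions, since the paper's actual prompt templates are released separately (CodeMind, 2024):

```python
def build_ier_prompt(code, test_input, use_cot, in_context_example):
    """Assemble an IER prompt under the direct-answering or CoT setting.

    The template text below is hypothetical; the real CodeMind prompts
    are published with the framework.
    """
    if use_cot:
        instruction = ("Simulate the execution step by step, stating the "
                       "values after each statement, then give the final output.")
    else:
        instruction = "Predict the output of the code for the given input."
    return (f"{in_context_example}\n\n"
            f"{instruction}\n\nCode:\n{code}\n\nInput: {test_input}\nOutput:")

# Example usage with a toy program (hypothetical):
print(build_ier_prompt("print(int(input()) + 1)", "41", use_cot=True,
                       in_context_example="<one worked example here>"))
```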
4ca7dcce-d3ef-44a9-a17c-212f44a0fda4 | # Subjects
General LLMs: GPT-4, GPT-3.5, Llama 2, Mistral. Code LLMs: CodeLlama, DeepSeekCoder, MagicCoder, StarCoder, WizardCoder.
| Dataset | Language | # Subjects | GPT-4 | GPT-3.5 | Llama 2 | Mistral | CodeLlama | DeepSeekCoder | MagicCoder | StarCoder | WizardCoder |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MBPP | Python | 408 | 80.88% | 71.32% | 45.59% | 31.37% | 42.40% | 57.84% | 59.80% | 43.63% | 46.08% |
| HumanEval | Python | 162 | 79.01% | 64.20% | 30.86% | 32.72% | 45.06% | 41.98% | 52.47% | 38.89% | 40.12% |
| CruxEval | Python | 800 | 80.50% | 65.13% | 25.38% | 34.13% | 37.75% | 44.38% | 46.50% | 35.50% | 35.88% |
| CodeNet | Python | 1914 | 70.43% | 49.06% | 18.97% | 17.35% | 27.95% | 26.65% | 33.28% | 26.28% | 24.87% |
| CodeNet | Java | 1939 | 71.17% | 51.93% | 23.99% | 18.15% | 28.52% | 32.13% | 36.46% | 29.34% | 29.35% |
| Avatar | Python | 86 | 52.33% | 39.53% | 24.42% | 16.28% | 23.26% | 18.60% | 24.42% | 19.77% | 24.42% |
| Avatar | Java | 86 | 48.84% | 34.88% | 23.26% | 11.63% | 27.91% | 23.26% | 24.42% | 13.95% | 13.95% |
| Total | Java and Python | 5395 | 72.60% | 54.24% | 24.26% | 21.54% | 30.40% | 33.85% | 38.68% | 30.14% | 29.99% |
each statement. We then ask the model to predict the output for given inputs. In both settings, the prompt contains one in-context example for two purposes: introducing the IER task and instructing the response formatting. Given that IER only requires an arbitrary piece of code and a corresponding ground-truth pair of $\langle \hat{i}, \hat{o} \rangle$ ( | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
cf955e91-4ef1-40b7-8985-0eea837a0fdb | # Subjects
Total (Java and Python, 5395 programs): GPT-4 72.60%, GPT-3.5 54.24%, Llama 2 24.26%, Mistral 21.54%, CodeLlama 30.40%, DeepSeekCoder 33.85%, MagicCoder 38.68%, StarCoder 30.14%, WizardCoder 29.99%.
each statement. We then ask the model to predict the output for given inputs. In both settings, the prompt contains one in-context example for two purposes: introducing the IER task and instructing the response formatting. Given that IER only requires an arbitrary piece of code and a corresponding ground-truth pair of $\langle \hat{i}, \hat{o} \rangle$ (§3.1), we prompted the LLMs using all 5395 subject programs in this experiment.
Table 1 shows the results of this experiment with CoT prompting. From these results, we observe that:
- GPT models outperform others on the IER task, with large margins of 33.92% (GPT-4) and 15.56% (GPT-3.5) over the best open-source model. Among the open-source models, except for the Avatar dataset, MagicCoder outperforms the others with an average margin of 4.83%.
- On the datasets with samples in both Java and Python, all the models experience a performance drop (average drop of 2.91% in CodeNet and 2.33% in Avatar). This is likely because Java enforces a stricter syntax and typing system than Python, making code execution reasoning harder.
- Compared to direct answering, CoT prompting, under which the models articulate the execution process verbally before predicting the output, results in a 5.24% improvement in the IER performance of the models on average. However, the less-than-ideal accuracy of (open-source) models, even with CoT prompting, demands a fundamental change.
- Moving down the table, the models face greater challenges in IER, i.e., reasoning about the execution of CodeNet and Avatar programs, compared to MBPP, HumanEval, and CRUXEval. One potential reason is the complexity of such programs, as demonstrated in Figure 2.
A detailed breakdown of the models' performance (Figure 3) shows a *strong negative correlation*³ between CRR and CC, confirming that models struggle more in IER for more complex code. At the same time, some models, namely Llama | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
3ff991e4-0260-4d6a-bbfb-76d74e92abca | # Subjects
T prompting, demands a fundamental change.
- Moving down the table, the models face greater challenges in IER, i.e., reasoning about the execution of CodeNet and Avatar programs, compared to MBPP, HumanEval, and CRUXEval. One potential reason is the complexity of such programs, as demonstrated in Figure 2.
A detailed breakdown of the models' performance (Figure 3) shows a *strong negative correlation*³ between CRR and CC, confirming that models struggle more in IER for more complex code. At the same time, some models, namely Llama 2, CodeLlama, MagicCoder, StarCoder, and WizardCoder, achieve lower performance on CRUXEval than on HumanEval, even though CRUXEval programs are less complex in terms of both LoC and CC. This calls for a better understanding of what factors other than CC impact the CRR performance of the models (§8). | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
337ca6e3-dc63-4d7e-ad20-52f71b7347c1 | # Subjects
## 6. Llm Evaluation On Der
We seek to address the critical question of how effectively a model can reason about the correct programs it has itself generated. This evaluation requires us to align the code synthesis and code reasoning tasks. Our pipeline for evaluating DER consists of three steps: (1) following best practices, we prompted the subject LLMs for code synthesis; (2) we ran the synthesized programs against the existing tests; and (3) for the programs that passed the tests, we prompted the model for code execution reasoning on the chosen test input, in CoT style. Note that we also removed the comments from the synthesized code for fairness. We excluded the programs from CRUXEval, CodeNet, and Avatar,
3Spearman's Rank Order Correlation (ROC) (Spearman, 1961)
between CC and CRR.
(Table 2 fragment: DER results — Synthesis, Reasoning, and CRR improvement cf. IER — for General LLMs (GPT-4, GPT-3.5) and Code LLMs (Mistral, CodeLlama, DeepSeekCoder, MagicCoder, StarCoder, WizardCoder); first dataset row: MBPP, 408 subjects.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
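A minimal sketch of the three-step DER pipeline described above; the helper names (`synthesize`, `run_tests`, `reason_about_execution`) are placeholders for CodeMind components, not its actual API:

```python
def evaluate_der(model, spec, tests, synthesize, run_tests, reason_about_execution):
    """Hypothetical sketch of the DER pipeline: synthesize, filter by tests,
    then ask the same model to reason about execution of its own code."""
    code = synthesize(model, spec)                 # step 1: code synthesis
    if not run_tests(code, tests):                 # step 2: keep only passing programs
        return None                                # program excluded from DER
    test_input, expected = tests[0]                # chosen test for reasoning
    predicted = reason_about_execution(model, code, test_input)  # step 3: CoT IER
    return int(predicted == expected)              # CRS for this program
```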
5da2d0f4-46c4-4385-afdb-c480721e1cdb | # Subjects
## 6. Llm Evaluation On Der
(Table 2 fragment — MBPP, 408 subjects: Synthesis row values 86.52%, 43.36%, 56.86%, 72.30%; Reasoning row values 82.62%, 79.20%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b357250a-3d2c-4f51-8c12-211016ee8128 | # Subjects
## 6. Llm Evaluation On Der
(Table 2 fragment — MBPP: Reasoning row values 82.62%, 79.20%; CRR Improvement cf. IER values 1.74%, 7.88%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
32f6ca9a-ac55-4424-a7e5-fb1fa9afdfff | # Subjects
## 6. Llm Evaluation On Der
(Table 2 fragment — CRR Improvement cf. IER values 7.88%, 11.89%, 1.13%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
978c2f3a-cd81-4b11-bd08-0eef90d8d00a | # Subjects
## 6. Llm Evaluation On Der
(Table 2 fragment — CRR Improvement cf. IER values 1.13%, 5.15%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
da096085-d51f-4057-93f6-3712f12db1eb | # Subjects
## 6. Llm Evaluation On Der
(Table 2 fragment — CRR Improvement cf. IER values 9.54%, 13.20%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
14a178a5-3bb5-451c-9d38-078a50d979be | # Subjects
## 6. Llm Evaluation On Der
(Table 2 fragment — CRR Improvement cf. IER value 2.11%; HumanEval, 162 subjects: Synthesis row values 87.65%, 69.75%, 52.47%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0ca747c4-1aa6-4d80-993d-1485e21987e5 | # Subjects
## 6. Llm Evaluation On Der
(Table 2 fragment — HumanEval, 162 subjects: Synthesis row values 87.65%, 69.75%, 52.47%; Reasoning row values 80.28%, 74.63%; CRR Improvement cf. IER value 1.27%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
7dd1dda0-ec2b-4bc8-9eff-15f029d5f99e | # Subjects
## 6. Llm Evaluation On Der
(Table 2 fragment — CRR Improvement cf. IER values 10.70%, 1.4%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
74679ae8-dd31-4f04-bbfd-171d762359d3 | # Subjects
## 6. Llm Evaluation On Der
(Table 2 fragment — CRR Improvement cf. IER values 1.4%, 9.61%, 12.57%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
89cd3ef2-355e-46b0-9444-2691d2094993 | # Subjects
## 6. Llm Evaluation On Der
(Table 2 fragment — CRR Improvement cf. IER values 12.57%, 1.02%, 20.08%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
519a0e31-1e5f-4393-9e17-ba87b6b09752 | # Subjects
## 6. Llm Evaluation On Der
(Table 2 fragment — CRR Improvement cf. IER values 20.08%, 19.38%.)
since these datasets are not designed for code synthesis and lack proper program specifications. Also, we could not reproduce the code synthesis results | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
8e22e48b-4068-442b-a15b-b955669faff8 | # Subjects
## 6. Llm Evaluation On Der
since these datasets are not designed for code synthesis and lack proper program specifications. Also, we could not reproduce the code synthesis results of Llama 2 and excluded it from the subject LLMs. Similar to the IER experiments, we set the temperature to zero⁴ to mitigate non-determinism and ensure reproducibility of the results.
Table 2 shows the results of this experiment. GPT models still outperform open-source models on the DER task, with margins of 17.97% (GPT-4) and 13.13% (GPT-3.5) over the best open-source model. Compared to IER, the gap between GPT models and open-source models has been reduced.
We can also observe that the models achieve 6.84% higher CRR on average in the DER task (except CodeLlama on HumanEval) compared to IER. Before concluding that the models are more competent in execution reasoning when evaluated on programs they correctly synthesize, we compared the programs in this experiment with those in the IER experiment: if the synthesized programs were simpler, lower complexity might be the root cause of the higher CRR on the DER task. Figure 4 shows the CC distribution of the programs in MBPP and HumanEval, compared to that of the programs generated by the subject LLMs. We can observe that the synthesized code, if not more complex, is no less complex than the ground-truth programs in these datasets. Consequently, we confirm that models reason better about code they correctly synthesize. However, there is still a considerable gap between the code synthesis and reasoning abilities of the LLMs, specifically for open-source models. Given that code synthesis and reasoning are unified in DER, we first computed the Spearman's ROC between the ranks of the models based on the numbers in the *Synthesis* row and the *Reasoning* row for each dataset. The results show a strong positive correlation on MBPP (ρ = 0.85), but a negligible correlation on HumanE | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
7a9641a0-5957-48a4-adbf-9c408baa41e5 | # Subjects
## 6. Llm Evaluation On Der
more complex, is no less complex than the ground-truth programs in these datasets. Consequently, we confirm that models reason better about code they correctly synthesize. However, there is still a considerable gap between the code synthesis and reasoning abilities of the LLMs, specifically for open-source models. Given that code synthesis and reasoning are unified in DER, we first computed the Spearman's ROC between the ranks of the models based on the numbers in the *Synthesis* row and the *Reasoning* row for each dataset. The results show a strong positive correlation on MBPP (ρ = 0.85), but a negligible correlation on HumanEval (ρ = 0.17). These results communicate a strong message: the ranking of LLMs based on their code synthesis abilities (pass@k) can be significantly different from their ranking based on reasoning abilities on the same code. This necessitates a framework such as CodeMind that promotes other
4As a result of this design decision, our synthesis results might be different from existing leaderboards.
evaluation aspects of LLMs for code. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
4faed666-17b6-4e80-ab35-57d9dcc2d88a | # Subjects
## 7. Evaluation On Sr
Specification Reasoning (SR) offers a novel perspective in understanding the code synthesis process of LLMs, particularly in how they leverage input-output specifications. To evaluate the abilities of LLMs for SR, we prompt LLMs for code synthesis under the following three settings:
(1) Natural language specification with one ground-truth input-output. Under this setting, we randomly select one of the existing tests and add that to the specification. We validate the synthesized code using only this test.
(2) Natural language specification with no input-output. We remove the test added to the specification in the previous setting and re-prompt LLMs for code synthesis. We validate the synthesized code using only the test from the previous setting. Intuitively, if including test data can help LLMs in code synthesis, we observe a drop in LLMs' performance.
(3) Natural language specification with misleading input-output. We mutate the expected output of the test from the first setting and add it to the specification. We validate the synthesized code using the original test. The mutation changes the expected output to a value that does not align with the specification. For example, if the expected out-
(Table 3 fragment — MBPP: synthesis pass rates under the With Test, No Test, and Misleading Test settings for General LLMs (GPT-4, GPT-3.5) and Code LLMs; With Test row begins with 90.69%; further values 78.87%, 48.28%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
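A minimal sketch of the output-mutation step used for the misleading-test setting; the mutation rules shown are only the two examples given in the text, and the function name is a placeholder rather than CodeMind's API:

```python
def mutate_expected_output(expected):
    """Turn a ground-truth expected output into a misleading one.

    Implements only the two illustrative rules from the text:
    booleans are flipped, and positive integers are replaced by a
    negative value with a large difference.
    """
    if isinstance(expected, bool):
        return not expected
    if isinstance(expected, int) and expected > 0:
        return -(expected + 1000)   # "large difference" is an assumption
    raise NotImplementedError("other output types need their own mutation rule")

print(mutate_expected_output(True))   # False
print(mutate_expected_output(14))     # -1014
```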
acf4b875-224d-4fa0-8a22-44e771ac8795 | # Subjects
## 7. Evaluation On Sr
(Table 3 fragment — MBPP, No Test setting: values 78.87%, 48.28%, 53.68%, 67.65%, 69.61%, 41.67%, 52.21%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
45875fb8-218d-408e-a7eb-f53215867778 | # Subjects
## 7. Evaluation On Sr
(Table 3 fragment — MBPP: No Test values 41.67%, 52.21%; Misleading Test values 68.14%, 74.02%, 50.74%, 59.07%, 68.63%, 67.40%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e0269676-b92d-47e4-a9a0-08280d622b46 | # Subjects
## 7. Evaluation On Sr
(Table 3 fragment — MBPP Misleading Test values 68.63%, 67.40%, 40.20%, 58.09%; HumanEval: With Test 91.98%, No Test values 70.37%, 54.32%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
86dedcc4-9301-41c3-9abb-f9d923446ae4 | # Subjects
## 7. Evaluation On Sr
(Table 3 fragment — HumanEval, No Test setting: values 70.37%, 54.32%, 65.43%, 82.10%, 80.86%, 38.89%, 76.54%; Misleading Test 83.95%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c82fb3ae-c119-4816-bae2-f45a76139564 | # Subjects
## 7. Evaluation On Sr
(Table 3 fragment — HumanEval: No Test value 76.54%; Misleading Test values 83.95%, 65.43%, 53.70%, 61.73%, 79.63%, 74.69%.) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
80f4a01d-2f6f-4337-b44f-4b2613dd4ed0 | # Subjects
## 7. Evaluation On Sr
(Table 3 fragment — HumanEval Misleading Test values 74.69%, 27.04%, 66.05%.)
put is True, mutation changes it to False. Similarly, if the expected output is a positive integer, we mutate it to a negative one with a large difference. Intuitively, due to the divergence from the natural language specification, misleading input-output pairs should further drop LLMs' performance.
We followed a setup similar to the one in §6 and performed this experiment only on MBPP and HumanEval programs. We also pre-processed the prompts from HumanEval, which initially contained input-output samples.
The results in Table 3 show that the performance of LLMs in code synthesis is, on average, 7.36% higher with test data included in the specification. Introducing deceptive tests in the specification detrimentally affects the LLMs' performance in code synthesis compared to a legitimate test (10% performance drop on average). However, compared to the No Test cases, the performance drop across all the models and programs is only 2.65% on average. Regardless, these results showcase the ability of LLMs to reason about and utilize the test data in the specification. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
05f46dca-918e-4c4c-9946-dca3fe00a32f | # Subjects
## 8. In-Depth Analysis Of Results
We further analyzed the IER results, which evaluate the general ability of LLMs in code reasoning. In the first step, we wanted to see if LLMs understand different code constructs.
Without knowing the logic of each code construct, reasoning about code execution is impossible. To that end, we tagged each of the 5395 programs based on the code constructs used in their implementation with the following labels: **For**, **While**, **If**, **Try**, **Switch**, **Nested Loop**, **Nested If**, and **Basic**. A program tagged with the Basic label has no special code construct. Next, we clustered the programs per tag and computed the CRR of the LLMs for each cluster. Figure 5 shows the results of this analysis for the top five best-performing LLMs. We can observe that models handle conditional statements better than recursion, except for Try-Catch or Try-Except statements. Furthermore, when it comes to nested constructs, the CRR values notably drop. Impact of Loop Properties. Given that models struggle the most with recurring constructs, we focused on the programs with For, While, and Nested Loop tags in the next step. We hypothesize that this struggle is due either to the loop's length or to the difficulty of determining the loop's length. The former asks whether it is harder for the model to track the program's data flow as loops get longer.
The latter asks whether models can reason about how many times a code block should be repeated, regardless of how long that is. Figure 6 plots the distribution of correct versus incorrect cases and CRR values per loop length in Java programs. Sub-figure labels show the ROC coefficients between loop length and CRR. We observe a moderate to strong negative correlation between loop length and the CRR of the models, i.e., CRR decreases as the loop length increases. By manually investigating the incorrect IER cases, we also noticed that LLMs mostly failed to reason about loop conditions correctly. Without understanding the loop conditions and the number of iterations, it is impossible to reason about the execution correctly. Furthermore, we found cases where, although the model could reason about loop conditions and the number of iterations, it lost track of the data flow in the loop, thereby predicting the output incorrectly. In the code snippet below (p03059 from CodeNet (Java)), the loop condition depends on a constant variable (c) and a variable (time) whose value changes inside the loop.
Input: 3 5 7, Expected Output:10
| {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
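A minimal sketch of how such construct tagging could be done for the Python subjects; it covers only a subset of the labels above and uses Python's `ast` module, which is an implementation assumption rather than CodeMind's actual tagger:

```python
import ast

def tag_constructs(source):
    """Return a set of construct tags for a Python program (partial sketch)."""
    tree = ast.parse(source)
    tags = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.For):
            tags.add("For")
        elif isinstance(node, ast.While):
            tags.add("While")
        elif isinstance(node, ast.If):
            tags.add("If")
        elif isinstance(node, ast.Try):
            tags.add("Try")
        # Nested Loop: a loop that contains another loop somewhere in its body
        if isinstance(node, (ast.For, ast.While)) and any(
                isinstance(child, (ast.For, ast.While))
                for child in ast.walk(node) if child is not node):
            tags.add("Nested Loop")
    return tags or {"Basic"}

print(tag_constructs("for i in range(3):\n    for j in range(3):\n        print(i, j)"))
```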
6ece299b-b014-4836-8f33-ced446a46e26 | # Subjects
## 8. In-Depth Analysis Of Results
ER cases, we also noticed that LLMs mostly failed to reason about loop conditions correctly. Without understanding the loop conditions and the number of iterations, it is impossible to reason about the execution correctly. Furthermore, we found cases where, although the model could reason about loop conditions and the number of iterations, it lost track of the data flow in the loop, thereby predicting the output incorrectly. In the code snippet below (p03059 from CodeNet (Java)), the loop condition depends on a constant variable (c) and a variable (time) whose value changes inside the loop.
Input: 3 5 7, Expected Output:10
int a = sc.nextInt(), b = sc.nextInt(), c = sc.nextInt();
int amount = 0, time = a;
for(int i = 2; time <= c + 0.5; i++){
amount += b;
time = a * i;
}
println(amount);
All models failed in IER of this relatively simple program.
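For reference, a minimal Python re-implementation of the p03059 loop above with a per-iteration trace; the variable names mirror the Java snippet, and the trace format is illustrative only:

```python
def p03059(a, b, c):
    """Mirror of the Java loop above: add b each iteration while time <= c + 0.5."""
    amount, time, i = 0, a, 2
    while time <= c + 0.5:
        amount += b
        time = a * i
        print(f"i={i}: amount={amount}, time={time}")
        i += 1
    return amount

assert p03059(3, 5, 7) == 10  # matches the expected output for input "3 5 7"
```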
Looking at the CoT analysis, GPT-3.5 correctly understands the loop condition and can reason about all the variable values in the first and second iterations. However, it stays in the loop for one extra iteration. StarCoder cannot correctly reason about the loop condition and iterates the code block six times. MagicCoder's CoT only explains the code (a correct explanation)
without communicating how it reasons about variables.
GPT-3.5, Predicted Output:15
Analysis: Enters for loop and iterates from 2 onwards
until the value of time exceeds c + 0.5. In first
iteration, amount becomes 5 and time becomes 6. In
second iteration, amount becomes 10 and time becomes
9. In third iteration, amount becomes 15 and time
becomes 12. 12 is greater than 7+0.5, loop exits.
StarCoder, Predicted Output:105
Analysis: It enters a for loop and it iterates from 2
to 7 to check each integer i in this range. In each
iteration, the value of i is passed to variable time.
The condition is met when i is 2,3,4,5,6 and 7.
MagicCoder, Predicted Output:15
Enters for loop and iterates from 2 to infinity. Inside
the loop, it adds the value of b to variable amount.
Then it calculates new value of time | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
13b951de-4a10-48e0-8f7a-d0f9034129ec | # Subjects
## 8. In-Depth Analysis Of Results
.5, loop exits.
StarCoder, Predicted Output:105
Analysis: It enters a for loop and it iterates from 2
to 7 to check each integer i in this range. In each
iteration, the value of i is passed to variable time.
The condition is met when i is 2,3,4,5,6 and 7.
MagicCoder, Predicted Output:15
Enters for loop and iterates from 2 to infinity. Inside
the loop, it adds the value of b to variable amount.
Then it calculates new value of time by multiplying a
with current iteration i. The loop continues until time
is greater than c+0.5. After the loop finishes, print
the final value of amount as output, which is 15.
The loop condition is simple in the code snippet below
(p02994 from CodeNet (Java)). Consequently, most models
correctly reasoned about the number of iterations.
Input: 5 2, Expected Output: 18
int n = scanner.nextInt();
int l = scanner.nextInt();
List<Integer> diff = new ArrayList<Integer>();
for(int i = 1; i < n+1; i++) {
int value = l+i-1;
if(value < 0) {value = value * -1;}
diff.add(value);
}
int removeK = diff.indexOf(Collections.min(diff)) + 1;
println(n*l + n*(n+1)/2 - n - (l+ removeK -1));
Still, some LLMs struggled to reason about code execution
inside the loop correctly. CodeLlama can correctly reason
about the number of loop iterations but fails to calculate
the values in the diff array, resulting in a wrong output.
MagicCoder can also correctly reason about loop length, and
similar to the previous example, correctly explain what the
code does. However, it cannot predict the output correctly
(predicted output is 10), preventing us from judging if it
understands the code as well as it explains it.
CodeLlama, Predicted Output:24
Analysis: In the given inputs n is 5 and l is 2. The
sequence of numbers from 1 to 5 is [1,2, | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
eb9fe2df-bcea-4260-b0e1-5be1bc8f185f | # Subjects
## 8. In-Depth Analysis Of Results
number of loop iterations but fails to calculate
the values in the diff array, resulting in a wrong output.
MagicCoder can also correctly reason about loop length, and
similar to the previous example, correctly explain what the
code does. However, it cannot predict the output correctly
(predicted output is 10), preventing us from judging if it
understands the code as well as it explains it.
CodeLlama, Predicted Output:24
Analysis: In the given inputs n is 5 and l is 2. The
sequence of numbers from 1 to 5 is [1,2,3,4,5]. The
differences between l and each number in the sequence
are [1,0,1,2,3]. The smallest difference is 0 at index
1. So, the index 1 is removed from the sequence. The
new sequence becomes [1,3,4,5]. The sum of the original
sequence [1,2,3,4,5] and the new sequence [1,3,4,5] is
calculated as (5*2+5*(5+1)/2-5-(2+1-1))=15+15-5-2=24.
Impact of if-else Properties. Our manual investigations reveal that LLMs struggle with conditional constructs for two reasons: (1) confusion about branches and (2) wrong reasoning about conditional predicates. Concerning the former, we observed that LLMs sometimes struggle to understand the meaning of conditional statements. For example, regardless of which branch to take, they always go into both "if" and "else" branches to reason about execution. However, such cases rarely happen, and in most cases, LLMs struggle to analyze the conditional predicates correctly, take the wrong branch, and predict the output incorrectly.
Impact of Math and Logic Operators. Understanding the meaning of math and logic operators is essential to reasoning about how inputs evolve into the output through execution. The observations from the previous analyses also underscore their importance for reasoning about the program control flow. Our manual investigation of failed IER cases shows the prevalence of arithmetic and logic operators in programs, with the former outnumbering the latter. LLMs struggled more to reason about bitwise operators (e.g., >>>, | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d394525f-b73d-408a-a724-a210b715eaa9 | # Subjects
## 8. In-Depth Analysis Of Results
to analyze the conditional predicates correctly, take the wrong branch, and predict the output incorrectly.
Impact of Math and Logic Operators. Understanding the meaning of math and logic operators is essential to reasoning about how inputs evolve into the output through execution. The observations from the previous analyses also underscore their importance for reasoning about the program control flow. Our manual investigation of failed IER cases shows the prevalence of arithmetic and logic operators in programs, with the former outnumbering the latter. LLMs struggled more to reason about bitwise operators (e.g., >>>, ^, >>, <<, |, and &) compared to arithmetic (e.g., + and -) and comparison operators (e.g., > and <).
The code snippet below is from MBPP (task 311) and contains several arithmetic and logic operators. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b3e01bc1-b65d-4e13-83f2-7989d4fcf723 | # Subjects
Input: 10, **Expected Output:** 14
def set_left_most_unset_bit(n):
    if not (n & (n + 1)):
        return n
    pos, temp, count = 0, n, 0
    while temp:
        if not (temp & 1):
            pos = count
        count += 1
        temp >>= 1
    return (n | (1 << pos))
The code first checks whether all the bits of n are already set. If they are not, the code calculates the position of the leftmost unset bit. The code then returns the result of n | (1 << pos).
StarCoder, while failing to predict the output, correctly understands the meaning of most operators. For example, it can reason that the bitwise AND of n and n+1 is used to determine whether integer n is a power of 2. It can also reason that the loop condition checks if the least significant bit of n is set to 0. However, it seemingly cannot understand the meaning of the return statement (setting the least significant bit to 1), hence failing to predict the correct output.
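For reference, a quick sanity check of the MBPP task 311 snippet above (the reconstruction of its formatting is an assumption):

```python
# Sanity check for the snippet above (assumes the reconstruction is faithful).
assert set_left_most_unset_bit(10) == 14   # 0b1010 -> 0b1110
assert set_left_most_unset_bit(7) == 7     # all bits set, returned unchanged
print("ok")
```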
Impact of Output Types. We categorized programs based on the output types and checked (1) if LLMs were able to correctly predict the type of output (Type Match) and (2) if they could correctly reason about the values of output
(Value Match). We identified seven types in the subject programs, namely Int (e.g., 2), Decimal (e.g., 2.34), String
(e.g., "CodeMind"), Binary (e.g., True or False), List (e.g., [1,3,4,7]), and Tuple (Python-specific, e.g., (2,7)).
Figure 7 shows the details of these results. In summary, LLMs achieve a high Type Match (> 80%), although they struggled to predict the correct value (Value Match). Among different types, it is harder for the models to predict the values of outputs with Tuple/List and Decimal types.
Tuples and Lists consist of multiple items and every single one of them may change during the program execution. As a result, it is unsurprising that models struggle to track the flow of inputs through potentially different execution paths and reason about a complex output as a whole. Additionally, given that manipulation of such types involves API calls, e.g., min(), next(), charAt(), understanding changes requires additional efforts by LLMs. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
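A minimal sketch of how Type Match and Value Match could be computed for a predicted output; parsing the model's textual prediction with `ast.literal_eval` is an assumption about the harness, not CodeMind's implementation:

```python
import ast

def type_and_value_match(predicted_text, expected):
    """Return (type_match, value_match) for a model's textual output prediction."""
    try:
        predicted = ast.literal_eval(predicted_text)
    except (ValueError, SyntaxError):
        return False, False
    type_match = type(predicted) is type(expected)
    value_match = predicted == expected
    return type_match, value_match

print(type_and_value_match("(2, 7)", (2, 7)))  # (True, True)
print(type_and_value_match("15", 10))          # (True, False): right type, wrong value
```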
1a32dd69-08b0-43aa-9e34-5b3f725b870c | # Subjects
## 9. Concluding Remarks
In this paper, we discussed the necessity of code reasoning tasks as an alternative way to evaluate LLMs for programming tasks. We introduced CodeMind, a framework that supports several code reasoning tasks, and used CodeMind in a large-scale grounded theory study to evaluate state-of-the-art LLMs for code reasoning. Our results demonstrate that LLMs, in general, understand code constructs and are capable of reasoning about program specifications and following how inputs evolve into outputs through execution. However, their ability is limited as the code becomes more complex, i.e., has more complex control or data flow, contains non-primitive types, and invokes API calls. We also observe that specification reasoning, which is essential to generate code from a given program specification, does not mean models can also reason about code execution. We are considering two future directions based on this work. First, we plan to add more code reasoning tasks to CodeMind, e.g., variable reasoning and code optimization reasoning. Furthermore, we want to augment CodeMind with a benchmark that can challenge LLMs' code reasoning to a greater extent than the existing benchmarks.
"creation_datetime": "2024-03-04",
"file_name": "2402.09664v2.md",
"file_path": "paper_data/2402.09664v2.md",
"file_size": 54178,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
dd0d643e-fc41-4810-be13-ad87df98c41f | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
Niall Taylor*a, Upamanyu Ghose*a,b, Omid Rohanianc,e, Mohammadmahdi Nouriborjid,e, Andrey Kormilitzina, David A. Cliftonc,f, Alejo Nevado-Holgadoa
aDepartment of Psychiatry, University of Oxford, Oxford, United Kingdom
bCentre for Artificial Intelligence in Precision Medicines, University of Oxford and King Abdulaziz
University,
cDepartment of Engineering Science, University of Oxford, Oxford, United Kingdom
dSharif University of Technology, Tehran, Iran
eNLPie Research, Oxford, United Kingdom
fOxford-Suzhou Centre for Advanced Research, Suzhou, China | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
7a578a36-11db-416c-ad6b-d7e3fc866b1d | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## Abstract
The entry of large language models (LLMs) into research and commercial spaces has led to a trend of ever-larger models, with initial promises of generalisability, followed by a widespread desire to downsize and create specialised models without the need for complete fine-tuning, using Parameter Efficient Fine-tuning (PEFT) methods. We present an investigation into the suitability of different PEFT methods to clinical decision-making tasks, across a range of model sizes, including extremely small models with as few as
25 million parameters.
Our analysis shows that the performance of most PEFT approaches varies significantly from one task to another, with the exception of LoRA, which maintains relatively high performance across all model sizes and tasks, typically approaching or matching full fine-tuned performance. The effectiveness of PEFT methods in the clinical domain is evident, particularly for specialised models which can operate on low-cost, in-house computing infrastructure. The advantages of these models, in terms of speed and reduced training costs, dramatically outweigh any performance gain from large foundation LLMs. Furthermore, we highlight how domain-specific pre-training interacts with PEFT methods and model size, and discuss how these factors interplay to provide the best efficiency-performance trade-off. Full code available at: tbd.
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e58f77c0-fbe4-4b6a-89d8-2ac121baded1 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 1. Introduction
The Natural Language Processing (NLP) research space is now dominated by Large Language Models (LLMs), with a steady influx of different so-called foundation models from major AI companies every few months. The vast majority of recent LLMs are designed for *generative* tasks and chat-style interactions, reliant on a mixture of autoregressive LM pre-training with follow-up reinforcement learning from human feedback (RLHF) to create the likes of ChatGPT [1, 2]. However, the performance of these generative LLMs on classic NLP tasks such as sequence classification, relation extraction, named entity recognition, and embedding similarity search, especially in the clinical domain remains lacklustre [3, 4, 5, 6, 7, 8]. In many such cases, much smaller, BERT-style LLMs trained with masked language modelling (BERT, RoBERTa) continue to be competitive, or even surpass the performance of their larger counterparts [9, 8]. Moreover, achieving high performance with general domain LLMs on specialised clinical texts requires further adaptation through either extended pre-training on clinical data or fine-tuning for specific tasks.
1.1. Scales of LLM
Recent LLM research has predominantly focused on exceptionally large models from the more prolific AI companies, including ChatGPT from OpenAI [1] and Llama [2] from Meta. Although recent models from OpenAI are proprietary, it is widely recognised that the size of foundation models spans a broad range, from about 3 to
175 billion parameters, with GPT-4 potentially exceeding one trillion parameters.
In contrast, there exist smaller, earlier-generation LLMs like RoBERTa-base, which contains approximately 125 million parameters. The relative cost, simplicity, and reusability of these variously scaled models are crucial aspects to consider, and we aim to provide a holistic analysis of the interplay between different efficiency metrics and model size.
1.2. Fine-tuning and PEFT
Even smaller LLMs are relatively compute-intensive when compared to simpler machine learning alternatives, such as TF-IDF or Bag-of-Words paired with random forest classifiers. Moreover, adapting very large LLMs to new tasks can become unfeasible in low-resource settings where GPUs are scarce or non-existent. Common approaches to reduce model size include: knowledge distillation [10, 11], architecture compression [12], and pruning [13]. These approaches | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
80957d18-9db2-4303-8d9c-27e12c83dd54 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 1. Introduction
we aim to provide a holistic analysis of the interplay between different efficiency metrics and model size.
1.2. Fine-tuning and PEFT
Even smaller LLMs are relatively compute-intensive when compared to simpler machine learning alternatives, such as TF-IDF or Bag-of-Words paired with random forest classifiers. Moreover, adapting very large LLMs to new tasks can become unfeasible in low-resource settings where GPUs are scarce or non-existent. Common approaches to reduce model size include: knowledge distillation [10, 11], architecture compression [12], and pruning [13]. These approaches generally aim to maintain a high level of performance in compressed models by harnessing the knowledge from the much larger *teacher* LLMs. Whilst these approaches have had great success in producing smaller LLMs, adapting to new tasks still requires full fine-tuning of all model parameters to achieve optimal performance. This may necessitate a plethora of domain or task-specific LLMs, which cannot be used interchangeably due to catastrophic forgetting.[14]. A more prevalent approach today is to adapt the fine-tuning approach itself. Traditional approaches to adapting LLMs to downstream tasks involve introducing task specific neural network layers (often referred to as heads) to provide the extra flexibility required to complete a task, such as sequence classification. This training occurs in a supervised manner, involving updates to all model parameters, including task-specific ones (full fine-tuning). Full fine-tuning of smaller LLMs, such as BERT- base [15] with merely 108 million parameters has been feasible with modern GPUs, requiring only a single GPU with full precision. However, with the advent of models like Llama-2 [2] with 65 billion parameters, the practicality of fine-tuning these models on low-end hardware dwindles.
Several strategies exist to address this issue, one approach being the reduction of model size in terms of floating-point precision, bits, and the physical memory needed to store the weights through quantisation, which enables full fine-tuning of moderately sized models [16]. Pruning model parameters to reduce the *redundant* weights for given downstream tasks has also been effective in certain cases [13]. Another approach is to avoid full fine-tuning altogether, opting instead for zero-shot task adaptation through prompting (prompt engineering), or by reducing the number of trainable parameters necessary for fine-tuning the LLM for its new task | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
317b5fbf-e712-462f-89a6-2b8e9101aee3 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 1. Introduction
address this issue, one approach being the reduction of model size in terms of floating-point precision, bits, and the physical memory needed to store the weights through quantisation, which enables full fine-tuning of moderately sized models [16]. Pruning model parameters to reduce the *redundant* weights for given downstream tasks has also been effective in certain cases [13]. Another approach is to avoid full fine-tuning altogether, opting instead for zero-shot task adaptation through prompting (prompt engineering), or by reducing the number of trainable parameters necessary for fine-tuning the LLM for its new task, a process known as Parameter Efficient Fine-tuning (PEFT). Notable PEFT methods include: Prompt tuning [17], Prefix tuning [18], Low Rank Adaptation (LoRA) [19], and Inhibit Activations (IA3) [20]. These PEFT methods have become popular across various NLP tasks, and in this work, we will explore the utility of a select few for differently sized LLMs in the clinical domain. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d9311a5b-f942-403d-b45a-64e0123d69e3 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 1.3. Clinical Domain - Llm Adaptation
Unstructured clinical notes form a large portion of Electronic Health Records (EHRs)
and can offer a substantial amount of clinically salient information given appropriate representation, such as that given by an LLM. Foundation LLMs are typically developed and trained for a broad, general-purpose set of applications: trained on open, web-based text data and intended to be applied to *similar* open, web-based text data. When taking foundation LLMs and applying them to biomedical and clinical texts, performance often drops significantly [21, 22, 3, 9, 4, 5, 6, 7, 23]. Achieving state-of-the-art (SoTA) performance in the clinical domain still involves training generic LLMs on biomedical or clinical domain data, and PEFT methods can provide efficient ways to adapt open LLMs to the clinical domain. The clinical domain is also inherently a compute-limited environment, with sensitive data which typically cannot be sent to third-party APIs.
Thus, small, efficient LLMs that can perform specific tasks well and potentially run on edge devices are highly sought after [24, 23].
1.4. Related work
Recent efforts have extensively explored the use of PEFT methods for large-scale models, aiming to align them with new domains or tasks [16, 25, 19]. However, despite the use of quantisation and PEFT methods, high-end GPUs are still required, and taking these models to production in any real-time setting becomes non-trivial in terms of cost and time. One group has recently investigated PEFT for clinical tasks with Llama models, and our work follows a very similar path [26]. However, our emphasis is on the efficiency of these methods and how applicable they are to much smaller LLMs.
Our key contributions are:
- Comparison of recent PEFT methods to clinical decision tasks
- The suitability of PEFT methods for small LLMs (MobileBERT and TinyBERT architectures)
- The suitability of PEFT methods for knowledge-distilled LLMs (DistilBERT)
- Exploring the interaction of pre-training domain, sample size and PEFT methods
| Model architecture | # Params (mil) | GPU (VRAM, GB) | FLOPs |
|---|---|---|---| | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c6709684-c63f-4e26-b911-e67edb0e96e6 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 1.3. Clinical Domain - Llm Adaptation
clinical decision tasks
- The suitability of PEFT methods for small LLMs (MobileBERT and TinyBERT architectures)
- The suitability of PEFT methods for knowledge-distilled LLMs (DistilBERT)
- Exploring the interaction of pre-training domain, sample size and PEFT methods
| Model architecture | # Params (mil) | GPU (VRAM, GB) | FLOPs |
|---|---|---|---|
| Tiny-BERT | 13.87 | 0.052 | 3.66 × 10^7 |
| Mobile-BERT | 24.58 | | | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
947fbfc5-778f-4fc1-aa55-1e0c3e32800e | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 1.3. Clinical Domain - Llm Adaptation
| Mobile-BERT | 24.58 | 0.092 | 1.62 × 10^8 |
| Distil-BERT | 65.78 | 0.245 | 3.41 × 10^8 | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b71b7487-bc90-4d5a-8463-f3c04d7c2658 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 1.3. Clinical Domain - Llm Adaptation
| Distil-BERT | 65.78 | 0.245 | 3.41 × 10^8 |
| BERT | 108.31 | 0.403 | 6.81 × 10^8 | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
6266d3a6-fcde-4774-93ab-0ecc03e517ee | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 1.3. Clinical Domain - Llm Adaptation
| Llama2-7b | 6607.34 | 24.6 | 5.18 × 10^10 |
| Llama2-7b (bfloat16) | 6607.34 | 12.37 | 5.18 × 10^10 | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
44942bb3-b54b-41cd-8bf1-2a5c60b36121 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 1.3. Clinical Domain - Llm Adaptation
| Llama2-7b (bfloat16) | 6607.34 | 12.37 | 5.18 × 10^10 | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
2db8bfbe-0308-4ae9-8116-930a9ae5b2d3 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 2. Methods
2.1. Model architectures
We evaluate the performance of PEFT across various transformer-based LLM architectures of differing sizes, including TinyBERT [27], MobileBERT [12], DistilBERT [11], standard BERT [15], and Llama-2-7b [2]. A table of relevant architecture details is provided in Table 1.
2.2. Domain pre-training
In addition to exploring various transformer-based LLM architectures of different sizes, we examine three domain variants for each:
- **General:** Original, unadapted models.
- **Biomedical:** Models pre-trained or distilled with biomedical literature [28]
- **Clinical:** Models pre-trained with clinical EHR data [24]
This framework allows us to investigate the interplay between domain pre-training, model size, and the chosen PEFT methods.
2.3. Downstream fine-tuning
We opt to compare performance using a traditional fine-tuning setup, whereby each LLM is adapted with a task-specific head to perform the respective downstream task. For each task, we will utilise additional linear layers on top of the base LLM, with a task-specific loss that is used to update all model parameters (the base LLM and the additional task head). This approach remains the most suitable across all model architectures and aligns with previous research [29, 24].
2.4. PEFT
Parameter Efficient Fine-tuning (PEFT) methods are numerous, but they typically fall into two categories: introducing new trainable parameters or selectively freezing existing ones. For our experiments, we focus on the following methods. In addition to the trainable parameters specific to each method described below, the task-specific parameters in the classification head are also trained.
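The "freeze existing parameters" flavour can be illustrated with a short, hypothetical sketch (the checkpoint name is a placeholder): freeze the base encoder and count what remains trainable; adapter parameters introduced by a PEFT method and the task head would then be added on top of the frozen base.

```python
from transformers import AutoModel

def count_trainable(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

encoder = AutoModel.from_pretrained("bert-base-uncased")   # placeholder checkpoint
print("before freezing:", f"{count_trainable(encoder):,}", "trainable parameters")

for p in encoder.parameters():          # selectively freeze the existing weights
    p.requires_grad_(False)
print("after freezing: ", f"{count_trainable(encoder):,}")  # 0; adapters/head are added on top
```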
Low-Rank Adaptation of Large Language Models. Low-Rank Adaptation of LLMs, or LoRA [19], is a reparameterisation technique that works by injecting two trainable matrices ($A$ and $B$) that act as an approximation of a singular value decomposition (SVD) of the weight update $\Delta W$ for any weight matrix $W \in \mathbb{R}^{d \times k}$ in the LLM. The approximation works as $\Delta W = BA$, where $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$ and $r \ll \min(d, k)$
is the rank of the LoRA matrices, which is a tunable parameter. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b43d7153-327e-42c9-9429-91096d243944 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 2. Methods
Low-Rank Adaptation of Large Language Models. Low-Rank Adaptation of LLMs, or LoRA [19], is a reparameterisation technique that works by injecting two trainable matrices ($A$ and $B$) that act as an approximation of a singular value decomposition
(SVD) of the weight update $\Delta W$ for any weight matrix $W \in \mathbb{R}^{d \times k}$ in the LLM. The approximation works as $\Delta W = BA$, where $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$ and $r \ll \min(d, k)$
is the rank of the LoRA matrices, which is a tunable parameter. The new forward pass is updated to $h = (W + \Delta W)x = (W + BA)x = Wx + BAx$. While it is possible to introduce the LoRA matrices in any layer of the LLM, it is common practice to introduce them as weight update approximations for the key, query and value matrices.
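A numerical sketch of this update is given below (dimensions are arbitrary and this is not the paper's implementation): the pretrained weight $W$ stays frozen while only $A$ and $B$ receive gradients, and the zero initialisation of $B$ (common LoRA practice) makes $\Delta W$ start at zero.

```python
import torch

d, k, r = 768, 768, 8                       # illustrative dimensions
W = torch.randn(d, k)                       # frozen pretrained weight (no gradient)
A = torch.randn(r, k, requires_grad=True)   # trainable, random init
B = torch.zeros(d, r, requires_grad=True)   # trainable, zero init so ΔW starts at 0

x = torch.randn(k)
h = W @ x + B @ (A @ x)                     # equivalent to (W + BA) x

# After training, the update merges back into the base weight, adding no inference latency:
W_updated = W + B @ A
```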
The underlying assumption is that the weight updates in LLMs intrinsically have a lower rank than their dimensions, and thus can be well approximated by their SVD.
Additionally, once fully trained, the LoRA matrices can be integrated into the model as $W_{updated} = W_0 + BA$, thereby introducing no inference latency. With LoRA, the original weight matrices of the LLM remain frozen during the fine-tuning phase. IA3. Infused Adapter by Inhibiting and Amplifying Inner Activation (IA3) shares similarities with other adapter methods that introduce new parameters to scale activations using learned vectors [20]. While these learnable vectors can be applied to any set of activations, applying them to the keys and values in the attention mechanism and to the intermediate activation of the position-wise feed-forward networks was found to be both efficient and sufficient. For a transformer-based architecture, we have a key $K \in \mathbb{R}^{d_k}$ and value $V \in \mathbb{R}^{d_v}$, and the hidden dimension of the position-wise feed-forward network is $d_{ff}$. IA3 introduces learnable vectors $l_k \in \mathbb{R}^{d_k}$, $l_v \in \mathbb{R}^{d_v}$ and $l_{ff} \in \mathbb{R}^{d_{ff}}$ and modifies the attention and feed-forward calculation as follows:
$$\text{softmax}\left(\frac{Q(l_k \odot K)^{T}}{\sqrt{d_k}}\right)(l_v \odot V)\tag{1}$$
$$(l_{ff} \odot \gamma(W_1 x))W_2\tag{2}$$ | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0fe4f519-9b87-41e6-a7b9-2f21086f8e54 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 2. Methods
found to be both efficient and sufficient. For a transformer-based architecture, we have a key $K \in \mathbb{R}^{d_k}$ and value $V \in \mathbb{R}^{d_v}$, and the hidden dimension of the position-wise feed-forward network is $d_{ff}$. IA3 introduces learnable vectors $l_k \in \mathbb{R}^{d_k}$, $l_v \in \mathbb{R}^{d_v}$ and $l_{ff} \in \mathbb{R}^{d_{ff}}$ and modifies the attention and feed-forward calculation as follows:
$$\text{softmax}\left(\frac{Q(l_k \odot K)^{T}}{\sqrt{d_k}}\right)(l_v \odot V)\tag{1}$$
$$(l_{ff} \odot \gamma(W_1 x))W_2\tag{2}$$
where $\odot$ represents the element-wise product, and $\gamma$, $W_{1}$ and $W_{2}$ are the activation function and weight matrices of the feed-forward network. Similar to LoRA, the learnable vectors can be merged into the model as $l\odot W$ because any operation $l\odot Wx$ is equivalent to $(l\odot W)x$. Hence, this method does not introduce any inference latency either. Once again, with $IA^{3}$ the original weight matrices of the LLM remain frozen during fine-tuning.
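The rescaling in equations (1) and (2) can be sketched as follows. Shapes are arbitrary, the code uses the row-vector convention, and GELU is an assumption for $\gamma$; this is an illustration rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

seq, d_k, d_v, d_model, d_ff = 16, 64, 64, 64, 256

Q, K = torch.randn(seq, d_k), torch.randn(seq, d_k)
V = torch.randn(seq, d_v)
l_k = torch.ones(d_k, requires_grad=True)    # learned rescaling vectors
l_v = torch.ones(d_v, requires_grad=True)
l_ff = torch.ones(d_ff, requires_grad=True)

# eq. (1): rescale keys and values inside attention
attn = F.softmax(Q @ (l_k * K).T / d_k ** 0.5, dim=-1) @ (l_v * V)

# eq. (2): rescale the intermediate feed-forward activation
# (GELU assumed for the activation γ)
W1, W2 = torch.randn(d_model, d_ff), torch.randn(d_ff, d_model)
x = torch.randn(seq, d_model)
ffn = (l_ff * F.gelu(x @ W1)) @ W2
```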
Based on previous works and some preliminary experiments, we opt to focus on LoRA and $IA^{3}$ for our main experiments, as they generally demonstrate significantly better performance than alternative PEFT methods. Moreover, aligning prefix tuning and prompt learning with NER tasks is not straightforward, and we believed it offered limited value to adapt these methods for NER specifically (for a comparison of other PEFT methods, see previous work [26]).
2.5. Few-Shot training A prevalent challenge in real-world scenarios is the scarcity of training samples, especially in the clinical domain where certain diseases are inherently rare and generating gold-standard annotations demands clinical expertise and considerable time, both of which are limited resources. Therefore, the ability to train a viable model with few training samples is another angle of efficiency we explore. This is achieved by supplying only a limited number of training samples per class to a specific model. We carry out a series of experiments with an escalating number of samples per class to determine the effect of different model sizes and PEFT methods. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
1c3f4c69-e80b-48c2-aeae-570d8c744591 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 2.6. Datasets And Tasks
We utilise a number of commonly used clinical datasets for downstream evaluation, focusing on the following tasks: named entity recognition (NER), sequence classification and relation extraction (RE), in line with earlier clinical NLP research [30, 31]. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
ef493968-4b06-4756-9ea6-d2758897d0c5 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 2.6.1. Sequence Classification Tasks
MIMIC-III ICD-9 Triage. A common task with the MIMIC-III dataset [32] involves classifying patient records according to their medical diagnoses, which are coded using a system known as ICD-9. We utilise a simplified version of this task, where the top 20 most commonly occurring ICD-9 codes are categorised into seven triage groups: [Cardiology, Obstetrics, Respiratory, Neurology, Oncology, AcuteMedicine, Gastroenterology]. This grouping was developed in collaboration with clinicians. For further information, please refer to the original paper [29]. MIMIC-III - Clinical Outcomes. Two clinical outcome tasks associated with the MIMIC- III dataset [32] are Mortality Prediction (MP) and Length of Stay (LoS) prediction [33]. MP involves analysing discharge summaries from the ICU to assess a patient's mortality risk, constituting a binary classification problem. The LoS task also uses ICU discharge summaries to forecast the duration of a patient's hospital stay, with durations binned into four classes: under 3 days, 3 to 7 days, 1 week to 2 weeks, and more than 2 weeks. I2B2 2010 Relation Extraction. We used several curated datasets from the I2B2 series, including the 2010 medical relation extraction dataset [34] which aims to classify text based on the apparent medical relationship being described, with the following derived labels:
1. Treatment improves medical problem (TrIP)
2. Treatment worsens medical problem (TrWP)
3. Treatment causes medical problem (TrCP)
4. Treatment is administered for medical problem (TrAP)
5. Treatment is not administered because of medical problem (TrNAP)
6. Test reveals medical problem (TeRP)
7. Test conducted to investigate medical problem (TeCP)
| Dataset | Task Type | # labels | # train samples | # eval samples |
|------------------------|-------------|------------|-------------------|------------------|
| MIMIC-III MP | Seq. CLS | 2 | 33,954 | 9,822 | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
da8c833a-53a6-453d-bbc3-8ba4a776b077 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 2.6.1. Sequence Classification Tasks
investigate medical problem (TeCP)
| Dataset | Task Type | # labels | # train samples | # eval samples |
|------------------------|-------------|------------|-------------------|------------------|
| MIMIC-III MP | Seq. CLS | 2 | 33,954 | 9,822 |
| MIMIC-III LoS | Seq. CLS | 3 | 30,421 | 8,797 |
| MIMIC-III ICD-9 Triage | Seq. CLS | 7 | 9,559 | 3,172 |
| I2B2 2010 RE | Seq. CLS | 9 | 22,256 | 43,000 |
| I2B2 2010 | NER | 7 | 6726 | 27,626 |
| I2B2 2012 | NER | 13 | 6797 | 5,664 |
| I2B2 2014 | NER | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e0a6462c-6ab0-4079-ae8d-0ea73fc6c196 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 2.6.1. Sequence Classification Tasks
| 27,626 |
| I2B2 2012 | NER | 13 | 6797 | 5,664 |
| I2B2 2014 | NER | 42 | 45974 | 32,586 | | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
7de10bdf-d237-4760-ab0f-eeae18cf8c2e | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 8. Medical Problem Indicates Medical Problem (PIP) 9. No Relations
We follow the same pre-processing procedure outlined in previous works [24]. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
96f93d6f-9c56-4b84-a7a8-dbc44be7bf97 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 2.6.2. Named Entity Recognition
I2B2 - 2010 and 2012. These two NER tasks involve classifying text spans related to temporal relations [34, 35] within discharge summaries, as delineated by expert annotations. The classification is based on four primary categories: clinical concepts, clinical departments, evidentials, and occurrences. These categories are further broken down into more specific entities: medical problem (PR), medical treatment (TR), medical test (TE), clinical department (CD), evidential (EV), *occurrence (OC)*, and *none (NO)*. I2B2 - 2014. A deidentification task, whereby spans of text within clinical notes are classified using different protected health information (PHI) categories such as name, address, and postcode [36].
For further dataset and task details, see Appendix A. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
f1d92e77-73c5-450d-82fa-08f8794f496e | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
3.1. Model size vs PEFT
The number of trainable parameters is an important factor in determining training efficiency and has a strong correlation with the cost and time of training.
We detail the performance metrics for various PEFT methods applied to each model type across different clinical tasks. In Table 3, we present the results for sequence classification and NER across different PEFT methods and model sizes.
The results demonstrate that LoRA consistently outperforms other PEFT methods across all models and tasks, often approaching the performance of full fine-tuning.
We also present a comparison of the number of trainable parameters as a function of the different PEFT methods in Fig 1. There is a clear correlation between the number of trainable parameters and performance, and LoRA appears to provide larger models an advantage over fully fine-tuned smaller models.
| Model name | PEFT | ICD9-Triage | i2b2-2010-RE | MIMIC-LoS | Mimic-MP |
|------------|------|---------------|----------------|---------------|---------------|
| BioBERT | Full | 0.864 (0.002) | 0.935 (0.004) | 0.709 (0.002) | 0.819 (0.020) |
| BioBERT | IA3 | 0.703 (0.19) | 0.896 (0.004) | 0.634 (0.001) | 0.769 (0.005) |
| BioBERT | LORA | 0.827 (0.002) | 0.925 (0.001) | 0.697 (0.002) | 0.828 (0.002) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
038ab86b-58f7-4f43-bdfd-70daa35168ce | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioBERT | IA3 | 0.703 (0.19) | 0.896 (0.004) | 0.634 (0.001) | 0.769 (0.005) |
| BioBERT | LORA | 0.827 (0.002) | 0.925 (0.001) | 0.697 (0.002) | 0.828 (0.002) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
15542ba3-41c6-4834-947a-01a9c4cdf3b2 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioBERT | LORA | 0.827 (0.002) | 0.925 (0.001) | 0.697 (0.002) | 0.828 (0.002) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
845150d9-cbf5-4c94-9f80-7352e3be4f3b | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioBERT | LORA | 0.827 (0.002) | 0.925 (0.001) | 0.697 (0.002) | 0.828 (0.002) |
| BioDistilBERT | Full | 0.862 (0.010) | 0.927 (0.003) | 0.706 (0.003) | 0.825 (0.006) |
| BioDistilBERT | IA3 | 0.792 (0.008) | 0.906 (0.002) | 0.677 (0) | 0.797 (0.001) |
| BioDistilBERT | LORA | 0.855 (0.005) | 0.928 (0.003) | 0.702 (0.001) | 0.825 (0.001) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
72152844-a2b0-4a85-98e6-9bc76bb6664b | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioDistilBERT | IA3 | 0.792 (0.008) | 0.906 (0.002) | 0.677 (0) | 0.797 (0.001) |
| BioDistilBERT | LORA | 0.855 (0.005) | 0.928 (0.003) | 0.702 (0.001) | 0.825 (0.001) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d5be8268-31aa-4b24-b356-1892e875758a | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioDistilBERT | LORA | 0.855 (0.005) | 0.928 (0.003) | 0.702 (0.001) | 0.825 (0.001) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
3ff3e31a-b0c5-482f-9d3f-d8642b528586 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioMobileBERT | Full | 0.851 (0.004) | 0.932 (0.003) | 0.704 (0.004) | 0.819 (0.011) |
| BioMobileBERT | IA3 | 0.744 (0.012) | 0.897 (0.003) | 0.639 (0.001) | 0.774 (0.002) |
| BioMobileBERT | LORA | 0.808 (0.004) | 0.918 (0.002) | 0.671 (0.004) | 0.798 (0.002) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
fc3704cd-1b57-487b-af3d-7cb4fff116b3 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioMobileBERT | LORA | 0.808 (0.004) | 0.918 (0.002) | 0.671 (0.004) | 0.798 (0.002) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
6858eef9-42f1-4f1f-aa63-b4af20877af1 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioMobileBERT | LORA | 0.808 (0.004) | 0.918 (0.002) | 0.671 (0.004) | 0.798 (0.002) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
41acb35d-3efc-4edb-b4d4-be8ce10233e3 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| TinyBioBERT | Full | 0.727 (0.012) | 0.910 (0.005) | 0.684 (0.001) | 0.802 (0.001) |
| TinyBioBERT | IA3 | 0.390 (0.035) | 0.852 (0.002) | 0.588 (0.003) | 0.607 (0.003) |
| TinyBioBERT | LORA | 0.599 (0.008) | 0.895 (0.003) | 0.649 (0.006) | 0.764 (0.003) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
1fa559c0-11cf-4902-835c-806f5e8c004a | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| TinyBioBERT | LORA | 0.599 (0.008) | 0.895 (0.003) | 0.649 (0.006) | 0.764 (0.003) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b367b11a-45b8-4df3-9ca3-7ced996d6613 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| TinyBioBERT | LORA | 0.599 (0.008) | 0.895 (0.003) | 0.649 (0.006) | 0.764 (0.003) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
f16f00cc-ae37-4ec3-982d-80a785f3d58a | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| TinyBioBERT | LORA | 0.599 (0.008) | 0.895 (0.003) | 0.649 (0.006) | 0.764 (0.003) |
(a) Sequence classification task results
| Model name | PEFT | i2b2-2010-NER | i2b2-2012-NER | i2b2-2014-NER |
| BioBERT | Full | 0.819 (0.003) | 0.824 (0.001) | 0.967 (0.001) |
| BioBERT | IA3 | 0.473 (0.002) | 0.485 (0.006) | 0.850 (0.001) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0c8f8698-6c48-4528-8ae1-9bbe439d7f7c | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioBERT | Full | 0.819 (0.003) | 0.824 (0.001) | 0.967 (0.001) |
| BioBERT | IA3 | 0.473 (0.002) | 0.485 (0.006) | 0.850 (0.001) |
| BioBERT | LORA | 0.696 (0.003) | 0.753 (0.001) | 0.935 (0) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
328ca6f5-6955-4a35-a083-9078db18f652 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioBERT | LORA | 0.696 (0.003) | 0.753 (0.001) | 0.935 (0) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
7c0ccb32-ca44-4162-856c-cd1a84d349d9 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioDistilBERT | Full | 0.803 (0.003) | 0.795 (0.006) | 0.967 (0.001) |
| BioDistilBERT | IA3 | 0.498 (0.003) | 0.503 (0.001) | 0.883 (0) |
| BioDistilBERT | LORA | 0.718 (0.008) | 0.729 (0.006) | 0.940 (0.001) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
f8a2004a-3d43-472c-9703-e21d51826536 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioDistilBERT | LORA | 0.718 (0.008) | 0.729 (0.006) | 0.940 (0.001) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
830ef2f4-3dad-45b9-af2a-f49a5a1c9401 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioDistilBERT | LORA | 0.718 (0.008) | 0.729 (0.006) | 0.940 (0.001) |
| BioMobileBERT | Full | 0.796 (0.003) | 0.772 (0.006) | 0.966 (0) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
35677ab1-fe68-4e91-851a-34bbc008c110 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioMobileBERT | Full | 0.796 (0.003) | 0.772 (0.006) | 0.966 (0) |
| BioMobileBERT | IA3 | 0.515 (0.003) | 0.515 (0.003) | 0.908 (0) |
| BioMobileBERT | LORA | 0.638 (0.010) | 0.650 (0.004) | 0.941 (0.001) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d8110cff-02b7-45d2-a138-8d6381c0c9f3 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioMobileBERT | LORA | 0.638 (0.010) | 0.650 (0.004) | 0.941 (0.001) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0b0982cb-d94e-4df5-98e7-e6a68d646c28 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| TinyBioBERT | Full | 0.655 (0.004) | 0.705 (0.008) | 0.906 (0.003) |
| TinyBioBERT | IA3 | 0.328 (0.009) | 0.381 (0.003) | 0.715 (0.002) |
| TinyBioBERT | LORA | 0.438 (0.007) | 0.561 (0.009) | 0.8051 (0.013) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0348fe7e-8ccf-4ff6-863e-56123df9fbca | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| TinyBioBERT | IA3 | 0.328 (0.009) | 0.381 (0.003) | 0.715 (0.002) |
| TinyBioBERT | LORA | 0.438 (0.007) | 0.561 (0.009) | 0.8051 (0.013) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
8f73f8da-a640-4487-be8b-8d49df95ad8f | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| TinyBioBERT | LORA | 0.438 (0.007) | 0.561 (0.009) | 0.8051 (0.013) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d1216360-1b4f-4e8d-aeaa-11f28d2b3da7 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| TinyBioBERT | LORA | 0.438 (0.007) | 0.561 (0.009) | 0.8051 (0.013) |
(b) NER task results
tasks. Macro-averaged Receiver Operating Characteristic area under the curve (*ROCAUC*) is used for
MIMIC-LoS and MP tasks, while macro-averaged F1 scores are reported for the ICD-9 triage task. Bold
results indicate best PEFT performance, and values underlined are top performance across all fine-tuning methods.
3.2. Differential effect of LoRA rank according to model size Given the superior performance of LoRA over other PEFT methods, as evidenced in Figure 1, we aimed to methodically evaluate the impact of the LoRA rank hyperparameter across models of varying sizes. For this purpose, we employed the Optuna package [37] to conduct 20 trials of hyperparameter optimisation, holding the LoRA
rank constant at r ∈ {8, 16, 32, 64, 128}. The hyperparameters adjusted during tuning included LoRA dropout (d ∈ {0.1, 0.3, 0.5}), LoRA alpha (α ∈ {0.3, 0.5, 1.0}), and learning rate (lr ∈ [10^-5, 10^-3]). | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
ee2eab2b-2ce3-4f6a-aa11-10308fe8654b | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
to methodically evaluate the impact of the LoRA rank hyperparameter across models of varying sizes. For this purpose, we employed the Optuna package [37] to conduct 20 trials of hyperparameter optimisation, holding the LoRA
rank constant at r ∈ {8, 16, 32, 64, 128}. The hyperparameters adjusted during tuning included LoRA dropout (d ∈ {0.1, 0.3, 0.5}), LoRA alpha (α ∈ {0.3, 0.5, 1.0}), and learning rate (lr ∈ [10^-5, 10^-3]). The Llama model was excluded from this experiment due to its significantly larger size compared to BERT-based models, which would have imposed an excessive computational load for hyperparameter tuning. Following the hyperparameter search, we selected the optimal performing model for each r value to analyse its effect on models with differing parameter counts (Appendix B.5).
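A sketch of this per-rank search with Optuna is shown below. `train_and_eval` is a hypothetical stand-in for a full fine-tune-and-evaluate run (it is not part of the paper's code); only the Optuna wiring reflects the search described above.

```python
import optuna
import random

def train_and_eval(r, alpha, dropout, lr):
    # Hypothetical helper: in the real experiments this would fine-tune with LoRA
    # (rank r, scaling alpha, dropout, learning rate lr) and return validation AUROC.
    return random.random()   # dummy value keeps the sketch self-contained

def make_objective(rank):
    def objective(trial):
        dropout = trial.suggest_categorical("lora_dropout", [0.1, 0.3, 0.5])
        alpha = trial.suggest_categorical("lora_alpha", [0.3, 0.5, 1.0])
        lr = trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True)
        return train_and_eval(r=rank, alpha=alpha, dropout=dropout, lr=lr)
    return objective

for rank in [8, 16, 32, 64, 128]:          # LoRA rank held fixed within each study
    study = optuna.create_study(direction="maximize")
    study.optimize(make_objective(rank), n_trials=20)
    print(rank, study.best_value, study.best_params)
```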
Increasing the rank r in TinyBioBERT led to improved performance up to r = 64, after which a slight decline was observed at r = 128. A similar pattern was noted in BioDistilBERT, with the turning point at r = 32. The impact of rank on BioMobile- BERT was more variable, with a noticeable performance dip only at r = 64. This variability might be attributed to the distinct architecture of BioMobileBERT compared to other BERT-based models [12]. For BioBERT, the larger model in the BERT family, there was a modest improvement at r = 16, but performance tended to decrease at higher ranks. Conversely, for the RoBERTa model, performance enhancements were seen at ranks r = 32 and r = 128, yet no clear pattern between rank and performance emerged. Despite these fluctuations, the overall impact on model performance was relatively minor, with the greatest increase in AUROC being 0.0125 and the largest decrease being 0.0078. Hence, even for models with varying number of parameters, the default LoRA rank of 8 is a good trade-off between computational time taken to tune the models and performance. However, if the task at hand would practically benefit from a small increase in the performance metric, tuning the LoRA parameters may be beneficial.
3.3. General vs biomedical vs clinical domain pre-training Another aspect of efficiency with regards to LLM downstream adaptation is the domain in which the model was pre-trained. We have | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
fec4c9ca-06b5-4e37-ba67-6325c606ef11 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
, with the greatest increase in AUROC being 0.0125 and the largest decrease being 0.0078. Hence, even for models with varying number of parameters, the default LoRA rank of 8 is a good trade-off between computational time taken to tune the models and performance. However, if the task at hand would practically benefit from a small increase in the performance metric, tuning the LoRA parameters may be beneficial.
3.3. General vs biomedical vs clinical domain pre-training Another aspect of efficiency with regards to LLM downstream adaptation is the domain in which the model was pre-trained. We have conducted direct comparisons between models pre-trained in general, biomedical, and clinical domains across our various model architectures. For the sake of brevity, we focus solely on the i2b2-
2010 relation extraction task. The performance differences are greatest in the smaller models, with clinically pre-trained models generally performing best with a 1-4 percent improvement based on model size. For results across all tasks and their dependence on domain pre-training, please see Appendix C.6.
3.4. Budget The primary advantage of employing PEFT methods lies in their ability to reduce training times, lower GPU memory demands, minimise storage requirements, and enhance model reusability (all of which lower financial burden). In our study, we examined the trade-offs among these aspects for various model architectures, focusing on the most effective PEFT method identified in our experiments, namely, LoRA. For each defined budget, we used MIMIC mortality prediction as the benchmark task and macro-averaged AUROC as the metric of evaluation. In addition to training the LoRA
versions of each model, we also conducted full fine-tuning on each model to determine whether any budget level could achieve efficiency improvements comparable to those provided by PEFT approaches. The only exception was the Llama model, which was exclusively trained with LoRA due to computational constraints.
3.4.1. Time A key measure of efficiency is the training time and the speed at which different models converge within a constrained period, particularly a relatively short one. We set an initial time limit of 2,000 seconds (33 minutes) for all models. To evaluate the performance of the models that seemed to show an increasing trend in performance after the budget of 2,000 seconds (Figure 3), we raised the budget to 6,000 seconds
(100 minutes). An exception was made for the Llama model | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
015e92f2-b77c-460c-a526-b12d74f66bc6 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
only exception was the Llama model, which was exclusively trained with LoRA due to computational constraints.
3.4.1. Time A key measure of efficiency is the training time and the speed at which different models converge within a constrained period, particularly a relatively short one. We set an initial time limit of 2,000 seconds (33 minutes) for all models. To evaluate the performance of the models that seemed to show an increasing trend in performance after the budget of 2,000 seconds (Figure 3), we raised the budget to 6,000 seconds
(100 minutes). An exception was made for the Llama model, which remained undertrained even after 6,000 seconds, necessitating an extension of the training period to approximately 21,500 seconds (6 hours) to attain optimal performance.
We observed that the fully fine-tuned versions of the models, regardless of size, were quicker to converge than the LoRA versions, before eventually overfitting. The LoRA versions of the models eventually converged to the performance (or close to the performance) of the fully fine-tuned models. This observation suggests that fully fine-tuning a model on a small time budget could theoretically obtain an efficiency gain similar to the PEFT methods. However, from a practical standpoint, the LoRA versions of all models converged to similar performance within ~1 hour of training (Figure 3) while being more memory efficient. A more detailed analysis of the difference in efficiency between the methods is discussed in section 3.4.4. It is also important to acknowledge that larger models, such as Llama, deliver superior performance but incur significantly higher time and memory costs.
3.4.2. Few-shot Training Another focus for efficient training involves restricting the number of training samples, reflecting real-world situations with especially rare outcomes or cases where producing labels is challenging. We explored sample budgets that ranged from 8 to
4096 samples, increasing incrementally by a factor of 2.
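A minimal sketch of how such a limited-sample training set can be built is given below; the paper does not spell out its exact sampling procedure, so the per-class draw with a fixed seed is an assumption made for illustration.

```python
import random
from collections import defaultdict

def k_shot_subset(examples, k, seed=0):
    """examples: iterable of (text, label) pairs; returns at most k samples per label."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for text, label in examples:
        by_label[label].append((text, label))
    subset = []
    for items in by_label.values():
        rng.shuffle(items)
        subset.extend(items[:k])     # cap each class at k examples
    rng.shuffle(subset)
    return subset

# Toy usage: an 8-shot subset of a 3-class corpus
toy = [(f"note {i}", i % 3) for i in range(100)]
print(len(k_shot_subset(toy, k=8)))  # 24 = 8 samples x 3 classes
```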
As expected, we observed a direct relationship between sample budget and model performance, regardless of the model type and training method used. While we noticed the fully fine-tuned models generally performing better than their LoRA counterparts for smaller sample budgets, the difference became negligible for higher budget values
(Figure 3). The fully fine-tuned models on a budget of 4096 samples underperformed when compared against the LoRA versions trained on all samples. Hence, for the sample budget to be considered an effective method for efficiency | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
7b56bad4-dc44-497e-abd5-607f36c8827b | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
. We explored sample budgets that ranged from 8 to
4096 samples, increasing incrementally by a factor of 2.
As expected, we observed a direct relationship between sample budget and model performance, regardless of the model type and training method used. While we noticed the fully fine-tuned models generally performing better than their LoRA counterparts for smaller sample budgets, the difference became negligible for higher budget values
(Figure 3). The fully fine-tuned models on a budget of 4096 samples underperformed when compared against the LoRA versions trained on all samples. Hence, for the sample budget to be considered an effective method for efficiency gain, we would need more than
4096 samples.
3.4.3. Holistic efficiency In an attempt to establish a unified metric of efficiency, we took the average of the following normalised metrics: time taken to reach peak performance T, number of trainable parameters P and total model parameters S:
$$\frac{T+P+S}{3}\tag{3}$$
For ease of interpretability, we scaled the final efficiency value to range between 0
and 1, where 0 represents the least efficient model and 1 represents the most efficient.
We show the relationship between efficiency and performance in Figure 4.
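A sketch of equation (3) follows. The text states only that the three metrics are normalised and the final score rescaled to [0, 1], so the min-max normalisation and the inversion (so that 1 marks the most efficient model) are assumptions made for illustration.

```python
def minmax(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def efficiency_scores(times, trainable_params, total_params):
    # Normalise each metric across models, average them as in eq. (3),
    # then invert and rescale so that 1 = most efficient, 0 = least efficient.
    T, P, S = minmax(times), minmax(trainable_params), minmax(total_params)
    raw = [(t + p + s) / 3 for t, p, s in zip(T, P, S)]
    return minmax([1 - r for r in raw])

# Toy example with three hypothetical models (time in s, parameters in millions)
print(efficiency_scores([500, 2000, 21500], [0.1, 0.3, 4.2], [14.5, 108.3, 6607.3]))
```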
The holistic efficiency shows a general negative correlation between efficiency and performance, however the gap in performance is relatively minor compared to the difference in efficiency between models.
3.4.4. Memory and cost The GPU and storage requirements for training differ massively between model types, and fine-tuning method. Whilst performance has generally increased with model size, there is a trade-off between performance and compute required, as well as speed of training and inference. We provided the model size and memory requirements in Table 1 and we extend this analysis by calculating the estimated costs of training and storage of the differently sized models in Table 4. As observed in previous results, larger models like Llama-2-7b achieve higher performance on most tasks but at 20 and 94 times the monetary value of models like BioBERT and TinyBioBERT, respectively. If the objective is to fine-tune a model for multiple tasks, BioBERT and similar models can be a good trade-off between monetary cost and performance.
| Model name | PEFT Method | Train time (hr) | Inference time (hr) | Total cost (GB | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d03891e8-8e5a-4d4d-bf4e-5b9060deb56a | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
storage of the differently sized models in Table 4. As observed in previous results, larger models like Llama-2-7b achieve higher performance on most tasks but at 20 and 94 times the monetary value of models like BioBERT and TinyBioBERT, respectively. If the objective is to fine-tune a model for multiple tasks, BioBERT and similar models can be a good trade-off between monetary cost and performance.
| Model name | PEFT Method | Train time (hr) | Inference time (hr) | Total cost (GBP) |
|---------------|---------------|-------------------|-----------------------|--------------------|
| Llama-2-7b | LORA | 51.07 | 4.06 | 112.22 |
| BioBERT | Full | 2.51 | 0.22 | 5.56 |
| BioBERT | LORA | 2.16 | 0.22 | 4.84 |
| BioMobileBERT | Full | 1.57 | 0.14 | 3.48 |
| BioMobileBERT | LORA | 1.35 | | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
f1cd4db9-c77d-4d33-b313-e8a07afe15b0 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 3. Results
| BioMobileBERT | Full | 1.57 | 0.14 | 3.48 |
| BioMobileBERT | LORA | 1.35 | 0.14 | 3.03 |
| BioDistilBERT | Full | 1.35 | 0.12 | 2.99 |
| BioDistilBERT | LORA | 1.21 | 0.13 | 2.73 |
| TinyBioBERT | Full | 0.53 | 0.06 | 1.2 |
| TinyBioBERT | LORA | 0.46 | 0.06 | 1.06 | | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
370273e9-a44b-4e9b-bb9e-c952fe45a9c3 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 4. Discussion
4.1. PEFT with small LLMs We have explored the use of different-sized LLMs for various clinical downstream tasks, assessing both traditional fine-tuning and different PEFT methods. From the methods we studied (IA3 and *LoRA*), we found LoRA to be superior across all tasks, leading us to select it as the preferred PEFT method for all subsequent analysis. Whilst full fine-tuning generally outperforms LoRA, in certain models and tasks the performance is at least matched or even surpassed and that LoRA works well for all model sizes. This finding highlights the potential in utilising PEFT methods with very small LLMs. The relative performance gap between full fine-tuning and LoRA appears to increase with the smaller models, which was only partially mitigated by increasing the LoRA rank.
4.2. Comparison of LLM size The performance of various model sizes was evaluated on a specific task within a fixed time frame, including the 7 billion parameter Llama-2 model. This comparison revealed significant differences in the learning capabilities of models of varying sizes. Numerous smaller LLMs completed 5 epochs of training well before the Llama-2 model achieved comparable performance levels. Nevertheless, when given sufficient time, Llama-2 did reach the highest evaluation performance by a few percentage points in the target task. The Llama-2 model is approximately 500 times the size of the TinyBERT models, indicating that the computational demand, even with the implementation of LoRA for Llama-2, is significantly higher. The duration required for the Llama-2 model to achieve comparable performance on downstream tasks, using the same GPU, was considerable. It took roughly ten times longer to match the performance of smaller LLMs and exceeded six hours of training to attain its peak performance.
4.3. Holistic efficiency According to our composite efficiency metric, the medium sized LLMs are substantially more computationally efficient compared to the largest model for the given task, whilst only exhibiting a minor drop in performance. It is difficult to derive a true representation of holistic efficiency as this would likely require taking cost and time of pre-training, and other facets not known, but we believe this provides a reasonable overview of the interplay between model size and fine-tuning methods. Further profiling would be needed to quantify exact runtime improvements. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
86a56fa0-810e-4511-a1e7-94efac3bc9dd | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## 4.4. Domain Pre-Training
The pre-training of LLMs proved quite important in the performance on the various clinical domain tasks, with biomedical and clinical LLMs generally outperforming their general counterparts. We do note that the *clinical* LLMs, such as ClinicalBioBERT have been trained on MIMIC-III notes themselves and this does give them an unfair advantage. However, the potential for data leakage in the Llama-2 model is difficult to ascertain. In line with previous works [22], it could be argued that developing specialised clinical LLMs through pre-training on relevant clinical language remains optimal for subsequent downstream task adaptation.
4.5. Limitations and future work The selection of PEFT methods investigated in this study reflected the state of the field at the time; however, we acknowledge that this is an evolving research area, and we cannot be certain that other methods would not have outperformed those presented here. Indeed, since conducting these experiments, the PEFT library[38] has introduced several new methods worth exploring.
When comparing various model sizes, we chose to limit training to a single GPU.
This approach might disadvantage larger models, particularly the Llama-2 model, which was forced to employ a reduction in bit-precision to allow any training. Furthermore, this constraint hindered our ability to thoroughly investigate Llama-2 across all tasks and conduct any hyperparameter optimisation. Future work could seek to explore this further, although the resources required are extensive and arguably yield diminishing returns.
4.6. Conclusion Overall, we believe this work highlights the power of PEFT methods for small LLMs and demonstrates how domain pre-training can be leveraged to create efficient clinical models. While the capabilities of much larger LLMs are evident, they come with significantly higher time and financial demands. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c7a68343-7ef6-49e9-b8d2-5d93c1ce7722 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## Funding
NT was supported by the EPSRC Center for Doctoral Training in Health Data Science (EP/S02428X/1). UG was supported by Alzheimer's Research UK, and the Centre for Artificial Intelligence in Precision Medicines (University of Oxford and King Abdulaziz University). DAC was supported by the Pandemic Sciences Institute at the University of Oxford; the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre (BRC); an NIHR Research Professorship; a Royal Academy of Engineering Research Chair; and the InnoHK Hong Kong Centre for Cerebro-cardiovascular Engineering (COCHE). | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
6c32c04b-9f66-4e44-8636-cb711a597d4f | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## Appendix A. Dataset Details
Appendix A.1. MIMIC-III
MIMIC-III is a large, freely available database comprising deidentified health data associated with over 40,000 patients who stayed in critical care units of the Beth Israel Deaconess Medical Center between 2001 and 2012 [32]. The data includes demographics, vital signs, laboratory tests, medications, and more, collected from a variety of hospital systems. It encompasses over 2 million notes, including discharge summaries, radiology reports, and more.
Appendix A.2. i2b2
Originally released on the i2b2 website, but is now hosted via the Department of BioMedical Informatics (DBMI) data portal. The dataset is now referred to as the National NLP Clinical Challenges research datasets (n2c2), and is based on fully deidentified notes from the Research Patient Data Registry at Partners Healthcare System in Boston. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b7ee90aa-3d2b-48f0-be7e-b104097566d8 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## Appendix B. Lora Rank Analysis
We provide a comparison of different LoRA ranks on task performance across each model in Figure B.5. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
095f170e-3bad-4140-8d54-8ec82a287a2a | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## Appendix C. Hyperparameters And Hardware For Downstream Tasks
For the core experiments we utilised the HuggingFace[39] and Parameter Efficient Finetuning (PEFT)[38] libraries. For consistency and equal footing between model types, all experiments utilised a single NVIDIA RTX 3090 graphics card with 24GB of VRAM. Due to this, however, the experiments utilising Llama-2-7b, even with LoRA, required a reduction in the precision of the model weights from fp32 to bfloat16.
| PEFT | Hyperparameter | Value |
|------|----------------|-------|
| LoRA | r | 8 |
| | alpha | 8 |
| | dropout | 0.1 |
| | learning rate | 3e-4 |
| | target modules | [key, value] |
| | layers | all | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
1f0dacd0-125d-44e5-8690-e2fd7491b061 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## Appendix C. Hyperparameters And Hardware For Downstream Tasks
| | learning rate | 3e-4 |
| | target modules | [key, value] |
| | layers | all |
| IA3 | dropout | 0.1 |
| | learning rate | 3e-4 |
| | target modules | [key, value, feed-forward] |
| | layers | all |
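Expressed with the HuggingFace PEFT library (which the experiments used), the LoRA settings above correspond roughly to the sketch below. The checkpoint, label count and target-module strings are placeholders for a BERT-style model rather than the authors' exact configuration.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=7)        # placeholder checkpoint and label count

lora_cfg = LoraConfig(task_type=TaskType.SEQ_CLS,   # keeps the task head trainable
                      r=8, lora_alpha=8, lora_dropout=0.1,
                      target_modules=["key", "value"])  # BERT-style module names
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()            # reports the small trainable fraction
```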
| Model name | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
601a6729-62d9-4e47-843a-eea71e618eb3 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## Appendix C. Hyperparameters And Hardware For Downstream Tasks
| | learning rate | 3e-4 |
| | target modules | [key, value, feed-forward] |
| | layers | all |
| Model name | PEFT | ICD9-Triage | i2b2-2010-RE | MIMIC-LoS | Mimic-MP |
|------------------------------------------|--------|---------------|----------------|---------------|------------|
| BERTbase | Full | 0.991 | 0.975 | 0.702 | 0.799 |
| BERTbase | LORA | 0.983 | 0.980 | 0.679 | 0.811 |
| BioBERT | Full | 0.991 | 0.982 | 0.711 | 0.812 |
| BioBERT | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
027490b4-5f4f-4b0f-b4c0-c7be88d61479 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## Appendix C. Hyperparameters And Hardware For Downstream Tasks
0.811 |
| BioBERT | Full | 0.991 | 0.982 | 0.711 | 0.812 |
| BioBERT | LORA | 0.991 | 0.985 | 0.697 | 0.828 |
| BioClinicalBERT | Full | 0.993 | 0.978 | 0.697 | 0.793 |
| BioClinicalBERT | LORA | 0.990 | 0.981 | 0.701 | 0.822 |
| BioDistilBERT | Full | 0.992 | 0.979 | 0.697 | 0.803 |
| BioDistilBERT | LORA | 0.993 | 0.988 | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
43b7ba50-5d84-41be-84b0-45e6a8a8f110 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## Appendix C. Hyperparameters And Hardware For Downstream Tasks
| Full | 0.992 | 0.979 | 0.697 | 0.803 |
| BioDistilBERT | LORA | 0.993 | 0.988 | 0.704 | 0.822 |
| BioMobileBERT | Full | 0.992 | 0.980 | 0.697 | 0.809 |
| BioMobileBERT | LORA | 0.987 | 0.982 | 0.670 | 0.792 |
| ClinicalDistilBERT | Full | 0.994 | 0.980 | 0.697 | 0.822 |
| ClinicalDistilBERT | LORA | 0.995 | 0.989 | 0.710 | 0.836 |
| ClinicalMobileBERT | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d577abde-7e56-4cda-862c-a4627e89e1f8 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## Appendix C. Hyperparameters And Hardware For Downstream Tasks
.697 | 0.822 |
| ClinicalDistilBERT | LORA | 0.995 | 0.989 | 0.710 | 0.836 |
| ClinicalMobileBERT | Full | 0.995 | 0.983 | 0.720 | 0.826 |
| ClinicalMobileBERT | LORA | 0.994 | 0.982 | 0.690 | 0.824 |
| (a) Sequence classification task results | | | | | |
| Model name | PEFT | i2b2-2010-NER | i2b2-2012-NER | i2b2-2014-NER | |
| BERTbase | Full | 0.806 | 0.792 | 0.974 | | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0caddd6d-5be8-4c1a-b6df-0c4ffdb687c3 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## Appendix C. Hyperparameters And Hardware For Downstream Tasks
i2b2-2010-NER | i2b2-2012-NER | i2b2-2014-NER | |
| BERTbase | Full | 0.806 | 0.792 | 0.974 | |
| BERTbase | LORA | 0.673 | 0.697 | 0.951 | |
| BioBERT | Full | 0.822 | 0.823 | 0.969 | |
| BioBERT | LORA | 0.713 | 0.757 | 0.935 | |
| BioClinicalBERT | Full | 0.846 | 0.820 | 0.960 | |
| BioClinicalBERT | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
4dfe7c98-e2fe-4e1d-a166-ae9c0b1f6d13 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## Appendix C. Hyperparameters And Hardware For Downstream Tasks
| 0.935 | |
| BioClinicalBERT | Full | 0.846 | 0.820 | 0.960 | |
| BioClinicalBERT | LORA | 0.704 | 0.746 | 0.920 | |
| BioDistilBERT | Full | 0.809 | 0.794 | 0.965 | |
| BioDistilBERT | LORA | 0.704 | 0.726 | 0.939 | |
| BioMobileBERT | Full | 0.794 | 0.774 | 0.966 | |
| BioMobileBERT | LORA | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
f09ab613-6e11-492f-9931-d98c6140e3b8 | # Efficiency At Scale: Investigating The Performance Of Diminutive Language Models In Clinical Tasks
## Appendix C. Hyperparameters And Hardware For Downstream Tasks
|
| BioMobileBERT | Full | 0.794 | 0.774 | 0.966 | |
| BioMobileBERT | LORA | 0.649 | 0.654 | 0.938 | |
| ClinicalDistilBERT | Full | 0.816 | 0.817 | 0.961 | |
| ClinicalDistilBERT | LORA | 0.671 | 0.740 | 0.920 | |
| (b) NER task results | | | | | | | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10597v1.md",
"file_path": "paper_data/2402.10597v1.md",
"file_size": 62573,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c4c814ab-e7c7-4093-9b78-883ae79a24c1 | # Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish
Szymon RuciΒ΄nski Apostroph Group - Artificial Intelligence Laboratory ZΓΌrich, Switzerland
{Szymon RuciΒ΄nski}@apostrophgroup.ch | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09759v1.md",
"file_path": "paper_data/2402.09759v1.md",
"file_size": 31612,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |