moiduy04 committed
Commit 50d2534
1 Parent(s): 51eba52

Update README.md

Files changed (1): README.md +7 -4
README.md CHANGED
@@ -10,7 +10,13 @@ metrics:
 Pruned from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
 using the Random Pruner from [`LLM-Pruner: On the Structural Pruning of Large Language Models`](https://arxiv.org/abs/2305.11627)
 
-To replicate,
+Done to test the viability of LLM-Pruner for task-agnostic, low-resource generative AI for commercial and personal use,
+compared to out-of-the-box models like [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
+
+Our presentation slides can be found [here](https://drive.google.com/file/d/1SUGGgOAq-mizqwM_KveBQ2pWdyglPVdM/view?usp=sharing).
+
+
+# To replicate
 
 1. First, clone the [official implementation](https://github.com/horseee/LLM-Pruner) and run:
 ```
@@ -32,9 +38,6 @@ to get the pruned model.
 2. Then, to post-train, follow [section 2 of the official implementation](https://github.com/horseee/LLM-Pruner?tab=readme-ov-file#2-post-training-recover-stage).
 
 
-
-
-
 # Benchmark Results
 
 **Benchmark Evaluation**:
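
Note: the pruning command itself falls in the unchanged region between the two hunks (old lines 17-31) and is not shown in this diff. For orientation only, a step-1 invocation with the Random Pruner would look roughly like the sketch below, modelled on the Llama-3 example in the upstream LLM-Pruner README; the pruning ratio, layer ranges, and checkpoint name here are illustrative assumptions, not confirmed values from this model card:

```
# Clone the official implementation.
git clone https://github.com/horseee/LLM-Pruner
cd LLM-Pruner

# Block-wise structural pruning with the random importance criterion.
# The ratio and layer ranges below are placeholders, not the exact
# settings used to produce this checkpoint.
python llama3.py \
    --base_model meta-llama/Meta-Llama-3-8B-Instruct \
    --pruning_ratio 0.25 \
    --pruner_type random \
    --block_wise \
    --block_mlp_layer_start 4 --block_mlp_layer_end 30 \
    --block_attention_layer_start 4 --block_attention_layer_end 30 \
    --device cuda --eval_device cuda \
    --save_ckpt_log_name llama3_prune \
    --save_model
```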
 
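Step 2 is likewise only linked. The recovery stage in the linked section is a LoRA fine-tune of the pruned checkpoint on an Alpaca-style dataset; a sketch of that invocation, with placeholder paths, might look like:

```
# LoRA recovery fine-tuning on the pruned checkpoint (paths are placeholders).
python post_training.py \
    --prune_model prune_log/llama3_prune/pytorch_model.bin \
    --data_path yahma/alpaca-cleaned \
    --lora_r 8 \
    --num_epochs 2 \
    --learning_rate 1e-4 \
    --batch_size 64 \
    --output_dir tune_log/llama3_tuned
```

Exact arguments may differ between repo versions; defer to section 2 of the official README for the authoritative recipe.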