asiansoul committed on
Commit
56ea39b
1 Parent(s): 660fa5c

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -40,11 +40,11 @@ I'll find the answer for you.
 
 Soon: real PoSE to extend Llama's context length to 64k using my merge method "reborn": [reborn](https://medium.com/@puffanddmx82/reborn-elevating-model-adaptation-with-merging-for-superior-nlp-performance-f604e8e307b2)
 
-I have found that most merges so far do not actually have 64k in their configs. I will improve it in the next merge.
+I have found that most merged models published so far do not actually have 64k in their configs. I will improve this in the next merge.
 
-256k is not possible. My computer is running out of memory. If you support me, i will try it on a computer with maximum specifications.
+256k is not possible: my computer runs out of memory. If you support me, I will try it on a machine with maximum specifications,
 
-If you support me, i would like to conduct great tests by building a network with high-capacity traffic and high-speed 10G speeds for you.
+and I would also like to run thorough tests by building a network with high-capacity traffic and 10G speeds for you.
 
 ### Merge Method
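The "64k in their configs" claim refers to the advertised context window in a model's Hugging Face `config.json`, i.e. the standard `max_position_embeddings` field (64k tokens = 65536). A minimal sketch of how one could verify a merged checkpoint actually declares 64k; the JSON snippet below is a hypothetical example, not taken from any real merge:

```python
import json

# Hypothetical config.json fragment for a merged Llama model.
# The field names (max_position_embeddings, rope_scaling) are the
# standard Hugging Face Llama config keys; the values are assumed.
config_json = """
{
  "model_type": "llama",
  "max_position_embeddings": 65536,
  "rope_scaling": {"rope_type": "linear", "factor": 8.0}
}
"""

def context_length(raw: str) -> int:
    """Return the advertised context window from a config.json blob."""
    return json.loads(raw)["max_position_embeddings"]

print(context_length(config_json))  # 65536, i.e. a true 64k context
```

A merge that only claims 64k in its model card but still ships `max_position_embeddings: 8192` would fail this check, which is the gap the commit message describes.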