watashihakobashi committed on
Commit 24450fb · verified · 1 Parent(s): e7b2f2c

Update README_en.md

Files changed (1)
1. README_en.md +3 -4
README_en.md CHANGED
@@ -24,7 +24,7 @@ The model was pre-trained using the following corpora, with a total of 65 billio
 * Japanese data from [OSCAR](https://huggingface.co/datasets/oscar)
 * Japanese and English dump data from Wikipedia ([Japanese Main Page](https://ja.wikipedia.org/wiki/%E3%83%A1%E3%82%A4%E3%83%B3%E3%83%9A%E3%83%BC%E3%82%B8), [English Main Page](https://en.wikipedia.org/wiki/Main_Page))
 * Proprietary company data
-*
+
 ## How to Use
 ```python
 import torch
@@ -81,6 +81,5 @@ It takes time to get back on top!
 ```
 
 ### How to Run on AWS inf2.xlarge
-
-As of January 24, 2024, [AWS inf2 instances](https://aws.amazon.com/ec2/instance-types/inf2/) offer a cost-effective solution for operating models with over 10 billion parameters compared to GPU instances.
-The model and source code can be found [here](https://huggingface.co/watashiha/Watashiha-Llama-2-13B-Ogiri-sft-neuron).
+As of January 24, 2024, [AWS inf2 instances](https://aws.amazon.com/ec2/instance-types/inf2/) offer a cost-effective solution for operating models with over 10 billion parameters compared to GPU instances.
+The model and source code can be found [here](https://huggingface.co/watashiha/Watashiha-Llama-2-13B-Ogiri-sft-neuron).
 