romrawinjp committed
Commit 13857fe
1 Parent(s): 83df6ab

Update README.md

Files changed (1)
  1. README.md +15 -2
README.md CHANGED
@@ -8,9 +8,16 @@ language:
  pipeline_tag: text-generation
  tags:
  - code_generation
+ - sql
  ---

- Example inference using huggingface transformers.
+ # 🤖 [Super AI Engineer Development Program Season 4](https://superai.aiat.or.th/) - Pangpuriye Table-based Question Answering Model
+
+ ![logo](https://huggingface.co/datasets/AIAT/Pangpuriye-generated_by_typhoon/resolve/main/logo/logo.png)
+
+ This model was fine-tuned from the original [OpenThaiGPT-1.0.1-7b](https://huggingface.co/openthaigpt/openthaigpt-1.0.0-7b-chat). The model is released under the Apache 2.0 license.
+
+ ## Example inference using Hugging Face Transformers
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaTokenizer
  import pandas as pd
@@ -38,4 +45,10 @@ prompt = f"""
  tokens = tokenizer(prompt, return_tensors="pt")
  output = model.generate(tokens["input_ids"], max_new_tokens=20, eos_token_id=tokenizer.eos_token_id)
  print(get_prediction(tokenizer.decode(output[0], skip_special_tokens=True)))
- ```
+ ```
+
+ ## Acknowledgements
+
+ The dataset was collected collaboratively by the members of the Pangpuriye house during the LLM hackathon in Super AI Engineer Development Program Season 4.
+
+ We thank the organizers of this hackathon, [OpenThaiGPT](https://openthaigpt.aieat.or.th/), [AIAT](https://aiat.or.th/), [NECTEC](https://www.nectec.or.th/en/) and [ThaiSC](https://thaisc.io/), for this challenging task and the opportunity to be part of developing a Thai large language model.
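Note: the diff shows only the head and tail of the inference example. The model checkpoint id, the prompt template built with pandas, and the `get_prediction` helper fall between the two hunks and are not visible here. The following is a minimal, self-contained sketch of how the visible fragments could fit together for table-based question answering; the checkpoint id (the base OpenThaiGPT chat model is used as a stand-in for the fine-tuned weights), the prompt template, and the `get_prediction` body are assumptions, not the README's actual code.

```python
import pandas as pd
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the fine-tuned checkpoint id is not shown in this diff, so the base
# OpenThaiGPT chat model referenced in the model card is used as a stand-in.
model_id = "openthaigpt/openthaigpt-1.0.0-7b-chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumption: a small pandas table serialized into the prompt; the README's actual
# prompt template between the two hunks is not visible in this diff.
table = pd.DataFrame(
    {"country": ["Thailand", "Japan"], "population_millions": [71.6, 125.7]}
)
question = "Which country has the larger population?"

prompt = f"""Answer the question using the table below.

{table.to_csv(index=False)}
Question: {question}
Answer:"""


def get_prediction(decoded_text: str) -> str:
    """Hypothetical helper: strip the prompt and return only the generated answer."""
    return decoded_text[len(prompt):].strip()


# Mirrors the visible fragment: tokenize the prompt, generate a short answer,
# decode it, and extract the prediction.
tokens = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    tokens["input_ids"],
    max_new_tokens=20,
    eos_token_id=tokenizer.eos_token_id,
)
print(get_prediction(tokenizer.decode(output[0], skip_special_tokens=True)))
```

The generation settings follow the visible fragment (`max_new_tokens=20` and the tokenizer's EOS token); longer table answers would need a larger token budget.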