More README
README.md CHANGED
```diff
@@ -39,6 +39,29 @@ generation that supports 18 programming languages. It takes the OpenAI
 translate them to other languages. It is easy to add support for new languages
 and benchmarks.
 
+## Subsets
+
+For most purposes, you should use the variations called *SRCDATA-LANG*, where
+*SRCDATA* is either "humaneval" or "mbpp" and *LANG* is one of the supported
+languages. We use the canonical file extension for each language to identify
+the language, e.g., "py" for Python, "cpp" for C++, "lua" for Lua, and so on.
+
+We also provide a few other variations:
+
+- *SRCDATA-LANG-keep* is the same as *SRCDATA-LANG*, but the text of the prompt
+  is totally unchanged. If the original prompt had Python doctests, they remain
+  as Python instead of being translated to *LANG*. If the original prompt had
+  Python-specific terminology, e.g., "list", it remains "list", instead of
+  being translated, e.g., to "vector" for C++.
+
+- *SRCDATA-LANG-transform* transforms the doctests to *LANG* but leaves
+  the natural language text of the prompt unchanged.
+
+- *SRCDATA-LANG-removed* removes the doctests from the prompt.
+
+Note that MBPP does not have any doctests, so the "removed" and "transform"
+variations are not available for MBPP.
+
 ## Example
 
 The following script uses the Salesforce/codegen model to generate Lua
```
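To illustrate the naming scheme described in the added section, a subset is selected by passing the *SRCDATA-LANG* name as the dataset configuration. The sketch below assumes the dataset is published as `nuprl/MultiPL-E`, that `humaneval-lua` is one of the available configurations, and that each record exposes a `prompt` field alongside the `name` field used later in the example; treat those identifiers as assumptions rather than a definitive API.

```python
# A minimal sketch, assuming the dataset id "nuprl/MultiPL-E" and the
# configuration name "humaneval-lua"; the "prompt" field is also an assumption.
from datasets import load_dataset

problems = load_dataset("nuprl/MultiPL-E", "humaneval-lua", split="test")

for problem in problems:
    print(problem["name"])    # problem identifier, used as the output filename stem
    print(problem["prompt"])  # Lua-translated prompt to feed the model
    break
```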
```diff
@@ -72,7 +95,7 @@ for problem in problems["test"]:
         return_tensors="pt",
     ).input_ids.cuda()
     generated_ids = model.generate(
-        input_ids, max_length=
+        input_ids, max_length=512, pad_token_id=tokenizer.eos_token_id + 2
     )
     truncated_string = stop_at_stop_token(tokenizer.decode(generated_ids[0]), problem)
     filename = problem["name"] + "." + LANG
```
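The generation loop above relies on a `stop_at_stop_token` helper to truncate each completion; its definition falls outside this hunk. The sketch below is one plausible implementation, assuming each problem record carries a `stop_tokens` list of strings at which decoding should be cut off.

```python
def stop_at_stop_token(decoded_string: str, problem: dict) -> str:
    """Truncate decoded_string at the earliest stop token.

    Assumes problem["stop_tokens"] is a list of strings; the exact field
    name is an assumption, not taken from this diff.
    """
    min_stop_index = len(decoded_string)
    for stop_token in problem.get("stop_tokens", []):
        stop_index = decoded_string.find(stop_token)
        if stop_index != -1:
            min_stop_index = min(min_stop_index, stop_index)
    return decoded_string[:min_stop_index]
```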