Nifdi01 committed
Commit 39a762e
1 Parent(s): df2ff90

instructions added

Files changed (1): README.md (+105 -12)
---

![](https://i.ibb.co/bs9ktP4/logo.jpg)

# ButaBytes 2.0 - The largest NLP corpus for Azerbaijani Language (43M+ sentences)

ButaBytes is designed for a wide range of NLP tasks. It was collected from 3 million sources covering a diverse range of genres and topics, such as politics, economics, science, culture, sports, history, and society. The documents mix contemporary and historical texts drawn from newspapers, magazines, academic journals, Wikipedia articles, and books, providing a comprehensive linguistic and cultural resource for NLP technologies.

## Corpus Structure

### Data Splits

The ButaBytes corpus has 4 main sources (books, wikipedia, news, and sentences) with the following distribution:

| Source Name    | Number of Instances | Size (GB) |
| -------------- | ------------------- | --------- |
| sentences.json | 43,755,942          | 10.1      |
| wikipedia.json | 178,836             | 0.64      |
| news.json      | 623,964             | 1.37      |
| books.zip      | 434                 | 0.12      |

## Methodology

The ButaBytes corpus was constructed by scraping a wide array of Azerbaijani content to ensure a comprehensive and diverse dataset. Sources included popular, reliable Azerbaijani news websites, public documents, books spanning various genres, and a rich selection of user-generated content such as social media posts and blogs. We applied specialized cleaning techniques tailored to each content type, improving the accuracy and consistency of the data across the corpus. The result is a robust and versatile resource suited to a multitude of NLP applications.
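The cleaning pipeline itself is not published with the corpus. As a rough illustration only (not the authors' actual code), a minimal normalization pass for scraped text might strip leftover HTML tags and collapse whitespace:

```python
import re

def clean_text(text):
    """Illustrative normalization: drop leftover HTML tags, collapse whitespace."""
    text = re.sub(r"<[^>]+>", " ", text)  # remove HTML tag remnants
    text = re.sub(r"\s+", " ", text)      # collapse runs of whitespace/newlines
    return text.strip()

print(clean_text("<p>Salam,  dünya!</p>"))  # Salam, dünya!
```

Real per-source cleaning (e.g. for social media posts versus books) would need rules beyond this sketch.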

## Usage Instructions

To use ButaBytes, download the files you need manually to your device.

### Reading JSON Files

To read the JSON files from the dataset, use the following function:

```python
import json

def read_local_json(file_path):
    """Load a JSON file from disk, returning None on failure."""
    try:
        with open(file_path, 'r', encoding='utf-8') as file:
            data = json.load(file)
        print(f"Successfully loaded JSON data from {file_path}.")
        return data
    except json.JSONDecodeError:
        print("The file is not a valid JSON.")
        return None
    except FileNotFoundError:
        print("The file was not found. Please ensure the file path is correct.")
        return None

# Example usage
file_path = "sentences.json"
data = read_local_json(file_path)
```
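After downloading, a quick way to sanity-check a file against the Data Splits table is to count its top-level entries. This assumes each JSON file holds one flat array of records, which is an assumption about the layout rather than a documented guarantee:

```python
import json

def count_instances(file_path):
    """Return the number of top-level entries in a JSON array file."""
    with open(file_path, 'r', encoding='utf-8') as file:
        return len(json.load(file))

# Example: the table above lists 43,755,942 instances for sentences.json
# print(count_instances("sentences.json"))
```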

### Converting JSON Data to DataFrame

With `read_local_json` defined above, the JSON sources can be loaded into pandas DataFrames:

```python
import pandas as pd

file_path = "news.json"
data = read_local_json(file_path)  # defined in the previous section
df = pd.DataFrame(data)
print(df.head())
```

```python
import pandas as pd

file_path = "wikipedia.json"
data = read_local_json(file_path)  # defined in the previous section
df = pd.DataFrame(data)
print(df.head())
```

### Unzipping and Reading Text Files

```python
import os
import zipfile
import glob
import pandas as pd

def unzip_file(zip_path, extract_to):
    """Extract a zip archive into the given directory, creating it if needed."""
    if not os.path.exists(extract_to):
        os.makedirs(extract_to)

    with zipfile.ZipFile(zip_path, 'r') as zip_ref:
        zip_ref.extractall(extract_to)
    print(f"Extracted all files from {zip_path} to {extract_to}")

# Example usage
zip_path = "books.zip"
extract_to = "books"
unzip_file(zip_path, extract_to)

def read_text_files_into_dataframe(root_folder):
    """Collect every .txt file under root_folder into a DataFrame."""
    all_text_files = glob.glob(os.path.join(root_folder, '**/*.txt'), recursive=True)
    data = []

    for file_path in all_text_files:
        with open(file_path, 'r', encoding='utf-8') as file:
            content = file.read()
        data.append({
            'file_path': file_path,
            'content': content
        })

    df = pd.DataFrame(data)
    return df

# Example usage
root_folder = "books"
df = read_text_files_into_dataframe(root_folder)
print(df.head())
```
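Note that sentences.json is roughly 10 GB, so `json.load` on it needs a machine with ample RAM. Once loaded, iterating in fixed-size batches keeps downstream steps (tokenization, writing shards) bounded. A minimal sketch, assuming the file is one flat JSON array (the exact schema is not documented here):

```python
import json
from itertools import islice

def iter_batches(file_path, batch_size=10_000):
    """Yield fixed-size batches of entries from a JSON array file."""
    with open(file_path, 'r', encoding='utf-8') as file:
        data = json.load(file)  # whole-file load: assumes sufficient RAM
    it = iter(data)
    while batch := list(islice(it, batch_size)):
        yield batch

# Example usage (process is a hypothetical downstream step):
# for batch in iter_batches("sentences.json", batch_size=100_000):
#     process(batch)
```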

## Considerations for Using the Corpus

### Social Impact

ButaBytes contributes significantly to the NLP research community by providing a valuable resource for developing text generation tools in Azerbaijani. It not only supports the advancement of language technologies but also promotes linguistic diversity and cultural preservation.

### Biases and Limitations

While efforts were made to minimize bias in the corpus, some limitations remain. Users should be cautious with models trained on this data, particularly regarding inherent biases that might influence the performance and fairness of those models.

### Corpus Authors

ButaBytes 2.0 was developed by Tifosi AI (formerly AZNLP), a group of dedicated researchers and data scientists focused on advancing artificial intelligence. The team is committed to ethical sourcing and responsible management of the dataset, ensuring it serves as a reliable and valuable resource for the community.