---
license: mit
---
# Code Quality Evaluation Dataset
Welcome to the repository for our research paper: T. Wang and Z. Chen, "Analyzing Code Text Strings for Code Evaluation," 2023 IEEE International Conference on Big Data (BigData), Sorrento, Italy, 2023, pp. 5619-5628, doi: 10.1109/BigData59044.2023.10386406.
## Contents
This repository contains the following:
- License
- Dataset
- Fine-tuned Models
## Model Info
There are three BERT models, each fine-tuned on a dataset of 70,000 Python 3 solutions submitted by users for problems #1 through #100 on LeetCode:
- `bert_lc100_hp25`: This model classifies code using the 25th percentile as its threshold. It is designed to identify lower-quartile code solutions in terms of quality or performance.
- `bert_lc100_hp50`: Operating with a median-based approach, this model uses the 50th percentile as its classification threshold. It is suitable for general assessments, providing a balanced view of code quality.
- `bert_lc100_regression`: Unlike the others, this is a regression model. It predicts an overall code quality score, offering a more detailed evaluation than the binary classification approach.
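The percentile thresholds behind the two classifiers can be illustrated with a small sketch. The function and scores below are hypothetical, not taken from the paper's pipeline: a solution is labeled 1 when its quality score meets or exceeds the chosen percentile of the score distribution.

```
import numpy as np

def percentile_labels(scores, percentile):
    # Threshold is the given percentile of the score distribution
    threshold = np.percentile(scores, percentile)
    # Label 1 = at or above the threshold, 0 = below it
    return (np.asarray(scores) >= threshold).astype(int)

# Hypothetical quality scores for five solutions
scores = [10.0, 20.0, 30.0, 40.0, 50.0]
print(percentile_labels(scores, 25).tolist())  # lower-quartile split, as in bert_lc100_hp25
print(percentile_labels(scores, 50).tolist())  # median split, as in bert_lc100_hp50
```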
## Model Usage
**Installation**

First, ensure you have the latest version of the `tf-models-official` package. You can install it using the following command:

```
pip install -q tf-models-official
```
**Loading the Model**
To utilize the bert_lc100_regression model within TensorFlow, follow these steps:
```
import tensorflow as tf
import tensorflow_text as text  # registers the custom ops the BERT preprocessing layers need

model = tf.keras.models.load_model('saved_model/bert_lc100_regression/', compile=False)
```
**Making Predictions**
To assess the quality of code, given that `X_test` contains a list of code strings, use the model to predict as follows:
```
y_pred = model.predict(X_test)
```
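A minimal sketch of post-processing the predictions, using hypothetical output values (the real outputs depend on which model you loaded): the hp25/hp50 classifiers produce one score per input that can be thresholded into class labels, while the regression model's scores can be used to rank solutions.

```
import numpy as np

# Hypothetical model outputs: one score per code string in X_test
y_pred = np.array([[0.12], [0.87], [0.45]])

# Flatten to one score per solution
scores = y_pred.ravel()

# Classifier variants (bert_lc100_hp25 / hp50): threshold at 0.5 for class labels
labels = (scores >= 0.5).astype(int)

# Regression variant: rank solutions from highest to lowest predicted quality
ranking = np.argsort(scores)[::-1]

print(labels.tolist())
print(ranking.tolist())
```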
## Reference
If you find the dataset useful in your research or applications, please cite it using the following BibTeX entry:
```
@INPROCEEDINGS{10386406,
  author={Wang, Tianyu and Chen, Zhixiong},
  booktitle={2023 IEEE International Conference on Big Data (BigData)},
  title={Analyzing Code Text Strings for Code Evaluation},
  year={2023},
  volume={},
  number={},
  pages={5619-5628},
  keywords={Measurement;Deep learning;Codes;Bidirectional control;Organizations;Transformers;Software;code assessment;code annotation;deep learning;nature language processing;software assurance;code security},
  doi={10.1109/BigData59044.2023.10386406}
}
```