Model: GPT-2
Model name: pbjs_gpt2
Model description: This fine-tuned version of GPT-2 was trained on Prebid config files collected from more than 1,100 publisher domains. Given a Prebid config setting, such as bidderTimeout, the model generates sample Prebid configuration snippets based on the collected data, offering insight into how other publishers configure their Prebid settings.
Intended uses: The model assists publishers in understanding and exploring how other publishers configure their Prebid settings. It serves as a reference for common configurations, best practices, and the different approaches used across various domains.
Limitations: The generated Prebid configuration settings reflect only the training data; they may not cover all possible configurations or the specific requirements of a particular domain. Publishers should carefully review and adapt the generated configurations to their own needs and business rules.
How to use: Provide a Prebid config setting, such as bidderSequence, as the prompt. The model will generate a sample Prebid configuration related to that setting based on the collected data; a minimal usage sketch follows below.
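The card does not specify loading code, so the following is only a sketch using the Hugging Face transformers text-generation pipeline. The model identifier "pbjs_gpt2" is assumed (substitute the actual Hub path where the model is hosted), and the generation parameters are illustrative rather than recommended values.

    # Minimal sketch: prompt the fine-tuned model with a Prebid config setting.
    # "pbjs_gpt2" is an assumed identifier; replace with the real Hub path.
    from transformers import pipeline

    generator = pipeline("text-generation", model="pbjs_gpt2")

    # Generation parameters here are illustrative, not tuned recommendations.
    outputs = generator(
        "bidderSequence",
        max_new_tokens=64,
        num_return_sequences=3,
        do_sample=True,
        temperature=0.8,
    )

    for out in outputs:
        # Each result is a sample configuration snippet reflecting patterns
        # seen in the training data; review before using in production.
        print(out["generated_text"])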
Training data: The model was trained on a dataset of Prebid config files from more than 1,100 publisher domains. The dataset was collected from a variety of publishers and represents a wide range of Prebid settings used in the industry.
Training procedure: The model was fine-tuned from the GPT-2 base model on the dataset described above, reaching a final training loss of approximately 0.433.
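The original training script is not included in this card. The sketch below shows one way a comparable causal-language-modeling fine-tuning run could be set up with the transformers Trainer; the filename prebid_configs.txt is hypothetical (it assumes the collected config files have been flattened into a plain-text file), and the hyperparameters are illustrative, not the ones actually used.

    # Rough fine-tuning sketch, not the authors' actual script.
    from transformers import (
        AutoTokenizer,
        AutoModelForCausalLM,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )
    from datasets import load_dataset

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Hypothetical file containing the collected Prebid config snippets.
    dataset = load_dataset("text", data_files={"train": "prebid_configs.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

    args = TrainingArguments(
        output_dir="pbjs_gpt2",
        num_train_epochs=3,              # illustrative hyperparameters
        per_device_train_batch_size=4,
        learning_rate=5e-5,
    )

    Trainer(
        model=model,
        args=args,
        train_dataset=tokenized,
        data_collator=collator,
    ).train()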
Evaluation results: Evaluation focused on the model's ability to generate coherent and valid Prebid configuration settings for a given config setting. Human evaluators reviewed the generated configurations for relevance and accuracy.
Safety and bias considerations: The model is trained on real-world Prebid config files and aims to provide accurate insights into publishers' configurations. Because the training data reflects real-world configurations, any biases present in that data may carry over into the model's output. Users should review and validate the generated configurations to ensure they align with their specific requirements and guidelines.
Users are encouraged to exercise caution and apply their own expertise when interpreting and adapting the generated Prebid configurations. The model should be treated as a helpful tool for inspiration and for understanding common Prebid settings, not as a substitute for thorough testing and manual review of the final configurations.
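As one illustration of such review (an assumption about workflow, not something prescribed by this card), a simple sanity check might try to parse a generated snippet as JSON and flag anything unexpected for manual inspection. Real Prebid configurations are JavaScript objects passed to pbjs.setConfig, so snippets that pass this check still need to be tested in Prebid.js itself; the key whitelist below is a hypothetical example.

    # Illustrative sanity check for generated snippets; not a substitute
    # for testing the configuration in Prebid.js before deployment.
    import json

    EXPECTED_KEYS = {"bidderTimeout", "bidderSequence", "priceGranularity"}  # example whitelist

    def review_snippet(snippet: str) -> bool:
        """Return True if the snippet parses as JSON and only uses expected keys."""
        try:
            config = json.loads(snippet)
        except json.JSONDecodeError:
            return False  # not valid JSON; route to manual review
        return isinstance(config, dict) and set(config) <= EXPECTED_KEYS

    print(review_snippet('{"bidderTimeout": 1500}'))   # True
    print(review_snippet('{"unknownSetting": true}'))  # False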