
Model: GPT-2

Model name: pbjsGPT2v2

Model description:

This model is a fine-tuned version of GPT-2 trained on Prebid config files collected from more than 1,100 publisher domains, with a focus on sophisticated Prebid publishers. The model provides insights into how these publishers configure their Prebid settings. Given a Prebid config setting, such as bidderTimeout, the model generates sample Prebid configuration settings based on the collected data. It aims to help publishers understand the configurations used by sophisticated publishers.
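For instance, prompting with bidderTimeout would be expected to yield output resembling a Prebid.js setConfig call. The snippet below shows the general shape only; the timeout value is hypothetical, not actual model output:

```javascript
// Illustrative Prebid.js configuration; 1500 ms is a hypothetical value,
// not a recommendation or a sample drawn from the model.
pbjs.setConfig({
  bidderTimeout: 1500 // milliseconds to wait before bids are ignored
});
```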

Intended uses:

This model is intended to assist publishers in understanding and exploring how other publishers configure their Prebid settings. It serves as a reference for gaining insights into common configurations, best practices, and different approaches used by top publishers across various domains.

Limitations:

The generated Prebid configuration settings are based on the data from the training set and may not cover all possible configurations or reflect the specific requirements of a particular domain. Publishers should carefully review and adapt the generated configurations to their specific needs and business rules.

How to use:

To use this model, provide a Prebid config setting, such as bidderSequence. The model will generate a sample Prebid configuration related to that input based on the collected data.
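The card does not include an inference snippet; a minimal sketch using the Hugging Face transformers text-generation pipeline might look like the following (the sampling parameters and prompt are illustrative assumptions):

```python
from transformers import pipeline, set_seed

# Load the fine-tuned checkpoint from the Hugging Face Hub.
generator = pipeline("text-generation", model="PeterBrendan/pbjsGPT2v2")
set_seed(42)  # optional: make sampling reproducible

# Prompt with a Prebid config setting name; the model continues it
# with sample configuration text learned from the training data.
samples = generator(
    "bidderSequence",
    max_length=60,
    num_return_sequences=3,
    do_sample=True,
)
for s in samples:
    print(s["generated_text"])
```

Each returned dict carries the prompt plus the generated continuation under the "generated_text" key; raising num_return_sequences gives more candidate configurations to compare.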

Training data:

This model was trained on Prebid config files from a subset of more than 1,100 publisher domains. The dataset was collected from a variety of publishers and represents a wide range of Prebid settings used in the industry.

Training procedure:

The model was fine-tuned using the GPT-2 base model with the aforementioned dataset.

Evaluation results:

The evaluation of this model focuses on its ability to generate coherent and valid Prebid configuration settings based on the provided Prebid config setting. Human evaluators reviewed the generated configurations for relevance and accuracy.

Safety and bias considerations:

The model is trained on data from actual Prebid config files and aims to provide accurate insights into publishers' configurations. However, it's important to note that biases may exist in the original data itself, as the training data is based on real-world configurations. Users should review and validate the generated configurations to ensure they align with their specific requirements and guidelines.

Users are encouraged to exercise caution and use their expertise in interpreting and adapting the generated Prebid configurations for their own use. The model should be seen as a helpful tool to gain inspiration and understanding of common Prebid settings but not as a substitute for thorough testing and manual review of the final configurations.
