---
license: apache-2.0
datasets:
- lizziepika/strava_activities_runs
- Lukamac/MegaGym_dataset
language:
- en
metrics:
- perplexity
- accuracy
base_model:
- openai-community/gpt2
pipeline_tag: text-generation
library_name: transformers
tags:
- gpt2
- text-generation
- sports
- fitness
- gym
---
# PlayPart AI Personal Trainer Model
This model is a fine-tuned version of GPT-2, trained on sports-related and gym exercise datasets. It is intended for text generation: answering questions about fitness, sports, and workout routines, and providing personalized training suggestions.
## Intended Use
- **Text Generation**: Generate text based on sports and fitness questions and interactions.
- **Personal Trainer Chatbot**: Suitable for chatbot integrations focused on fitness, workouts, and sports topics.
### Usage
To use the model, you can either query it through the Hugging Face Inference API or load it directly in your Python environment.
#### Example (Python)
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
# Load the tokenizer and fine-tuned model
tokenizer = GPT2Tokenizer.from_pretrained("Lukamac/PlayPart-AI-Personal-Trainer")
model = GPT2LMHeadModel.from_pretrained("Lukamac/PlayPart-AI-Personal-Trainer")
# Generate a response
input_text = "What are the best exercises for building upper body strength?"
input_ids = tokenizer.encode(input_text, return_tensors='pt')
# GPT-2 has no pad token, so reuse the EOS token to avoid a warning during generation
output_ids = model.generate(input_ids, max_length=50, pad_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(response)
```
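#### Example (Inference API)
For lightweight integrations (for example, a chatbot backend), you can call the model through the Hugging Face Inference API instead of loading the weights locally. The sketch below uses the `huggingface_hub` client; whether this model is currently served by the hosted Inference API, and the `HF_TOKEN` environment variable used for authentication, are assumptions for illustration.
```python
import os
from huggingface_hub import InferenceClient

# Point the client at the hosted model; the access token is assumed to be in HF_TOKEN.
client = InferenceClient(
    model="Lukamac/PlayPart-AI-Personal-Trainer",
    token=os.environ.get("HF_TOKEN"),
)

prompt = "What are the best exercises for building upper body strength?"
# Request up to 50 newly generated tokens for the prompt.
response = client.text_generation(prompt, max_new_tokens=50)
print(response)
```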