Garmin AI Coach
Your personalized AI fitness assistant powered by Garmin Connect data. Analyse your activities, track progress, and get intelligent coaching recommendations through a conversational interface.
Features
- 🏃‍♂️ Activity Analysis: Query your Garmin Connect activities using natural language
- 💬 Conversational AI: Powered by state-of-the-art language models
- 📊 Progress Tracking: Monitor your fitness journey over time
- 🔒 Multi-User Support: Secure authentication with per-user data isolation
- ☁️ Cloud Storage: Firestore backend for reliable data persistence
Deployment Guide
This guide explains how to deploy the Garmin AI Coach application to HuggingFace Spaces.
Prerequisites
- HuggingFace Account: Sign up if you don't have one
- Google Cloud Service Account Key: See Terraform setup
- HuggingFace CLI: Install with `pip install "huggingface_hub[cli]"`
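A quick pre-flight check can catch missing prerequisites before you start. This is a sketch; the `KEY_FILE` path is an assumption, so adjust it to wherever your Terraform setup writes the service account key.

```shell
# Pre-flight check (sketch). KEY_FILE is an assumed path; adjust as needed.
KEY_FILE="${KEY_FILE:-service-account-key.json}"

if command -v huggingface-cli >/dev/null 2>&1; then
  echo "huggingface-cli: found"
else
  echo "huggingface-cli: missing (run: pip install \"huggingface_hub[cli]\")"
fi

if [ -f "$KEY_FILE" ]; then
  echo "service account key: found at $KEY_FILE"
else
  echo "service account key: missing (expected $KEY_FILE)"
fi
```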
Quick Start
Option A: Automated Deployment (Recommended)
Use the deployment script from repository root:
```shell
# Generate requirements.txt
./infrastructure/deployment/scripts/generate-requirements.sh

# Deploy to HuggingFace Spaces
./infrastructure/deployment/scripts/deploy-to-hf.sh
```
Option B: Manual Deployment
- Authenticate with HuggingFace:

  ```shell
  huggingface-cli login
  ```

- Create a private Space:

  - Go to HuggingFace Spaces
  - Click "Create new Space"
  - Set name: `garmin-agent` (or your preferred name)
  - Choose SDK: Gradio
  - Visibility: Private
- Prepare deployment files:

  ```shell
  # From repository root
  ./infrastructure/deployment/scripts/generate-requirements.sh

  # Copy files to repository root
  cp infrastructure/deployment/huggingface/app.py ./app.py
  cp infrastructure/deployment/huggingface/requirements.txt ./requirements.txt
  cp infrastructure/deployment/huggingface/README.md ./README.md
  ```
- Deploy to Space:

  ```shell
  # Clone your Space repository
  git clone https://huggingface.co/spaces/YOUR_USERNAME/garmin-agent
  cd garmin-agent

  # Copy application files and workspace
  cp /path/to/repo/app.py ./
  cp /path/to/repo/requirements.txt ./
  cp /path/to/repo/README.md ./
  cp -r /path/to/repo/packages ./
  cp -r /path/to/repo/services ./

  # Commit and push
  git add .
  git commit -m "Initial deployment"
  git push
  ```
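If you would rather not manage a second git checkout, recent versions of huggingface_hub (0.17+) also provide `huggingface-cli upload`, which pushes a local directory straight to a Space repository. A hedged sketch, using the same `YOUR_USERNAME` placeholder as above, with the upload guarded behind an environment variable so it only runs when explicitly requested:

```shell
# Alternative to the git workflow (sketch; requires huggingface_hub >= 0.17).
SPACE_ID="YOUR_USERNAME/garmin-agent"

# Guarded so the network call only runs when explicitly requested.
if [ "${DO_DEPLOY:-0}" = "1" ]; then
  # Upload the current directory to the root of the Space repository.
  huggingface-cli upload "$SPACE_ID" . . --repo-type=space
fi
```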
- Configure Secrets and Variables:

  Go to your Space Settings:

  - Settings → Secrets (for sensitive values)
  - Settings → Variables (for non-sensitive configuration)

  Required Secret (Settings → Secrets):

  - `GOOGLE_CREDENTIALS_JSON`: Paste the entire contents of your service account key JSON file:

    ```json
    { "type": "service_account", "project_id": "savvy-bit-472903-g9", ... }
    ```
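The application presumably parses this secret at startup. One common pattern (a sketch, not necessarily what `app.py` does; the `/tmp/sa-key.json` path is an assumption) is to write the JSON to a file and point `GOOGLE_APPLICATION_CREDENTIALS` at it, which Google client libraries pick up automatically:

```shell
# Sketch: turn the GOOGLE_CREDENTIALS_JSON secret into a key file for Google client libraries.
# The dummy value below stands in for the real secret injected by HF Spaces.
GOOGLE_CREDENTIALS_JSON='{"type": "service_account", "project_id": "example"}'

# Write the JSON verbatim (printf avoids echo's escape-handling quirks).
printf '%s' "$GOOGLE_CREDENTIALS_JSON" > /tmp/sa-key.json
export GOOGLE_APPLICATION_CREDENTIALS=/tmp/sa-key.json
```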
  Required Variables (Settings → Variables):

  ```
  DATABASE_TYPE=firestore
  GOOGLE_CLOUD_PROJECT=savvy-bit-472903-g9
  ENABLE_AUTH=true
  ENVIRONMENT=production
  CHAT_AGENT_MODEL=hf:meta-llama/Llama-3.2-3B-Instruct
  ```
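To reproduce this configuration for a local smoke test before deploying, the same variables can be exported in a shell session (values copied from above; the commented `python app.py` line assumes the Gradio entry point also runs locally with dependencies installed):

```shell
# Export the same configuration locally (values as in the Variables above).
export DATABASE_TYPE=firestore
export GOOGLE_CLOUD_PROJECT=savvy-bit-472903-g9
export ENABLE_AUTH=true
export ENVIRONMENT=production
export CHAT_AGENT_MODEL=hf:meta-llama/Llama-3.2-3B-Instruct

# Then start the app (assumes dependencies are installed):
# python app.py
```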
  Optional Variables:

  - `HUGGINGFACE_HUB_TOKEN`: Your HF token (required for HF models)
  - `TELEMETRY_BACKEND=disabled`: Telemetry configuration
Restart Space: After configuring secrets, restart your Space from the Settings page.
File Structure for Deployment
HuggingFace Spaces requires the following structure at repository root:
```
repository-root/
├── app.py             # Entry point (from infrastructure/deployment/huggingface/app.py)
├── requirements.txt   # Generated dependencies
├── README.md          # This file with HF metadata header
├── packages/          # Full workspace structure
│   ├── ai-core/
│   └── shared-config/
└── services/
    ├── cli/
    └── web-app/
```
Important: Deploy the entire workspace structure to maintain package imports and dependencies.
Environment Variables Reference
| Variable | Required | Description | Example |
|---|---|---|---|
| `GOOGLE_CREDENTIALS_JSON` | Yes (Secret) | Service account key JSON content | See Terraform outputs |
| `DATABASE_TYPE` | Yes | Database backend type | `firestore` |
| `GOOGLE_CLOUD_PROJECT` | Yes | GCP project ID | `savvy-bit-472903-g9` |
| `ENABLE_AUTH` | Yes | Enable multi-user authentication | `true` |
| `ENVIRONMENT` | Yes | Deployment environment | `production` |
| `CHAT_AGENT_MODEL` | Yes | AI model specification | `hf:meta-llama/Llama-3.2-3B-Instruct` |
| `HUGGINGFACE_HUB_TOKEN` | Conditional | HF token for HF models | `hf_xxxxx` |
| `TELEMETRY_BACKEND` | No | Telemetry configuration | `disabled` |
Monitoring and Troubleshooting
View Application Logs
In your Space:
- Go to your Space page
- Click "Logs" tab
- Monitor startup messages and errors
Common Issues
1. "Missing required environment variables"
   - Solution: Verify all required variables are set in Settings → Variables
   - Check secret `GOOGLE_CREDENTIALS_JSON` is set in Settings → Secrets

2. "Failed to parse GOOGLE_CREDENTIALS_JSON"
   - Solution: Ensure the secret contains valid JSON (entire service account key file)
   - Verify no extra quotes or formatting around the JSON content

3. "Failed to import application modules"
   - Solution: Ensure full workspace structure (packages/, services/) is deployed
   - Verify requirements.txt includes all dependencies

4. "Firestore connection failed"
   - Solution: Verify service account has `roles/datastore.user` permission
   - Check `GOOGLE_CLOUD_PROJECT` matches your Firestore project
   - Confirm Firestore database exists in your GCP project

5. "Model not found" or authentication errors
   - Solution: For HF models, set `HUGGINGFACE_HUB_TOKEN` in Variables
   - For OpenAI models, set `OPENAI_API_KEY`
   - For Anthropic models, set `ANTHROPIC_API_KEY`
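For the Firestore permission issue, the roles actually granted to the service account can be listed with gcloud. A sketch: the `SA_EMAIL` account name is hypothetical (use the account from your Terraform setup), and the command is guarded so it only runs where gcloud is installed.

```shell
# List roles granted to a service account (sketch; SA_EMAIL is a hypothetical name).
PROJECT_ID="savvy-bit-472903-g9"
SA_EMAIL="garmin-agent@${PROJECT_ID}.iam.gserviceaccount.com"

if command -v gcloud >/dev/null 2>&1; then
  # roles/datastore.user should appear in this table.
  gcloud projects get-iam-policy "$PROJECT_ID" \
    --flatten="bindings[].members" \
    --format="table(bindings.role)" \
    --filter="bindings.members:serviceAccount:${SA_EMAIL}"
else
  echo "gcloud not installed; skipping IAM check"
fi
```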
Testing the Deployment
After deployment:
- Visit your Space URL: `https://huggingface.co/spaces/YOUR_USERNAME/garmin-agent`
- Wait for the Space to build and start (first start takes 2-3 minutes)
- Register a new user account
- Test the chat interface with simple queries
- Verify Firestore connection by checking data persistence
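The first two checks can also be done from the command line: an HTTP 200 from the Space page indicates it built and is serving. A sketch, with `YOUR_USERNAME` as the placeholder above and the request guarded so it only runs on demand:

```shell
# Command-line smoke test for the Space page (sketch; replace YOUR_USERNAME).
SPACE_URL="https://huggingface.co/spaces/YOUR_USERNAME/garmin-agent"

if [ "${RUN_SMOKE_TEST:-0}" = "1" ]; then
  # -f fails on HTTP errors; -w prints the status code.
  curl -fsS -o /dev/null -w "HTTP %{http_code}\n" "$SPACE_URL"
fi
```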
Updating the Application
To update your deployed application:
1. Update code locally and test
2. Regenerate requirements.txt if dependencies changed:

   ```shell
   ./infrastructure/deployment/scripts/generate-requirements.sh
   ```

3. Copy updated files to Space repository
4. Commit and push changes
5. HF Spaces will automatically rebuild and restart
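The steps above can be sketched as a single helper function; the paths are placeholders (`/path/to/repo` as used earlier, and the Space checkout location is an assumption):

```shell
# Sketch of the update workflow as one function (paths are placeholders).
update_space() {
  repo="/path/to/repo"          # your application repository
  space_dir="./garmin-agent"    # your cloned Space repository

  # Regenerate requirements if dependencies changed
  "$repo/infrastructure/deployment/scripts/generate-requirements.sh"

  # Copy updated files into the Space checkout
  cp "$repo/app.py" "$repo/requirements.txt" "$repo/README.md" "$space_dir/"
  cp -r "$repo/packages" "$repo/services" "$space_dir/"

  # Commit and push; HF Spaces rebuilds automatically
  ( cd "$space_dir" && git add . && git commit -m "Update application" && git push )
}
```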
Security Best Practices
- Keep Space Private: Set visibility to "Private" for production
- Rotate Service Account Keys: Follow GCP key rotation guidelines
- Use Secrets for Credentials: Never commit credentials to repository
- Monitor Access Logs: Review Space access logs regularly
- Enable Authentication: Always deploy with `ENABLE_AUTH=true`
Performance Optimization
- Model Selection: Smaller models (e.g., Llama-3.2-3B) start faster and use less memory
- Cold Start: First request after inactivity may take 30-60 seconds
- Firestore Region: Database in `australia-southeast1` optimises latency for APAC users
- Space Hardware: Upgrade to GPU Space for better performance with larger models
Support
For issues specific to:
- HuggingFace Spaces: HF Spaces Documentation
- Firestore: Firestore Documentation
- Application Issues: See repository issues or documentation
Note: This deployment uses HuggingFace Spaces' native Gradio SDK support. The platform automatically handles server configuration, port binding, and SSL certificates.