Garmin AI Coach

Your personalized AI fitness assistant powered by Garmin Connect data. Analyse your activities, track progress, and get intelligent coaching recommendations through a conversational interface.

Features

  • πŸƒβ€β™‚οΈ Activity Analysis: Query your Garmin Connect activities using natural language
  • πŸ’¬ Conversational AI: Powered by state-of-the-art language models
  • πŸ“Š Progress Tracking: Monitor your fitness journey over time
  • πŸ” Multi-User Support: Secure authentication with per-user data isolation
  • ☁️ Cloud Storage: Firestore backend for reliable data persistence

Deployment Guide

This guide explains how to deploy the Garmin AI Coach application to HuggingFace Spaces.

Prerequisites

  1. HuggingFace Account: Sign up if you don't have one
  2. Google Cloud Service Account Key: See Terraform setup
  3. HuggingFace CLI: Install with pip install "huggingface_hub[cli]" (quote the extra so shells like zsh don't expand the brackets)
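A quick way to confirm the CLI prerequisite is in place (a minimal sketch; the install hint mirrors the step above):

```shell
# Check whether the HuggingFace CLI is on PATH; if not, suggest the
# install command from the prerequisites list.
if command -v huggingface-cli >/dev/null 2>&1; then
  status="huggingface-cli found"
else
  status="missing: run pip install 'huggingface_hub[cli]'"
fi
echo "$status"
```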

Quick Start

Option A: Automated Deployment (Recommended)

Run the deployment scripts from the repository root:

# Generate requirements.txt
./infrastructure/deployment/scripts/generate-requirements.sh

# Deploy to HuggingFace Spaces
./infrastructure/deployment/scripts/deploy-to-hf.sh

Option B: Manual Deployment

  1. Authenticate with HuggingFace:
huggingface-cli login
  2. Create a private Space:

    • Go to HuggingFace Spaces
    • Click "Create new Space"
    • Set name: garmin-agent (or your preferred name)
    • Choose SDK: Gradio
    • Visibility: Private
  3. Prepare deployment files:

# From repository root
cd infrastructure/deployment/scripts
./generate-requirements.sh

# Copy files to repository root
cp infrastructure/deployment/huggingface/app.py ./app.py
cp infrastructure/deployment/huggingface/requirements.txt ./requirements.txt
cp infrastructure/deployment/huggingface/README.md ./README.md
  4. Deploy to Space:
# Clone your Space repository
git clone https://huggingface.co/spaces/YOUR_USERNAME/garmin-agent
cd garmin-agent

# Copy application files and workspace
cp /path/to/repo/app.py ./
cp /path/to/repo/requirements.txt ./
cp /path/to/repo/README.md ./
cp -r /path/to/repo/packages ./
cp -r /path/to/repo/services ./

# Commit and push
git add .
git commit -m "Initial deployment"
git push
  5. Configure Secrets and Variables:

    Go to your Space Settings:

    • Settings β†’ Secrets (for sensitive values)
    • Settings β†’ Variables (for non-sensitive configuration)

    Required Secret (Settings β†’ Secrets):

    • GOOGLE_CREDENTIALS_JSON: Paste the entire contents of your service account key JSON file

      {
        "type": "service_account",
        "project_id": "savvy-bit-472903-g9",
        ...
      }
      

    Required Variables (Settings β†’ Variables):

    • DATABASE_TYPE=firestore
    • GOOGLE_CLOUD_PROJECT=savvy-bit-472903-g9
    • ENABLE_AUTH=true
    • ENVIRONMENT=production
    • CHAT_AGENT_MODEL=hf:meta-llama/Llama-3.2-3B-Instruct

    Optional Variables:

    • HUGGINGFACE_HUB_TOKEN: Your HF token (required when CHAT_AGENT_MODEL uses an hf: model)
    • TELEMETRY_BACKEND=disabled: Telemetry configuration
  6. Restart Space: After configuring secrets, restart your Space from the Settings page.
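A frequent cause of failures at the secrets step is pasting malformed JSON into GOOGLE_CREDENTIALS_JSON. A quick local sanity check, sketched here with a minimal stand-in key file (use your real downloaded key file in practice):

```shell
# Write a minimal stand-in key (illustration only -- substitute your
# real service account key file), then confirm it parses as JSON.
cat > /tmp/sa-key.json <<'EOF'
{"type": "service_account", "project_id": "savvy-bit-472903-g9"}
EOF
if python3 -m json.tool /tmp/sa-key.json >/dev/null; then
  result="valid JSON"
else
  result="invalid JSON"
fi
echo "$result"
```

If this reports invalid JSON, re-download the key rather than hand-editing it.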

File Structure for Deployment

HuggingFace Spaces requires the following structure at the repository root:

repository-root/
β”œβ”€β”€ app.py                    # Entry point (from infrastructure/deployment/huggingface/app.py)
β”œβ”€β”€ requirements.txt          # Generated dependencies
β”œβ”€β”€ README.md                 # This file with HF metadata header
β”œβ”€β”€ packages/                 # Full workspace structure
β”‚   β”œβ”€β”€ ai-core/
β”‚   └── shared-config/
└── services/
    β”œβ”€β”€ cli/
    └── web-app/

Important: Deploy the entire workspace structure to maintain package imports and dependencies.
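Before pushing, you can verify the checkout matches this layout. The sketch below builds a scratch directory as a stand-in for your Space clone; in practice, run the same loop inside the real clone:

```shell
# Build a scratch directory mimicking the layout above, then check
# that every required top-level path exists.
demo=$(mktemp -d)
mkdir -p "$demo/packages/ai-core" "$demo/packages/shared-config" \
         "$demo/services/cli" "$demo/services/web-app"
touch "$demo/app.py" "$demo/requirements.txt" "$demo/README.md"

missing=0
for path in app.py requirements.txt README.md packages services; do
  [ -e "$demo/$path" ] || { echo "MISSING: $path"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all deployment files present"
```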

Environment Variables Reference

Variable                  Required       Description                        Example
GOOGLE_CREDENTIALS_JSON   Yes (Secret)   Service account key JSON content   See Terraform outputs
DATABASE_TYPE             Yes            Database backend type              firestore
GOOGLE_CLOUD_PROJECT      Yes            GCP project ID                     savvy-bit-472903-g9
ENABLE_AUTH               Yes            Enable multi-user authentication   true
ENVIRONMENT               Yes            Deployment environment             production
CHAT_AGENT_MODEL          Yes            AI model specification             hf:meta-llama/Llama-3.2-3B-Instruct
HUGGINGFACE_HUB_TOKEN     Conditional    HF token for HF models             hf_xxxxx
TELEMETRY_BACKEND         No             Telemetry configuration            disabled
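The required rows above can be checked locally before restarting the Space. A sketch, with the example values from this guide filled in (substitute your own):

```shell
# Example values from the reference table; replace with your own.
export DATABASE_TYPE=firestore
export GOOGLE_CLOUD_PROJECT=savvy-bit-472903-g9
export ENABLE_AUTH=true
export ENVIRONMENT=production
export CHAT_AGENT_MODEL=hf:meta-llama/Llama-3.2-3B-Instruct

# Flag any required variable that is unset or empty.
unset_count=0
for var in DATABASE_TYPE GOOGLE_CLOUD_PROJECT ENABLE_AUTH ENVIRONMENT CHAT_AGENT_MODEL; do
  eval "val=\${$var:-}"
  if [ -n "$val" ]; then
    echo "$var=$val"
  else
    echo "MISSING: $var"
    unset_count=$((unset_count + 1))
  fi
done
echo "missing: $unset_count"
```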

Monitoring and Troubleshooting

View Application Logs

In your Space:

  1. Go to your Space page
  2. Click "Logs" tab
  3. Monitor startup messages and errors

Common Issues

1. "Missing required environment variables"

  • Solution: Verify all required variables are set in Settings β†’ Variables
  • Check secret GOOGLE_CREDENTIALS_JSON is set in Settings β†’ Secrets

2. "Failed to parse GOOGLE_CREDENTIALS_JSON"

  • Solution: Ensure the secret contains valid JSON (entire service account key file)
  • Verify no extra quotes or formatting around the JSON content

3. "Failed to import application modules"

  • Solution: Ensure full workspace structure (packages/, services/) is deployed
  • Verify requirements.txt includes all dependencies

4. "Firestore connection failed"

  • Solution: Verify service account has roles/datastore.user permission
  • Check GOOGLE_CLOUD_PROJECT matches your Firestore project
  • Confirm Firestore database exists in your GCP project

5. "Model not found" or authentication errors

  • Solution: For HF models, set HUGGINGFACE_HUB_TOKEN in Variables
  • For OpenAI models, set OPENAI_API_KEY
  • For Anthropic models, set ANTHROPIC_API_KEY
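For issue 5, the rule of thumb in this guide is: a model spec with the hf: prefix needs HUGGINGFACE_HUB_TOKEN, other providers need their own keys. A small check (the model spec below is the example used throughout this guide):

```shell
# Model spec from this guide; the "hf:" prefix marks a
# HuggingFace-hosted model, which needs a hub token.
CHAT_AGENT_MODEL="hf:meta-llama/Llama-3.2-3B-Instruct"
case "$CHAT_AGENT_MODEL" in
  hf:*)
    if [ -n "${HUGGINGFACE_HUB_TOKEN:-}" ]; then
      token_check="token set"
    else
      token_check="HF model but HUGGINGFACE_HUB_TOKEN is unset"
    fi ;;
  *)
    token_check="non-HF model; check OPENAI_API_KEY / ANTHROPIC_API_KEY instead" ;;
esac
echo "$token_check"
```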

Testing the Deployment

After deployment:

  1. Visit your Space URL: https://huggingface.co/spaces/YOUR_USERNAME/garmin-agent
  2. Wait for the Space to build and start (first start takes 2-3 minutes)
  3. Register a new user account
  4. Test the chat interface with simple queries
  5. Verify Firestore connection by checking data persistence
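Steps 1 and 2 can be partly automated with a smoke test. The hostname pattern below (owner-space.hf.space) is the usual Spaces URL scheme and is an assumption here; adjust it to your Space:

```shell
# Poll the Space URL and report the HTTP status. Expect 200 once the
# build finishes; "000" means unreachable (still building, placeholder
# URL, or curl unavailable).
URL="https://YOUR_USERNAME-garmin-agent.hf.space"
code=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 "$URL" 2>/dev/null) || true
[ -n "$code" ] || code="000"
echo "HTTP status: $code"
```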

Updating the Application

To update your deployed application:

  1. Update code locally and test

  2. Regenerate requirements.txt if dependencies changed:

    ./infrastructure/deployment/scripts/generate-requirements.sh
    
  3. Copy updated files to Space repository

  4. Commit and push changes

  5. HF Spaces will automatically rebuild and restart

Security Best Practices

  1. Keep Space Private: Set visibility to "Private" for production
  2. Rotate Service Account Keys: Follow GCP key rotation guidelines
  3. Use Secrets for Credentials: Never commit credentials to repository
  4. Monitor Access Logs: Review Space access logs regularly
  5. Enable Authentication: Always deploy with ENABLE_AUTH=true

Performance Optimization

  • Model Selection: Smaller models (e.g., Llama-3.2-3B) start faster and use less memory
  • Cold Start: First request after inactivity may take 30-60 seconds
  • Firestore Region: Database in australia-southeast1 optimises latency for APAC users
  • Space Hardware: Upgrade to GPU Space for better performance with larger models

Support

For issues specific to:

Note: This deployment uses HuggingFace Spaces' native Gradio SDK support. The platform automatically handles server configuration, port binding, and SSL certificates.
