---
title: Computer Vision Playground
emoji: 🦀
colorFrom: indigo
colorTo: blue
sdk: streamlit
sdk_version: 1.36.0
app_file: app.py
pinned: false
license: mit
---
# Computer Vision Playground
This Streamlit application streams video from the webcam, analyzes facial sentiment, and displays the results in real-time. It serves as a playground for computer vision projects, with an example facial sentiment analysis demo.
To embed this Space in an existing webpage, see the example `index.html` page. For further instructions and guidance, see the Hugging Face documentation.
## How to Use
- Clone the repository.
- Ensure you have the necessary packages installed:

  ```bash
  pip install -r requirements.txt
  ```

- Run the application:

  ```bash
  streamlit run app.py
  ```
## Create Your Own Analysis Space
Follow these steps to set up and modify the application for your own image analysis:
### Step 1: Clone the Repository
First, you need to clone the repository to your local machine. Open your terminal or command prompt and run:
```bash
git clone https://huggingface.co/spaces/eusholli/computer-vision-playground
cd computer-vision-playground
```
### Step 2: Install Dependencies
Make sure you have Python installed on your machine. You can download it from python.org.
Next, install the required packages. In the terminal, navigate to the cloned repository directory and run:

```bash
pip install -r requirements.txt
```

This will install all the necessary libraries specified in the `requirements.txt` file.
### Step 3: Run the Application
To start the Streamlit application, run:
```bash
streamlit run app.py
```
This will open a new tab in your default web browser with the Streamlit interface.
## Using the Application
### Webcam Stream
- Allow access to your webcam when prompted.
- You will see the live stream from your webcam in the "Input Stream" section.
- The application will analyze the video frames in real-time and display the sentiment results in the "Analysis" section.
### Uploading Images
- In the "Input Stream" section, under "Upload an Image", click on the "Choose an image..." button.
- Select an image file (jpg, jpeg, png) from your computer.
- The application will analyze the uploaded image and display the analysis results.
### Image URL
- In the "Input Stream" section, under "Or Enter Image URL", paste an image URL and press Enter.
- The application will download and analyze the image from the provided URL and display the analysis results.
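Under the hood, the URL flow boils down to downloading the image bytes and decoding them into an array the analysis function can consume. The sketch below is illustrative, not the exact code in `app.py`; the helper name `frame_from_url` is hypothetical, and it assumes Pillow and NumPy are installed:

```python
import io
import urllib.request

import numpy as np
from PIL import Image


def frame_from_url(url: str) -> np.ndarray:
    # Download the raw image bytes from the URL.
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    # Decode into an RGB numpy array, ready to pass to the analysis function.
    return np.array(Image.open(io.BytesIO(data)).convert("RGB"))
```

Streamlit's `st.image` can display the resulting array directly, so the same frame can feed both the preview and the analysis.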
### YouTube URL
- In the "YouTube URL" section, under "Enter a YouTube URL", paste a YouTube URL and press Enter.
- The application will stream and analyze the video directly from YouTube and display the analysis results.
### Uploading Videos
- In the "Input Stream" section, under "Upload a Video", click on the "Choose a video..." button.
- Select a video file (mp4, avi, mov, mkv) from your computer.
- The application will analyze the video frames and display the analysis results.
### Video URL
- In the "Input Stream" section, under "Or Enter Video Download URL", paste a video URL and press Enter.
- The application will download and analyze the video from the provided URL and display the analysis results.
## Customize the Analysis
You can customize the analysis function to perform your own image analysis. The default function `analyze_frame` performs facial sentiment analysis. To use your own analysis:
- Replace the contents of the `analyze_frame` function in `app.py` with your custom analysis code.
- Update any necessary imports at the top of the `app.py` file.
- Adjust the `ANALYSIS_TITLE` variable to reflect your custom analysis.
Example:

```python
ANALYSIS_TITLE = "Custom Analysis"

def analyze_frame(frame: np.ndarray):
    # Your custom analysis code here
    ...
```
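As a minimal, runnable illustration of the shape such a function can take, here is a stand-in analysis that classifies a frame by average brightness. It assumes only that `analyze_frame` receives a NumPy image array; the return format your version of `app.py` expects for display may differ:

```python
import numpy as np

ANALYSIS_TITLE = "Mean Brightness (Example)"


def analyze_frame(frame: np.ndarray):
    # Average all pixel values as a crude stand-in for a real analysis.
    brightness = float(frame.mean())
    label = "bright" if brightness > 127 else "dark"
    # Return a simple dict of results; adapt to whatever app.py renders.
    return {"brightness": round(brightness, 1), "label": label}
```

Swapping in a real model (face detection, classification, etc.) only changes the body of the function; the surrounding streaming and display logic stays the same.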
## Troubleshooting
If you encounter any issues:
- Ensure all dependencies are correctly installed.
- Check that your webcam is working and accessible.
- Verify the URLs you provide are correct and accessible.
For more detailed information, refer to the comments in the `app.py` file.
## Debugging with VS Code
If you are using VS Code as your IDE, you can use the following `launch.json` file to debug the current file (e.g. `app.py`) in your editor.
```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python:Streamlit",
            "type": "debugpy",
            "request": "launch",
            "module": "streamlit",
            "args": [
                "run",
                "${file}",
                "--server.port",
                "2000"
            ]
        }
    ]
}
```
## How to Create a New Hugging Face Space and Push Code to It
### Step 1: Create a New Hugging Face Space
- Log in to your Hugging Face account.
- Go to the Spaces section.
- Click on the Create new Space button.
- Fill in the details for your new Space:
  - Space name: Choose a unique name for your Space.
  - Owner: Ensure your username is selected.
  - Visibility: Choose between Public or Private based on your preference.
  - SDK: Select the SDK you will use (in this case `streamlit`).
- Click on the Create Space button to create your new Space.
### Step 2: Change the Local Git Remote Repo Reference
- Open your terminal or command prompt.
- Navigate to your local project directory:

  ```bash
  cd /path/to/your/project
  ```

- Remove the existing remote reference (if any):

  ```bash
  git remote remove origin
  ```

- Add a new remote reference pointing to your newly created Hugging Face Space. Replace `<your-username>` and `<your-space-name>` with your actual Hugging Face username and Space name:

  ```bash
  git remote add origin https://huggingface.co/spaces/<your-username>/<your-space-name>.git
  ```
### Step 3: Add, Commit, and Push the Code to the New Space
- Stage all the changes in your local project directory:

  ```bash
  git add .
  ```

- Commit the changes with a meaningful commit message:

  ```bash
  git commit -m "Initial commit to Huggingface Space"
  ```

- Push the changes to the new Hugging Face Space:

  ```bash
  git push origin main
  ```

Note: If your default branch is not `main`, replace `main` with the appropriate branch name in the push command.
### Conclusion
You have now successfully created a new Hugging Face Space, updated your local Git remote reference, and pushed your code to the new Space. You can verify that your code has been uploaded by visiting your Space's URL.
## Webcam STUN/TURN Server
When running remotely on Hugging Face, the code needs to access your remote webcam. It does this using the `streamlit-webrtc` module, which requires a Twilio account to be established and its credentials uploaded to the Hugging Face Space.
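A minimal sketch of how those credentials might be consumed, assuming they are exposed as the environment variables described below (the exact wiring in `app.py` may differ). Twilio's `tokens.create()` call returns short-lived STUN/TURN servers that can be passed to `streamlit-webrtc`'s `RTCConfiguration`; without credentials, a public STUN server is a reasonable fallback for local use:

```python
import os


def get_ice_servers():
    """Return ICE servers for streamlit-webrtc's RTCConfiguration."""
    account_sid = os.environ.get("TWILIO_ACCOUNT_SID")
    auth_token = os.environ.get("TWILIO_AUTH_TOKEN")
    if account_sid and auth_token:
        # Requires `pip install twilio`; the token object carries
        # a ready-made ice_servers list (STUN + TURN).
        from twilio.rest import Client

        token = Client(account_sid, auth_token).tokens.create()
        return token.ice_servers
    # Fallback: public STUN only, sufficient when no NAT traversal is needed.
    return [{"urls": ["stun:stun.l.google.com:19302"]}]
```

The returned list would then be passed as `RTCConfiguration({"iceServers": get_ice_servers()})` when constructing the WebRTC streamer.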
### How to Create a Free Twilio Account and Add Credentials to Hugging Face Space Settings
#### Step 1: Create a Free Twilio Account
- Go to the Twilio Sign-Up Page.
- Fill in your details to create a new account.
- Verify your email address and phone number.
- After verification, log in to your Twilio dashboard.
#### Step 2: Obtain `TWILIO_ACCOUNT_SID` and `TWILIO_AUTH_TOKEN`
- In the Twilio dashboard, navigate to the Console.
- Look for the Account Info section on the dashboard.
- Here, you will find your `Account SID` (referred to as `TWILIO_ACCOUNT_SID`).
- To obtain your `Auth Token` (referred to as `TWILIO_AUTH_TOKEN`), click on the Show button next to the `Auth Token`.
#### Step 3: Add Twilio Credentials to Hugging Face Space Settings
- Log in to your Hugging Face account.
- Navigate to the Hugging Face Space where you need to add the credentials.
- Go to the Settings of your Space.
- In the Variables and secrets section:
  - Click on the New variable button to add `TWILIO_ACCOUNT_SID`:
    - Name: `TWILIO_ACCOUNT_SID`
    - Value: Copy your `Account SID` from the Twilio dashboard and paste it here.
  - Click on the New secret button to add `TWILIO_AUTH_TOKEN`:
    - Name: `TWILIO_AUTH_TOKEN`
    - Value: Copy your `Auth Token` from the Twilio dashboard and paste it here.
- Save the changes.
You have now successfully added your Twilio credentials to the Hugging Face Space settings. Your application should now be able to access and use the Twilio API for WebRTC functionality.
## Contributing
We welcome contributions! If you have suggestions or improvements, feel free to open an issue or submit a pull request.
## License
This project is licensed under the MIT License.