Dataset Overview
- The dataset is designed to support the development of machine learning models for detecting daily activities, violence, and fall down scenarios from combined audio and video sources.
- The preprocessing pipeline leverages audio feature extraction, human keypoint detection, and relative positional encoding to generate a unified representation for training and inference.
- Classes:
- 0: Daily - Normal indoor activities
- 1: Violence - Aggressive behaviors
- 2: Fall Down - Sudden falls or collapses
- Data Format:
- Stored as `.npy` files for efficient loading and processing.
- Each `.npy` file is a tensor containing concatenated audio and video feature representations for a fixed sequence of frames.
- Data preprocessing code: GitHub data-preprocessing
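As a quick illustration, a single sample can be inspected with NumPy before training; the file name below is a placeholder, not an actual file shipped with the dataset.

```python
import numpy as np

# Hypothetical file name; substitute any .npy file from one of the class folders.
sample = np.load("0_daily/sample_0001.npy")

# Each file holds the concatenated audio + video features for a fixed sequence of frames.
print(sample.shape, sample.dtype)
```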
Dataset Preprocessing Pipeline
- The dataset preprocessing consists of a multi-step pipeline to extract and align audio features and video keypoints. Below is a detailed explanation of each step:
Step 1: Audio Processing
- WAV File Extraction:
- Audio is extracted from the original video files in WAV format.
- Frame Splitting:
- The audio signal is divided into 1/30-second segments to synchronize with video frames.
- MFCC Feature Extraction:
- Mel-Frequency Cepstral Coefficients (MFCC) are computed for each audio segment.
- Each MFCC output has a shape of 13 × m, where m represents the number of analysis frames in the audio segment (a sketch of this step follows below).
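The card does not include the extraction code itself, so the following is only a minimal sketch of the audio branch, assuming librosa; the file path and the reduced FFT size are assumptions made so the analysis window fits inside the short segments.

```python
import librosa

# "audio.wav" is a placeholder for the WAV track extracted from a source video.
y, sr = librosa.load("audio.wav", sr=None)
samples_per_frame = int(sr / 30)          # 1/30-second segments, one per video frame

mfcc_per_frame = []
for start in range(0, len(y) - samples_per_frame + 1, samples_per_frame):
    segment = y[start:start + samples_per_frame]
    # 13 MFCCs per segment -> shape (13, m); n_fft/hop_length are reduced here
    # (an assumption) so the window fits inside the short segment.
    mfcc = librosa.feature.mfcc(y=segment, sr=sr, n_mfcc=13, n_fft=512, hop_length=128)
    mfcc_per_frame.append(mfcc)

print(len(mfcc_per_frame), mfcc_per_frame[0].shape)
```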
Step 2: Video Processing
- YOLO Object Detection:
- Detects up to 3 individuals in each video frame using the YOLO model.
- Outputs bounding boxes for detected individuals.
- MediaPipe Keypoint Extraction:
- For each detected individual, MediaPipe extracts 33 keypoints, each represented as (x, y, z, visibility), where:
- x, y, z: spatial coordinates.
- visibility: confidence score for the detected keypoint.
- Keypoint Filtering:
- Keypoints 1, 2, and 3 (eye-region landmarks) are excluded.
- The remaining keypoints are filtered using a visibility threshold of 0.5 to ensure reliable data.
- The visibility value itself is dropped from all subsequent calculations.
- Relative Positional Encoding:
- For the remaining 30 keypoints, relative positions of the 10 most important keypoints are computed.
- These relative positions are added as additional features to improve context-aware modeling.
- Feature Dimensionality Adjustment:
- The output is reshaped to (n, 30*3 + 30, 3) = (n, 120, 3), where n is the number of frames (a sketch of these steps follows below).
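The exact keypoint post-processing code is not shown on this card, so the snippet below is a rough sketch of Step 2 under stated assumptions: the 33 MediaPipe landmarks of each detected person are already available as a (33, 4) array, the 10 "important" keypoint indices are a hypothetical choice (shoulders, elbows, wrists, hips, knees), and the relative-encoding formula is a placeholder rather than the authors' exact definition.

```python
import numpy as np

VISIBILITY_THRESHOLD = 0.5
EXCLUDED = (1, 2, 3)                                    # eye-region landmarks dropped by the pipeline
IMPORTANT = (11, 12, 13, 14, 15, 16, 23, 24, 25, 26)    # hypothetical choice of 10 keypoints
ANCHOR_ROWS = [i - 3 for i in IMPORTANT]                # row indices after removing keypoints 1-3

def person_keypoints(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (33, 4) array of (x, y, z, visibility) for one detected person."""
    kept = np.delete(landmarks, EXCLUDED, axis=0)        # (30, 4)
    kept[kept[:, 3] < VISIBILITY_THRESHOLD, :3] = 0.0    # suppress low-confidence points
    return kept[:, :3]                                   # drop visibility -> (30, 3)

def frame_features(people: list) -> np.ndarray:
    """Assemble the assumed (30*3 + 30, 3) = (120, 3) per-frame video feature."""
    persons = [person_keypoints(p) for p in people[:3]]  # up to 3 detected people
    while len(persons) < 3:
        persons.append(np.zeros((30, 3)))                # pad missing people with zeros
    stacked = np.concatenate(persons, axis=0)            # (90, 3)
    # One plausible relative encoding: offsets of the first person's keypoints
    # from the mean of its 10 "important" keypoints.
    anchor = persons[0][ANCHOR_ROWS].mean(axis=0)
    relative = persons[0] - anchor                       # (30, 3)
    return np.concatenate([stacked, relative], axis=0)   # (120, 3)
```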
Step 3: Audio-Video Feature Concatenation
- Expansion:
- Video keypoint features are expanded along the feature axis (a (1, 1, 4) expansion) to match the audio feature dimensions.
- Concatenation:
- Audio (13-dimensional) and video (12-dimensional) features are concatenated along the feature axis.
- The final representation has a shape of (n, 120, 13+12) = (n, 120, 25), where n is the number of frames (one possible reading of this step is sketched below).
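The expansion and concatenation step is described only at a high level, so the snippet below is one possible reading rather than the authors' implementation: the (x, y, z) keypoint rows are tiled 4× to 12 channels (matching the stated (1, 1, 4) expansion), and each segment's MFCCs are averaged into a 13-vector broadcast across all 120 rows.

```python
import numpy as np

def fuse_frame(mfcc: np.ndarray, video: np.ndarray) -> np.ndarray:
    """mfcc: (13, m) MFCCs for one 1/30 s segment; video: (120, 3) keypoint features."""
    audio_vec = mfcc.mean(axis=1)                           # (13,) summary of the segment
    audio = np.broadcast_to(audio_vec, (120, 13))           # (120, 13)
    video_expanded = np.tile(video, (1, 4))                 # (120, 12)
    return np.concatenate([audio, video_expanded], axis=1)  # (120, 25)

# Stacking the fused frames over time yields the stored (n, 120, 25) tensor.
```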
Data Storage
- The final processed data is saved as `.npy` files, organized into three folders:
- `0_daily/`: Contains data representing normal daily activities.
- `1_violence/`: Contains data representing violent scenarios.
- `2_fall_down/`: Contains data representing falling events.
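A minimal sketch of the assumed storage layout follows; the clip identifier and output root are placeholders.

```python
import os
import numpy as np

CLASS_DIRS = {0: "0_daily", 1: "1_violence", 2: "2_fall_down"}

def save_sample(features: np.ndarray, label: int, clip_id: str, root: str = ".") -> str:
    """Write one processed clip (the fused (n, 120, 25) tensor) into its class folder."""
    out_dir = os.path.join(root, CLASS_DIRS[label])
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"{clip_id}.npy")
    np.save(path, features)
    return path
```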
Dataset Description
This dataset provides a comprehensive representation of synchronized audio and video features for real-time activity recognition tasks.
The combination of MFCC audio features and MediaPipe keypoints helps models detect and differentiate between the defined activity classes.
Key Features:
- Multimodal Representation:
- Audio and video modalities are fused into a single representation to capture both sound and motion dynamics.
- Efficient Format:
- The `.npy` format ensures fast loading and processing, suitable for large-scale training.
- Real-World Applications:
- Designed for safety systems, healthcare monitoring, and smart home applications.
- Adapted in the SilverAssistant project: HuggingFace Silver-Multimodal Model
This dataset enables the development of robust multimodal models for detecting critical situations with high accuracy and efficiency.
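For training, the class folders map directly onto labels 0-2; the snippet below is a minimal PyTorch loading sketch, assuming the three folders sit under a single root directory.

```python
import glob
import os

import numpy as np
import torch
from torch.utils.data import Dataset

class MultimodalActivityDataset(Dataset):
    """Loads the fused .npy tensors and their class labels."""

    def __init__(self, root: str):
        self.samples = []
        for label, folder in enumerate(["0_daily", "1_violence", "2_fall_down"]):
            for path in sorted(glob.glob(os.path.join(root, folder, "*.npy"))):
                self.samples.append((path, label))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        features = np.load(path).astype(np.float32)   # (n, 120, 25) fused tensor
        return torch.from_numpy(features), label
```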
Data Sources
- Source 1: Senior Abnormal Behavior Video (시니어 이상행동 영상), AI Hub
- Source 2: Abnormal Behavior CCTV Video (이상행동 CCTV 영상), AI Hub
- Source 3: Multimodal Video (멀티모달 영상)