Image Segmentation
PyTorch
suwesh committed · Commit 05c2f3a · verified · 1 Parent(s): e41cea8

Update README.md

Files changed (1):
  1. README.md +6 -1
README.md CHANGED
@@ -2,5 +2,10 @@
 license: osl-3.0
 ---
 Abstract:
-Autonomous driving when applied for high-speed racing aside from urban environments presents challenges in scene understanding due to rapid changes in the track environment. Traditional sequential network approaches might struggle to keep up with the real-time knowledge and decision-making demands of an autonomous agent which covers large displacements in a short time. This paper proposes a novel baseline architecture for developing sophisticated models with the ability of true hardware-enabled parallelism to achieve neural processing speeds to mirror the agent's high velocity. The proposed model, named Parallel Perception Network (PPN) consists of two independent neural networks, a segmentation and a reconstruction network running in parallel on separate accelerated hardware. The model takes raw 3D point cloud data from the LiDAR sensor as input and converts them into a 2D Bird's Eye View Map on both devices. Each network extracts its input features along space and time dimensions independently and produces outputs in parallel. Our model is trained on a system with 2 NVIDIA T4 GPUs with a combination of loss functions including edge preservation, and shows a 1.8x speed up in model inference time compared to a sequential configuration.
+README
+Parallel-Perception-Network
+Parallel Neural Computing for Scene Understanding from LiDAR Perception in Autonomous Racing
+
+Abstract:
+Autonomous driving in high-speed racing, as opposed to urban environments, presents significant challenges in scene understanding due to rapid changes in the track environment. Traditional sequential network approaches may struggle to meet the real-time knowledge and decision-making demands of an autonomous agent covering large displacements in a short time. This paper proposes a novel baseline architecture for developing sophisticated models capable of true hardware-enabled parallelism, achieving neural processing speeds that mirror the agent’s high velocity. The proposed model, the Parallel Perception Network (PPN), consists of two independent neural networks, a segmentation network and a reconstruction network, running in parallel on separate accelerated hardware. The model takes raw 3D point cloud data from the LiDAR sensor as input and converts it into a 2D Bird’s Eye View Map on both devices. Each network independently extracts its input features along the space and time dimensions and produces outputs in parallel. The model is trained on a system with two NVIDIA T4 GPUs, using a combination of loss functions including edge preservation, and demonstrates a 2x speedup in model inference time compared to a sequential configuration.
 Implementation is available at: https://github.com/suwesh/Parallel-Perception-Network.
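
To make the point-cloud-to-BEV step concrete, here is a minimal rasterization sketch in PyTorch. It is an illustration, not the repository's preprocessing: the `(x, y)` ranges, the 0.25 m resolution, and the choice of max-height and max-intensity channels are all assumptions.

```python
import torch

def pointcloud_to_bev(points: torch.Tensor,
                      x_range=(0.0, 80.0),
                      y_range=(-40.0, 40.0),
                      resolution=0.25) -> torch.Tensor:
    """Rasterize an (N, 4) point cloud of (x, y, z, intensity) rows into a
    2-channel BEV map (max height, max intensity). Illustrative only."""
    H = int((x_range[1] - x_range[0]) / resolution)  # grid rows along x
    W = int((y_range[1] - y_range[0]) / resolution)  # grid cols along y

    x, y, z, intensity = points.unbind(dim=1)
    # Discard points outside the chosen field of view.
    keep = (x >= x_range[0]) & (x < x_range[1]) \
         & (y >= y_range[0]) & (y < y_range[1])
    x, y, z, intensity = x[keep], y[keep], z[keep], intensity[keep]

    # Metric coordinates -> flat grid-cell indices.
    rows = ((x - x_range[0]) / resolution).long()
    cols = ((y - y_range[0]) / resolution).long()
    cells = rows * W + cols

    bev = torch.zeros(2, H * W, dtype=points.dtype, device=points.device)
    # Per-cell maxima; include_self=False ignores the zero initialization,
    # so empty cells stay at zero while occupied cells get true maxima.
    bev[0].scatter_reduce_(0, cells, z, reduce="amax", include_self=False)
    bev[1].scatter_reduce_(0, cells, intensity, reduce="amax", include_self=False)
    return bev.view(2, H, W)
```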
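The speedup claim rests on the two networks computing concurrently on separate accelerators. Below is a minimal sketch of that pattern, assuming two CUDA devices and placeholder `seg_net`/`recon_net` modules standing in for the real architectures in the linked repository:

```python
import torch
import torch.nn as nn

# Placeholder stand-ins for PPN's segmentation and reconstruction branches;
# the real architectures are in the linked repository.
seg_net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 4, 1)).to("cuda:0").eval()
recon_net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 2, 1)).to("cuda:1").eval()

bev = torch.rand(1, 2, 320, 320)  # dummy BEV batch (B, C, H, W)

with torch.no_grad():
    # CUDA kernels launch asynchronously: once the first call returns
    # control to Python, the second launches immediately, so both GPUs
    # compute at the same time on their own copy of the BEV map.
    seg_out = seg_net(bev.to("cuda:0"))
    recon_out = recon_net(bev.to("cuda:1"))

# Wait for both devices to finish before reading the results.
torch.cuda.synchronize("cuda:0")
torch.cuda.synchronize("cuda:1")
print(seg_out.shape, recon_out.shape)
```

Because kernel launches do not block the host, no explicit threading is needed for the two devices to overlap; synchronization happens only when the outputs are consumed.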
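The abstract also mentions an edge-preservation term among the training losses. A common gradient-difference formulation is sketched below as a plausible reading; the paper's exact definition may differ.

```python
import torch

def edge_preservation_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L1 gradient-difference loss: penalizes predictions whose spatial
    edges (finite differences) deviate from the target's. Generic sketch."""
    dpx = pred[..., :, 1:] - pred[..., :, :-1]      # horizontal gradients
    dtx = target[..., :, 1:] - target[..., :, :-1]
    dpy = pred[..., 1:, :] - pred[..., :-1, :]      # vertical gradients
    dty = target[..., 1:, :] - target[..., :-1, :]
    return (dpx - dtx).abs().mean() + (dpy - dty).abs().mean()
```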