Overview: This project focuses on 3D scene reconstruction for autonomous vehicle perception, leveraging the Vectorized Kernel Mixture (VecKM) method. VecKM, developed by Dehao Yuan, provides high efficiency, noise robustness, and superior local geometry encoding, making it ideal for processing LiDAR point cloud data. The project reconstructs autonomous vehicle environments from raw point clouds using deep learning-based feature extraction and kernel mixture encodings.
The model was evaluated using PCPNet and CARLA-simulated LiDAR data, demonstrating superior normal estimation accuracy, reduced computational costs, and improved perception for navigation in autonomous driving.
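To make the encoding idea concrete, below is a minimal NumPy sketch of a VecKM-style complex-exponential kernel mixture encoding, written from the method's published description rather than its released code. The function name `veckm_encode`, projection matrix `A`, bandwidth `alpha`, and feature size `d` are illustrative assumptions, not the project's actual configuration.

```python
import numpy as np

def veckm_encode(points, d=64, alpha=6.0, seed=0):
    """Sketch: encode each point's local geometry as a complex d-vector."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(points.shape[1], d))      # random directions a_k (assumed sampling)
    phase = np.exp(1j * alpha * (points @ A))      # e^{i*alpha*<x, a_k>}, shape (n, d)
    total = phase.sum(axis=0, keepdims=True)       # one shared sum over all points
    # G_k(x) = sum_j e^{i*alpha*<x_j - x, a_k>} = conj(e^{i*alpha*<x, a_k>}) * total_k,
    # so the whole cloud is encoded in O(n*d) without explicit neighbor gathering --
    # the source of the efficiency claim above.
    G = np.conj(phase) * total
    return G / np.clip(np.abs(G), 1e-12, None)     # unit-modulus normalization
```

For a cloud of n points this yields an (n, d) complex array; a downstream deep network would consume these encodings in place of hand-built neighborhood features.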
GitHub Repository: View Source Code on GitHub
Key Features:
- Efficient, noise-robust local geometry encoding via VecKM
- Deep learning-based feature extraction from raw LiDAR point clouds
- Surface normal estimation and 3D surface reconstruction
- Integration with CARLA-simulated LiDAR for autonomous driving scenarios
Results & Impact
- Achieved high normal estimation accuracy on the PCPNet dataset, validating VecKM's robust feature extraction capabilities.
- Reconstructed 3D surfaces from LiDAR point clouds using Poisson Surface Reconstruction, aligning closely with ground truth.
- Demonstrated real-time feasibility by integrating VecKM with CARLA-simulated LiDAR data, paving the way for real-world applications in self-driving vehicles.
Future Work:
Technologies Used:
- VecKM (Vectorized Kernel Mixture) local geometry encoding
- Deep learning-based feature extraction
- PCPNet dataset and CARLA simulator (LiDAR)
- Poisson Surface Reconstruction
This project advances 3D environment perception for autonomous vehicles through efficient, noise-robust LiDAR-based scene reconstruction.
Figure: Data collection in the CARLA simulation environment using an autonomous agent (car).
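The CARLA data-collection step above can be sketched as follows. This is a hedged sketch assuming CARLA's `sensor.lidar.ray_cast` sensor, which packs each return as four float32 values (x, y, z, intensity); the helper name `lidar_to_array` is illustrative, not from the project's code.

```python
import numpy as np

def lidar_to_array(raw_bytes):
    """Decode a raw CARLA LiDAR buffer into an (N, 4) float32 array."""
    return np.frombuffer(raw_bytes, dtype=np.float32).reshape(-1, 4)

# Inside the simulator, the sensor would be attached roughly like this
# (left as comments because it needs a running CARLA server):
#   bp = world.get_blueprint_library().find('sensor.lidar.ray_cast')
#   lidar = world.spawn_actor(bp, transform, attach_to=vehicle)
#   lidar.listen(lambda m: frames.append(lidar_to_array(m.raw_data)))
```

Each decoded frame is then ready for VecKM encoding and downstream reconstruction.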
The full project report is embedded below; it can be viewed directly or downloaded.