Real-world 3D data is the output of various 3D sensors such as LiDAR scanners and RGB-D cameras. The raw output of these sensors, especially LiDAR scanners, contains a large amount of missing data. Furthermore, it is unstructured, meaning that the distance between neighboring points is not fixed. Therefore, it is challenging to apply conventional deep learning methods directly to the raw outputs. For instance, convolutional neural networks cannot be applied directly to point clouds to learn feature representations, because the convolution operation requires data that follows an order and lies on a structured grid. Recently, graph-based convolutional neural network (GCNN) methods have been very successful in learning point cloud representations. Most 3D shape completion pipelines utilize AutoEncoders to extract features from point clouds for downstream tasks such as classification, segmentation, detection, and other related applications. In this paper, we present the idea of self-supervised learning for the shape completion and classification of point clouds. Our idea is to add contrastive learning to AutoEncoders to encourage global feature learning of the point cloud classes; this is performed by optimizing a triplet loss. Furthermore, local feature representation learning of the point cloud is performed by adding the Chamfer distance function. To evaluate the performance of our approach, we utilize the PointNet classifier. We also extend the number of evaluation classes from 4 to 10 to show the generalization ability of the learned features. Based on our results, embeddings generated from the contrastive AutoEncoder improve point cloud shape completion and classification performance from 84.2 % to 84.9 %, achieving state-of-the-art results on 10 classes.
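The two loss components mentioned above can be sketched in a few lines: the Chamfer distance measures local reconstruction quality between the completed and ground-truth point sets, while the triplet loss pulls embeddings of the same class together and pushes different classes apart. The following is a minimal illustrative NumPy sketch, not the paper's actual implementation; the margin value and function signatures are assumptions for demonstration.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N,3) and q (M,3).

    For each point in one set, find the squared distance to its nearest
    neighbor in the other set, then average in both directions.
    """
    # Pairwise squared Euclidean distances, shape (N, M), via broadcasting.
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss on embedding vectors.

    Encourages the anchor to be closer to the positive (same class)
    than to the negative (different class) by at least `margin`.
    The margin of 0.2 here is an assumed value, not from the paper.
    """
    d_ap = np.sum((anchor - positive) ** 2)  # anchor-positive squared distance
    d_an = np.sum((anchor - negative) ** 2)  # anchor-negative squared distance
    return max(0.0, d_ap - d_an + margin)
```

In a contrastive AutoEncoder of this kind, the total objective is typically a weighted sum of the reconstruction term (Chamfer) and the embedding term (triplet), so the encoder learns both local geometry and globally class-discriminative features.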