Abstract:
Industrial 3D scanners based on laser lines and point clouds show their limits when the point cloud must be triangulated to retrieve the object's mesh. In this thesis, a novel approach is proposed to overcome those limitations by training a Multi-Layer Perceptron (MLP) as an implicit neural representation of the scanned object's volume. The network is trained by sampling points from pictures of the object and classifying each point as internal or external according to its position with respect to the laser edge projected onto the object itself.
The resulting neural network is a function that maps 3D points to a volumetric density, from which the mesh of the original object can be generated dynamically with algorithms such as Marching Cubes. Thanks to the properties of the implicit neural representation, this approach would improve both the range of measurements that can be taken on the mesh and their precision.
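To make the pipeline concrete, the following is a minimal, hypothetical sketch (not the thesis implementation): a tiny NumPy MLP is trained as an occupancy field on points labelled internal/external, here using a unit sphere as a stand-in for the laser-classified samples; the shape, network size, and training settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample 3D points and label them 1 if inside a unit sphere, else 0.
# (In the thesis, these labels would come from the laser-edge classification.)
X = rng.uniform(-1.5, 1.5, size=(4000, 3))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(float)

# Tiny two-layer MLP with a sigmoid output: f(x, y, z) -> density in [0, 1].
H = 32
W1 = rng.normal(0, 0.5, (3, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(P):
    h = np.tanh(P @ W1 + b1)
    return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

# Full-batch gradient descent on binary cross-entropy.
lr = 0.5
for _ in range(500):
    h, p = forward(X)
    g = (p - y[:, None]) / len(X)          # gradient at the sigmoid output
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (1 - h**2)           # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, p = forward(X)
acc = ((p[:, 0] > 0.5) == (y > 0.5)).mean()
print(f"occupancy accuracy: {acc:.2f}")

# The trained network is an implicit representation: the level set p = 0.5
# can be meshed by evaluating it on a regular grid and running an algorithm
# such as skimage.measure.marching_cubes on the resulting density volume.
```

The same structure scales to the full pipeline: a deeper MLP, labels derived from the laser edge, and a grid evaluation followed by Marching Cubes to extract the mesh.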