Abstract:
In many real-world cases a feature-based description of objects is difficult to obtain; for this reason, graph-based representations have become popular, thanks to their ability to characterize data effectively.
Learning models for detecting and classifying object categories is a challenging problem in machine vision, especially when objects are not described in vectorial form. Measuring their structural similarity, as well as characterizing a set of graphs through a representative, are only some of the hurdles involved.
This work presents a novel technique to classify objects abstracted in a structured manner, by means of a generative model.
The spectral approach allows graphs to be viewed as clouds of points in a multidimensional space and eases the application of statistical tools and concepts, in particular the probability density function.
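As a minimal sketch of this spectral view (assuming the adjacency matrix as the decomposed operator and a fixed embedding dimension k, neither of which is fixed by the abstract), each graph can be mapped to a cloud of node points:

```python
import numpy as np

def spectral_embedding(adjacency: np.ndarray, k: int) -> np.ndarray:
    """Embed a graph as a cloud of points using the leading part of its spectrum.

    Illustrative sketch: each node becomes a k-dimensional point given by the
    leading eigenvectors scaled by the square roots of their eigenvalues.
    """
    # Symmetric eigendecomposition of the adjacency matrix
    eigvals, eigvecs = np.linalg.eigh(adjacency)
    # Sort by decreasing eigenvalue magnitude and keep the k leading components
    order = np.argsort(-np.abs(eigvals))[:k]
    lam, phi = eigvals[order], eigvecs[:, order]
    # Rows of the scaled eigenvector matrix form the point cloud for the graph
    return phi * np.sqrt(np.abs(lam))
```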
A dual generative model is developed, taking into account both the eigenvector and the eigenvalue parts of the graphs' eigendecomposition.
The eigenvector generative model and the related prediction phase exploit a nonparametric technique, namely the kernel density estimator, whilst the eigenvalue learning phase is based on a classical parametric approach. Since eigenvectors are sign-ambiguous, i.e. they are recovered only up to a factor of ±1, a new method to correct their direction is proposed, together with a further alignment stage based on matrix rotation.
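A hedged sketch of two of the ingredients mentioned above: a max-magnitude heuristic for resolving the ±1 sign ambiguity (a common convention, not necessarily the correction method proposed here) and a Gaussian kernel density estimator fitted to the pooled eigenvector embeddings of a class of graphs:

```python
import numpy as np
from scipy.stats import gaussian_kde

def fix_eigenvector_signs(phi: np.ndarray) -> np.ndarray:
    """Resolve the +/-1 sign ambiguity of each eigenvector column by flipping
    it so that the entry of largest magnitude is positive (a simple heuristic).
    """
    max_rows = np.argmax(np.abs(phi), axis=0)
    signs = np.sign(phi[max_rows, np.arange(phi.shape[1])])
    signs[signs == 0] = 1.0
    return phi * signs

def fit_eigenvector_kde(embeddings: list) -> gaussian_kde:
    """Fit a nonparametric density (Gaussian KDE) to the pooled point clouds
    of a class of graphs; the KDE plays the role of the eigenvector model.
    """
    points = np.vstack(embeddings)   # shape (n_points, k)
    return gaussian_kde(points.T)    # scipy expects shape (dims, n_points)
```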
Finally, the two spectral components are merged and used for the ultimate aim, that is the classification of out-of-sample graphs.
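One way the two spectral parts could be merged for out-of-sample classification is sketched below; the Gaussian model over eigenvalues and the additive combination of log-likelihoods are illustrative assumptions, not the paper's exact decision rule:

```python
import numpy as np
from scipy.stats import multivariate_normal

def classify_graph(embedding, eigvals, class_models):
    """Assign an out-of-sample graph to the class maximizing the combined
    log-likelihood of its eigenvector and eigenvalue descriptions.

    class_models maps a label to (kde, mean, cov): `kde` is the eigenvector
    density, while (mean, cov) defines an assumed Gaussian over the leading
    eigenvalues (the parametric eigenvalue part, chosen here for illustration).
    """
    best_label, best_score = None, -np.inf
    for label, (kde, mean, cov) in class_models.items():
        vec_ll = np.sum(np.log(kde(embedding.T) + 1e-12))        # eigenvector part
        val_ll = multivariate_normal.logpdf(eigvals, mean, cov)  # eigenvalue part
        score = vec_ll + val_ll
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```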