Topic: Manifold Learning
A. Summary
Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.
B. Note
Steps of locally linear embedding:
(1) Assign neighbors to each data point Xi (for example by using the K nearest neighbors).
(2) Compute the weights Wij that best linearly reconstruct Xi from its neighbors, solving the constrained least-squares problem in Eq. 1.
(3) Compute the low-dimensional embedding vectors Yi best reconstructed by Wij , minimizing Eq. 2 by finding the smallest eigenmodes of the sparse symmetric matrix in Eq. 3. Although the weights Wij and vectors Yi are computed by methods in linear algebra, the constraint that points are only reconstructed from neighbors can result in highly nonlinear embeddings.
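The three steps above can be sketched in NumPy. This is a minimal illustration, not the paper's reference implementation: the function name `lle`, the regularization term `reg` (needed when K exceeds the input dimension, so the local covariance stays invertible), and the brute-force neighbor search are my own assumptions.

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Sketch of locally linear embedding for an (N, D) data matrix X."""
    N = X.shape[0]
    K = n_neighbors

    # Step 1: assign K nearest neighbors to each point Xi
    # (brute-force pairwise distances; index 0 of the sort is the point itself).
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbors = np.argsort(dists, axis=1)[:, 1:K + 1]

    # Step 2: weights Wij that best linearly reconstruct Xi from its
    # neighbors, with the constraint that each row of W sums to one.
    W = np.zeros((N, N))
    for i in range(N):
        Z = X[neighbors[i]] - X[i]                 # neighbors centered on Xi
        C = Z @ Z.T                                # local Gram matrix
        C += reg * np.trace(C) * np.eye(K)         # regularize (assumption)
        w = np.linalg.solve(C, np.ones(K))         # solve C w = 1
        W[i, neighbors[i]] = w / w.sum()           # enforce sum-to-one

    # Step 3: embedding vectors Yi from the bottom eigenvectors of the
    # sparse symmetric matrix M = (I - W)^T (I - W).
    I = np.eye(N)
    M = (I - W).T @ (I - W)
    eigvals, eigvecs = np.linalg.eigh(M)           # ascending eigenvalues
    # Discard the eigenvector for eigenvalue ~0 (the constant vector),
    # then keep the next n_components eigenvectors as the embedding.
    return eigvecs[:, 1:n_components + 1]
```

Because `W` is constructed only from each point's neighborhood, `M` is sparse in practice; a real implementation would use sparse matrices and a sparse eigensolver rather than dense `eigh`.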