Clustering
Clustering is the assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense. Clustering is a method of unsupervised learning, and a common technique for statistical data analysis used in many fields.
Hierarchical algorithms find successive clusters using previously established clusters. These algorithms usually are either agglomerative ("bottom-up") or divisive ("top-down"). Agglomerative algorithms begin with each element as a separate cluster and merge them into successively larger clusters. Divisive algorithms begin with the whole set and proceed to divide it into successively smaller clusters.
Partitional algorithms typically determine all clusters at once, but can also be used as divisive algorithms in hierarchical clustering. Many partitional clustering algorithms require the number of clusters to be specified prior to execution. If the proper value is not known beforehand, it must be determined, a problem on its own for which a number of techniques have been developed.
Density-based clustering algorithms are devised to discover arbitrary-shaped clusters. In this approach, a cluster is regarded as a region in which the density of data objects exceeds a threshold.
Subspace clustering methods look for clusters that can only be seen in a particular projection (subspace, manifold) of the data. These methods thus can ignore irrelevant attributes. The general problem is also known as correlation clustering, while the special case of axis-parallel subspaces is also known as two-way clustering, co-clustering or biclustering in bioinformatics: in these methods not only the objects are clustered but also the features of the objects, i.e., if the data is represented in a data matrix, the rows and columns are clustered simultaneously. However, unlike general subspace methods, they usually do not work with arbitrary feature combinations.
Agglomerative Hierarchical Clustering
Agglomerative hierarchical clustering seeks to build a hierarchy of clusters in a bottom-up approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy. The results of hierarchical clustering are usually presented in a dendrogram.
In general, the merges are determined in a greedy manner. In order to decide which clusters should be combined, a measure of dissimilarity between sets of observations is required. In most methods of hierarchical clustering, this is achieved by the use of an appropriate metric and a linkage criterion, which specifies the dissimilarity of sets as a function of the pairwise distances of observations in the sets. Hierarchical clustering has the distinct advantage that any valid measure of distance can be used.
def hclust(data: Array[Array[Double]], method: String): HierarchicalClustering
def hclust[T](data: Array[T], distance: Distance[T], method: String): HierarchicalClustering
public class HierarchicalClustering {
public static HierarchicalClustering fit(Linkage linkage);
}
fun hclust(data: Array<DoubleArray>, method: String): HierarchicalClustering
fun <T> hclust(data: Array<T>, distance: Distance<T>, method: String): HierarchicalClustering
The parameter method specifies the agglomeration method to merge clusters. This should be one of "single", "complete", "upgma"/"average", "upgmc"/"centroid", "wpgma", "wpgmc"/"median", and "ward".
The single linkage defines the distance between groups as the distance between the closest pair of objects, one from each group. A drawback of this method is the so-called chaining phenomenon: clusters may be forced together due to single elements being close to each other, even though many of the elements in each cluster may be very distant to each other.
Single linkage clustering is essentially the same as Kruskal's algorithm for minimum spanning trees. However, in single linkage clustering, the order in which clusters are formed is important, while for minimum spanning trees what matters is the set of pairs of points that form distances chosen by the algorithm.
The complete linkage is the opposite of single linkage. Distance between groups is now defined as the distance between the most distant pair of objects, one from each group.
UPGMA (Unweighted Pair Group Method with Arithmetic mean, also known as average linkage) defines the distance between two clusters as the mean distance between all possible pairs of nodes in the two clusters.
In bioinformatics, UPGMA is used for the creation of phenetic trees (phenograms). UPGMA assumes a constant rate of evolution (molecular clock hypothesis), and is not a well-regarded method for inferring relationships unless this assumption has been tested and justified for the data set being used.
UPGMC (Unweighted Pair Group Method using Centroids, also known as centroid linkage) defines the distance between two clusters as the Euclidean distance between their centroids, as calculated by arithmetic mean. Only valid for Euclidean distance based proximity matrix.
WPGMA (Weighted Pair Group Method with Arithmetic mean) down-weights the largest group by giving equal weights to the two branches of the dendrogram that are about to fuse.
Note that the terms weighted and unweighted refer to the final result, not the math by which it is achieved. Thus, the simple averaging in WPGMA produces a weighted result, and the proportional averaging in UPGMA produces an unweighted result.
WPGMC (Weighted Pair Group Method using Centroids, also known as median linkage) defines the distance between two clusters as the Euclidean distance between their weighted centroids. Only valid for Euclidean distance based proximity matrix.
Ward's linkage. Ward's linkage follows the analysis of variance approach. The dissimilarity between two clusters is computed as the increase in the "error sum of squares" (ESS) after fusing two clusters into a single cluster. Ward's method seeks to choose the successive clustering steps so as to minimize the increase in ESS at each step. Note that it is only valid for a Euclidean distance based proximity matrix.
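For orientation before the worked example, each agglomeration method corresponds to a Linkage class in the smile.clustering.linkage package of the Java API. The sketch below assumes the linkage classes follow the naming of the methods above; only CompleteLinkage is used later in this section, so treat the other class names as assumptions and check the package for the exact factories.
import smile.clustering.HierarchicalClustering;
import smile.clustering.linkage.*;

// Sketch: cluster the same data with different linkage criteria by swapping
// the Linkage implementation passed to HierarchicalClustering.fit.
double[][] x = new double[100][2];
java.util.Random rng = new java.util.Random(1);
for (double[] row : x) {
    row[0] = rng.nextGaussian();
    row[1] = rng.nextGaussian();
}

var single   = HierarchicalClustering.fit(SingleLinkage.of(x));    // nearest-pair distance
var complete = HierarchicalClustering.fit(CompleteLinkage.of(x));  // farthest-pair distance
var average  = HierarchicalClustering.fit(UPGMALinkage.of(x));     // mean pairwise distance (UPGMA)
var ward     = HierarchicalClustering.fit(WardLinkage.of(x));      // minimum increase in ESS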
To visualize the clustering results, we apply hierarchical clustering to two-dimensional data in the following example. The data is generated from six Gaussian distributions, each with 300 samples.
val x = read.csv("data/clustering/rem.txt", header=false, delimiter=" ").toArray()
val clusters = hclust(x, "complete")
show(dendrogram(clusters))
import org.apache.commons.csv.CSVFormat;
import smile.clustering.*;
import smile.clustering.linkage.*;
import smile.io.Read;
import smile.plot.swing.*;
var x = Read.csv("data/clustering/rem.txt", CSVFormat.DEFAULT.withDelimiter(' ')).toArray();
var clusters = HierarchicalClustering.fit(CompleteLinkage.of(x));
var plot = new Dendrogram(clusters.tree(), clusters.height());
plot.canvas().window();
import smile.*;
import smile.clustering.*;
import smile.plot.swing.*;
import java.awt.Color;
val x = read.csv("data/clustering/rem.txt", header=false, delimiter=' ').toArray();
val clusters = hclust(x, "complete");
val plot = Dendrogram(clusters.tree(), clusters.height());
plot.canvas().window();
The clustering results can be visualized by a dendrogram, which is a tree diagram illustrating the arrangement of the clusters produced by hierarchical clustering.
If a hard partition is needed, we can cut the hierarchical clustering tree into several groups by specifying either the desired number of clusters or a cut height.
val y = clusters.partition(6)
show(plot(x, y, '.'))
var y = clusters.partition(6);
ScatterPlot.of(x, y, '.').canvas().window();
val y = clusters.partition(6)
ScatterPlot.of(x, y, '.').canvas().window();
The resulting partition into six clusters is shown in the scatter plot.
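The tree can also be cut at a given height rather than at a fixed number of clusters. A minimal sketch, assuming the partition method has an overload that takes a double cut height alongside the integer version used above (check HierarchicalClustering for the exact signature):
// Cut the dendrogram at a given height: every merge above the cut is undone,
// so the number of groups depends on the data rather than being fixed in advance.
double h = 6.0;                    // hypothetical cut height for this data set
var y2 = clusters.partition(h);    // assumed overload: partition(double h)
ScatterPlot.of(x, y2, '.').canvas().window();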
K-Means
K-Means clustering partitions n observations into k clusters in which each observation belongs to the cluster with the nearest mean. Although finding an exact solution to the K-Means problem for arbitrary input is NP-hard, the standard approach to finding an approximate solution (often called Lloyd's algorithm or the K-Means algorithm) is used widely and frequently finds reasonable solutions quickly.
K-Means is a hard clustering method, i.e. each sample is assigned to a specific cluster. In contrast, soft clustering, e.g. the Expectation-Maximization algorithm for Gaussian mixtures, assigns samples to different clusters with different probabilities.
The K-Means algorithm has at least two major theoretic shortcomings:
- First, it has been shown that the worst case running time of the algorithm is super-polynomial in the input size.
- Second, the approximation found can be arbitrarily bad with respect to the objective function compared to the optimal clustering.
In Smile, we use K-Means++ which addresses the second of these obstacles by specifying a procedure to initialize the cluster centers before proceeding with the standard K-Means optimization iterations. With the K-Means++ initialization, the algorithm is guaranteed to find a solution that is O(log k) competitive to the optimal K-Means solution.
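For intuition, here is a minimal, self-contained sketch of the K-Means++ seeding step (an illustration of the idea, not Smile's internal code): the first center is chosen uniformly at random, and each subsequent center is drawn with probability proportional to the squared distance from the nearest center chosen so far.
// Squared Euclidean distance between two points.
double squaredDistance(double[] a, double[] b) {
    double sum = 0.0;
    for (int i = 0; i < a.length; i++) {
        double d = a[i] - b[i];
        sum += d * d;
    }
    return sum;
}

// K-Means++ seeding sketch: pick k initial centers from the data.
double[][] seed(double[][] data, int k, java.util.Random rng) {
    int n = data.length;
    double[][] centers = new double[k][];
    centers[0] = data[rng.nextInt(n)];                   // first center: uniform at random
    double[] d2 = new double[n];                         // squared distance to nearest chosen center
    java.util.Arrays.fill(d2, Double.MAX_VALUE);
    for (int j = 1; j < k; j++) {
        double sum = 0.0;
        for (int i = 0; i < n; i++) {                    // refresh distances against the newest center
            d2[i] = Math.min(d2[i], squaredDistance(data[i], centers[j - 1]));
            sum += d2[i];
        }
        double r = rng.nextDouble() * sum;               // sample an index proportional to d2
        int next = 0;
        double cum = d2[0];
        while (cum < r && next < n - 1) cum += d2[++next];
        centers[j] = data[next];
    }
    return centers;
}
The selected centers would then be refined by the usual Lloyd iterations.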
We also use K-D trees to speed up each K-Means step, as described in the filter algorithm by Kanungo et al.
def kmeans(data: Array[Array[Double]], k: Int, maxIter: Int = 100, tol: Double = 1E-4, runs: Int = 10): KMeans
public class KMeans {
public static KMeans fit(double[][] data, int k, int maxIter, double tol);
}
fun kmeans(data: Array<DoubleArray>, k: Int, maxIter: Int = 100, tol: Double = 1E-4, runs: Int = 10): KMeans
The parameter maxIter specifies the maximum number of iterations. If the output of K-Means is used to initialize other algorithms, a small number (say, 20) is usually sufficient. In practice, we often run K-Means multiple times and choose the best result. To do that, set the parameter runs > 1 (e.g. 10 ~ 20).
val clusters = kmeans(x, 6, runs = 20)
show(plot(x, clusters.y, '.'))
var clusters = PartitionClustering.run(20, () -> KMeans.fit(x, 6));
ScatterPlot.of(x, clusters.y, '.').canvas().window();
val clusters = kmeans(x, 6, runs = 20)
ScatterPlot.of(x, clusters.y, '.').canvas().window()
K-Means works very well on Gaussian mixtures.
If the clusters are elongated, however, the results may be far from optimal.
val x = read.csv("data/clustering/elongate.txt", header=false, delimiter="\t").toArray()
val clusters = kmeans(x, 2, runs = 20)
show(plot(x, clusters.y, '.'))
var x = Read.csv("data/clustering/elongate.txt", CSVFormat.DEFAULT.withDelimiter('\t')).toArray();
var clusters = PartitionClustering.run(20, () -> KMeans.fit(x, 2));
ScatterPlot.of(x, clusters.y, '.').canvas().window();
val x = read.csv("data/clustering/elongate.txt", header=false, delimiter='\t').toArray()
val clusters = kmeans(x, 2, runs = 20)
ScatterPlot.of(x, clusters.y, '.').canvas().window()
In K-Means, the number of clusters K has to be supplied by the user. However, the appropriate number of clusters is often unknown in practice. Several approaches (e.g. X-Means, G-Means, deterministic annealing, etc.) have been proposed to handle this challenge.
X-Means
The X-Means clustering algorithm is an extension of K-Means that tries to automatically determine the number of clusters based on BIC scores. Starting with only one cluster, the X-Means algorithm goes into action after each run of K-Means, making local decisions about which subset of the current centroids should split in order to better fit the data. The splitting decision is made by computing the Bayesian Information Criterion (BIC).
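For intuition, the original X-Means paper (Pelleg and Moore) scores a candidate model M_j with a BIC of the form below, where \hat{l}_j(D) is the log-likelihood of the data under a spherical Gaussian model, p_j is the number of free parameters, and R is the number of points; a centroid split is kept only if it improves this score. Smile's exact scoring may differ in details.

\mathrm{BIC}(M_j) = \hat{l}_j(D) - \frac{p_j}{2}\,\log R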
def xmeans(data: Array[Array[Double]], k: Int = 100): XMeans
public class XMeans {
public static XMeans fit(double[][] data, int kmax, int maxIter, double tol);
}
fun xmeans(data: Array<DoubleArray>, k: Int = 100): XMeans
where the parameter k is the maximum number of clusters.
val x = read.csv("data/clustering/rem.txt", header=false, delimiter=" ").toArray()
val clusters = xmeans(x, 50)
show(plot(x, clusters.y, '.'))
var x = Read.csv("data/clustering/rem.txt", CSVFormat.DEFAULT.withDelimiter(' ')).toArray();
var clusters = XMeans.fit(x, 50);
ScatterPlot.of(x, clusters.y, '.').canvas().window();
val x = read.csv("data/clustering/rem.txt", header=false, delimiter=' ').toArray()
val clusters = xmeans(x, 50)
ScatterPlot.of(x, clusters.y, '.').canvas().window()
G-Means
The G-Means clustering algorithm is another extension of K-Means that tries to automatically determine the number of clusters with a normality test. The G-Means algorithm is based on a statistical test for the hypothesis that a subset of the data follows a Gaussian distribution. G-Means runs K-Means with increasing k in a hierarchical fashion until the test accepts the hypothesis that the data assigned to each K-Means center are Gaussian.
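For intuition, the G-Means paper (Hamerly and Elkan) uses the Anderson-Darling test: the points of a candidate cluster are projected onto the vector connecting its two child centroids, standardized and sorted to give z_1 ≤ … ≤ z_n, and the statistic below (with Φ the standard normal CDF) is compared, after a small-sample correction, against a critical value to decide whether to keep the split. Smile's implementation may differ in details.

A^2 = -n - \frac{1}{n} \sum_{i=1}^{n} (2i - 1)\left[\ln \Phi(z_i) + \ln\!\bigl(1 - \Phi(z_{n+1-i})\bigr)\right]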
def gmeans(data: Array[Array[Double]], k: Int = 100): GMeans
public class GMeans {
public static GMeans fit(double[][] data, int kmax, int maxIter, double tol);
}
fun gmeans(data: Array<DoubleArray>, k: Int = 100): GMeans
where the parameter k is the maximum number of clusters.
val clusters = gmeans(x, 50)
show(plot(x, clusters.y, '.'))
var clusters = GMeans.fit(x, 50);
ScatterPlot.of(x, clusters.y, '.').canvas().window();
val clusters = gmeans(x, 50)
ScatterPlot.of(x, clusters.y, '.').canvas().window()
Neither X-Means nor G-Means works well on the elongate data. Both report only one cluster.
Deterministic Annealing Clustering
The observation of annealing processes in physical chemistry motivated the use of similar concepts to avoid local minima of the optimization cost. Certain chemical systems can be driven to their low-energy states by annealing, which is a gradual reduction of temperature, spending a long time at the vicinity of the phase transition points. In the corresponding probabilistic framework, a Gibbs distribution is defined over the set of all possible configurations which assigns higher probability to configurations of lower energy. This distribution is parameterized by the temperature, and as the temperature is lowered it becomes more discriminating (concentrating most of the probability in a smaller subset of low-energy configurations). At the limit of low temperature it assigns nonzero probability only to global minimum configurations.
A known technique for nonconvex optimization that capitalizes on this physical analogy is simulated annealing, based on the Metropolis algorithm. A sequence of random moves is generated and the random decision to accept a move depends on the cost of the resulting configuration relative to that of the current state. However, one must be very careful with the annealing schedule, i.e., the rate at which the temperature is lowered. In theory, the global minimum can be achieved if the schedule obeys T ∝ 1 / log n, where n is the number of the current iteration. Such schedules are not realistic in many applications. It was shown that perturbations of infinite variance (e.g., the Cauchy distribution) provide better ability to escape from minima and allow, in principle, the use of faster schedules.
Deterministic annealing tries to enjoy the best of both worlds. On the one hand it is deterministic, meaning that we do not want to be wandering randomly on the energy surface while making incremental progress on the average, as is the case for simulated annealing. On the other hand, it is still an annealing method and aims at the global minimum, instead of getting greedily attracted to a nearby local minimum. One can view deterministic annealing as replacing stochastic simulations by the use of expectation. An effective energy function, which is parameterized by a (pseudo) temperature, is derived through expectation and is deterministically optimized at successively reduced temperatures.
Deterministic annealing clustering is based on principles of information theory and probability theory, and it consists of minimizing the clustering cost at prescribed levels of randomness. The method provides soft clustering solutions at different scales, where the scale is directly related to the temperature parameter. For each temperature value, the algorithm iterates between the calculation of all posterior probabilities and the update of the centroid vectors, until convergence is reached. There are "phase transitions" in the design process, where phases correspond to the number of effective clusters in the solution, which grows via splits as the temperature is lowered. The annealing starts with a high temperature, at which all centroid vectors converge to the center of the pattern distribution (independent of their initial positions). Below a critical temperature the vectors start to split. Further decreasing the temperature leads to more splits until all centroid vectors are separate. The annealing can therefore avoid (if it is sufficiently slow) convergence to local minima. If a limitation on the number of clusters is imposed, then at zero temperature a hard clustering solution is obtained.
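To make the inner loop concrete, the sketch below (an illustration only, assuming squared Euclidean distortion and uniform cluster priors, not Smile's internal code) performs one iteration at a fixed temperature T: compute the Gibbs posteriors p(c|x) ∝ exp(-||x - yc||^2 / T), then re-estimate each centroid as the posterior-weighted mean.
// One deterministic-annealing iteration at temperature T (simplified sketch,
// numerical safeguards omitted for brevity).
void daIteration(double[][] data, double[][] centroids, double T) {
    int k = centroids.length;
    int d = data[0].length;
    double[][] sum = new double[k][d];                   // posterior-weighted sums
    double[] weight = new double[k];                     // total posterior mass per cluster
    for (double[] x : data) {
        double[] p = new double[k];
        double z = 0.0;
        for (int c = 0; c < k; c++) {
            double dist2 = 0.0;                          // squared distance ||x - y_c||^2
            for (int j = 0; j < d; j++) {
                double diff = x[j] - centroids[c][j];
                dist2 += diff * diff;
            }
            p[c] = Math.exp(-dist2 / T);                 // unnormalized Gibbs posterior
            z += p[c];
        }
        for (int c = 0; c < k; c++) {
            p[c] /= z;                                   // normalize p(c|x)
            weight[c] += p[c];
            for (int j = 0; j < d; j++) sum[c][j] += p[c] * x[j];
        }
    }
    for (int c = 0; c < k; c++)                          // centroid = posterior-weighted mean
        for (int j = 0; j < d; j++)
            centroids[c][j] = sum[c][j] / weight[c];
}
At high T the posteriors are nearly uniform and all centroids collapse to the data mean; as T is lowered, the posteriors sharpen and clusters split.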
def dac(data: Array[Array[Double]], k: Int, alpha: Double): DeterministicAnnealing
public class DeterministicAnnealing {
public static DeterministicAnnealing fit(double[][] data, int Kmax, double alpha, int maxIter, double tol, double splitTol);
}
fun dac(data: Array<DoubleArray>, k: Int, alpha: Double): DeterministicAnnealing
where k is the maximum number of clusters, and alpha is the annealing control parameter in (0, 1). The temperature T decreases as Ti+1 = alpha * Ti.
smile> dac(x, 12, 0.9)
res58: DeterministicAnnealing = Cluster distortion: 17904.72351
Cluster size of 300 data points:
Cluster 1 157 (52.3%)
Cluster 2 42 (14.0%)
Cluster 3 3 ( 1.0%)
Cluster 4 2 ( 0.7%)
Cluster 5 3 ( 1.0%)
Cluster 6 0 ( 0.0%)
Cluster 7 0 ( 0.0%)
Cluster 8 0 ( 0.0%)
Cluster 9 84 (28.0%)
Cluster 10 2 ( 0.7%)
Cluster 11 2 ( 0.7%)
Cluster 12 5 ( 1.7%)
smile> DeterministicAnnealing.fit(x, 12, 0.9, 100, 1E-4, 1E-2)
$126 ==> Cluster distortion: 2862.52100
Cluster size of 1800 data points:
Cluster 1 297 (16.5%)
Cluster 2 105 ( 5.8%)
Cluster 3 159 ( 8.8%)
Cluster 4 137 ( 7.6%)
Cluster 5 139 ( 7.7%)
Cluster 6 21 ( 1.2%)
Cluster 7 292 (16.2%)
Cluster 8 149 ( 8.3%)
Cluster 9 140 ( 7.8%)
Cluster 10 59 ( 3.3%)
Cluster 11 143 ( 7.9%)
Cluster 12 159 ( 8.8%)
>>> dac(x, 12, 0.9)
res22: smile.clustering.DeterministicAnnealing = Cluster distortion: 2862.52100
Cluster size of 1800 data points:
Cluster 1 297 (16.5%)
Cluster 2 105 ( 5.8%)
Cluster 3 159 ( 8.8%)
Cluster 4 137 ( 7.6%)
Cluster 5 139 ( 7.7%)
Cluster 6 21 ( 1.2%)
Cluster 7 292 (16.2%)
Cluster 8 149 ( 8.3%)
Cluster 9 140 ( 7.8%)
Cluster 10 59 ( 3.3%)
Cluster 11 143 ( 7.9%)
Cluster 12 159 ( 8.8%)
Note that we set k = 12 in the example although we know there are 6 clusters. This is because the algorithm maintains two codevectors/centroids for each cluster for the sake of splitting. The algorithm correctly figures out that half of them are ghost clusters without samples. In the output summary, the first column is the cluster id, the second column is the size of the cluster, and the third column is the percentage of samples.
Although deterministic annealing is physically and mathematically sound, the results may not reveal the correct structure of the data, as shown above.
Sequential Information Bottleneck
The Sequential Information Bottleneck (SIB) algorithm clusters co-occurrence data such as text documents vs words. SIB is guaranteed to converge to a local maximum of the information. Moreover, the time and space complexity are significantly improved in contrast to the agglomerative IB algorithm.
In analogy to K-Means, SIB's update formulas are essentially the same as those of the EM algorithm for estimating a finite Gaussian mixture model, with the regular Euclidean distance replaced by the Kullback-Leibler divergence, which is clearly a better dissimilarity measure for co-occurrence data. However, the common batch updating rule of K-Means (assign all instances to the nearest centroids, then update the centroids) won't work for SIB, which has to work sequentially: each instance is reassigned (if a better cluster is found) and the related centroids are updated immediately. This is likely because the KL divergence is very sensitive and the centroids may change significantly in each iteration under the batch updating rule.
Note that this implementation differs slightly from the original paper, in which a weighted Jensen-Shannon divergence is employed as the criterion to assign a randomly picked sample to a different cluster. However, in our experience this doesn't work well in some cases, probably because the weighted JS divergence gives too much weight to clusters, which are much larger than a single sample. In this implementation, we instead use the regular/unweighted Jensen-Shannon divergence.
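To illustrate the sequential updating rule with the unweighted Jensen-Shannon divergence, the sketch below (a simplified dense-vector illustration; Smile's SIB operates on SparseArray data and maintains its statistics differently) reassigns one document, represented as a normalized word distribution, and immediately updates the affected centroids.
// Jensen-Shannon divergence between two discrete distributions p and q.
double js(double[] p, double[] q) {
    double div = 0.0;
    for (int i = 0; i < p.length; i++) {
        double m = (p[i] + q[i]) / 2;
        if (p[i] > 0) div += 0.5 * p[i] * Math.log(p[i] / m);
        if (q[i] > 0) div += 0.5 * q[i] * Math.log(q[i] / m);
    }
    return div;
}

// Sequential step: move sample x from cluster 'from' to the closest centroid (if better)
// and immediately update both centroids as running means (assumes the old cluster
// keeps at least one member).
int reassign(double[] x, int from, double[][] centroids, int[] size) {
    int best = from;
    double bestDiv = js(x, centroids[from]);
    for (int c = 0; c < centroids.length; c++) {
        double d = js(x, centroids[c]);
        if (d < bestDiv) { bestDiv = d; best = c; }
    }
    if (best != from) {
        for (int i = 0; i < x.length; i++) {             // shift x's mass from the old centroid to the new one
            centroids[from][i] = (centroids[from][i] * size[from] - x[i]) / (size[from] - 1);
            centroids[best][i] = (centroids[best][i] * size[best] + x[i]) / (size[best] + 1);
        }
        size[from]--;
        size[best]++;
    }
    return best;
}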
def sib(data: Array[SparseArray], k: Int, maxIter: Int = 100, runs: Int = 1): SIB
public class SIB {
public static SIB fit(SparseArray[] data, int k, int maxIter);
}
fun sib(data: Array<SparseArray>, k: Int, maxIter: Int = 100, runs: Int = 1): SIB
The news data in data/libsvm/news20.dat is a very sparse data set of dimension 62061. The data contains 15935 samples. The example below clusters it into 20 clusters.
val data = read.libsvm("data/libsvm/news20.dat")
val sparse = (0 until data.size).map(i => data(i).x).toSeq
val clusters = sib(sparse.toArray, 20, 100)
var data = Read.libsvm("data/libsvm/news20.dat");
var sparse = data.stream().map(i -> i.x()).toArray(SparseArray[]::new);
var clusters = SIB.fit(sparse, 20, 100);
val data = read.libsvm("data/libsvm/news20.dat")
val sparse = data.map { it.x }
val clusters = sib(sparse, 20, 100)
CLARANS
The K-Medoids algorithm is an adaptation of the K-Means algorithm. Rather than calculating the mean of the items in each cluster, a representative item, or medoid, is chosen for each cluster at each iteration. The K-Medoids algorithm attempts to minimize the distance between points labeled to be in a cluster and the medoid of that cluster, so a medoid is a most centrally located point in the cluster. K-Medoids works with an arbitrary matrix of distances between data points instead of L2, and it is more robust to noise and outliers than K-Means.
The most common realization of K-Medoids clustering is the Partitioning Around Medoids (PAM) algorithm. PAM uses a greedy search, which may not find the optimum solution but is faster than exhaustive search.
CLARANS (Clustering Large Applications based upon RANdomized Search) is a more efficient medoid-based clustering algorithm. In CLARANS, the process of finding k medoids from n objects is viewed abstractly as searching through a certain graph. In the graph, a node is represented by a set of k objects selected as medoids. Two nodes are neighbors if their sets differ by only one object. In each iteration, CLARANS considers a set of randomly chosen neighbor nodes as candidates for new medoids. It moves to a neighbor node if the neighbor is a better choice of medoids; otherwise, a local optimum is discovered. The entire process is repeated multiple times to find a better solution.
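As a sketch of this search (an illustration of the idea, not Smile's implementation), one local search draws random single-medoid swaps and accepts any swap that lowers the total distance of points to their nearest medoid; after maxNeighbor consecutive failures, a local minimum has been reached. The whole search would be repeated numLocal times, keeping the best set of medoids.
import smile.math.distance.Distance;

// Total distance of every point to its nearest medoid.
<T> double cost(T[] data, Distance<T> distance, int[] medoids) {
    double total = 0.0;
    for (T x : data) {
        double nearest = Double.MAX_VALUE;
        for (int m : medoids) nearest = Math.min(nearest, distance.d(x, data[m]));
        total += nearest;
    }
    return total;
}

// One CLARANS local search: random single-medoid swaps until maxNeighbor consecutive
// neighbors fail to improve the cost.
<T> int[] localSearch(T[] data, Distance<T> distance, int k, int maxNeighbor, java.util.Random rng) {
    int n = data.length;
    int[] medoids = new int[k];
    for (int i = 0; i < k; i++) medoids[i] = rng.nextInt(n);   // random starting node
    double best = cost(data, distance, medoids);
    int examined = 0;
    while (examined < maxNeighbor) {
        int[] neighbor = medoids.clone();
        neighbor[rng.nextInt(k)] = rng.nextInt(n);             // neighbor node: replace one medoid
        double c = cost(data, distance, neighbor);
        if (c < best) { medoids = neighbor; best = c; examined = 0; }
        else examined++;
    }
    return medoids;                                            // indices of a local-minimum medoid set
}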
def clarans[T](data: Array[T], distance: Distance[T], k: Int, maxNeighbor: Int, numLocal: Int): CLARANS[T]
public class CLARANS {
public static CLARANS<T> fit(T[] data, int k, int maxNeighbor, Distance<T> distance);
}
fun <T> clarans(data: Array<T>, distance: Distance<T>, k: Int, maxNeighbor: Int, numLocal: Int): CLARANS<T>
The parameter maxNeighbor specifies the maximum number of neighbors examined. The higher the value of maxNeighbor, the closer CLARANS is to PAM, and the longer each search for a local minimum takes. But the quality of such a local minimum is higher and fewer local minima need to be obtained.
val clusters = clarans(x, new EuclideanDistance(), 6, 10, 20)
show(plot(x, clusters.y, '.'))
var clusters = PartitionClustering.run(20, () -> CLARANS.fit(x, new EuclideanDistance(), 6, 10));
ScatterPlot.of(x, clusters.y, '.').canvas().window();
import smile.math.distance.*;
val clusters = clarans(x, EuclideanDistance(), 6, 10, 20)
ScatterPlot.of(x, clusters.y, '.').canvas().window()
The above clustering partition is achieved with maxNeighbor = 10 and numLocal = 20.
DBSCAN
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) finds a number of clusters starting from the estimated density distribution of corresponding nodes.
// DBSCAN with a customized data structure for neighborhood search
def dbscan[T](data: Array[T], nns: RNNSearch[T, T], minPts: Int, radius: Double): DBSCAN[T]
def dbscan[T](data: Array[T], distance: Metric[T], minPts: Int, radius: Double): DBSCAN[T]
// DBSCAN with Euclidean distance
def dbscan(data: Array[Array[Double]], minPts: Int, radius: Double): DBSCAN[Array[Double]]
public class DBSCAN {
public static DBSCAN<double[]> fit(double[][] data, int minPts, double radius);
public static DBSCAN<T> fit(T[] data, Distance<T> distance, int minPts, double radius);
public static DBSCAN<T> fit(T[] data, RNNSearch<T, T> nns, int minPts, double radius);
}
// DBSCAN with a customized data structure for neighborhood search
fun <T> dbscan(data: Array<T>, nns: RNNSearch<T, T>, minPts: Int, radius: Double): DBSCAN<T>
fun <T> dbscan(data: Array<T>, distance: Metric<T>, minPts: Int, radius: Double): DBSCAN<T>
// DBSCAN with Euclidean distance
fun dbscan(data: Array<DoubleArray>, minPts: Int, radius: Double): DBSCAN<DoubleArray>
DBSCAN requires two parameters: radius (i.e. the neighborhood radius) and the minimum number of points required to form a cluster (minPts). It starts with an arbitrary starting point that has not been visited. This point's neighborhood is retrieved, and if it contains a sufficient number of points, a cluster is started. Otherwise, the point is labeled as noise. Note that this point might later be found in a sufficiently sized radius-environment of a different point and hence be made part of a cluster.
If a point is found to be part of a cluster, its neighborhood is also part of that cluster. Hence, all points that are found within the neighborhood are added, as is their own neighborhood. This process continues until the cluster is completely found. Then, a new unvisited point is retrieved and processed, leading to the discovery of a further cluster or noise.
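The sketch below spells out this expansion logic with a brute-force range query (an illustration only; Smile's DBSCAN uses an RNNSearch index for the neighborhood queries). Each point ends up with a cluster id or a noise label.
// All points within 'radius' of point i (brute force, Euclidean distance).
java.util.List<Integer> rangeQuery(double[][] data, int i, double radius) {
    java.util.List<Integer> neighbors = new java.util.ArrayList<>();
    for (int j = 0; j < data.length; j++) {
        double d2 = 0.0;
        for (int k = 0; k < data[i].length; k++) {
            double diff = data[i][k] - data[j][k];
            d2 += diff * diff;
        }
        if (Math.sqrt(d2) <= radius) neighbors.add(j);
    }
    return neighbors;
}

// DBSCAN sketch: returns a label per point (positive cluster id or -1 for noise).
int[] dbscanSketch(double[][] data, double radius, int minPts) {
    final int NOISE = -1;
    int[] label = new int[data.length];                    // 0 means unvisited
    int cluster = 0;
    for (int i = 0; i < data.length; i++) {
        if (label[i] != 0) continue;
        java.util.List<Integer> seeds = rangeQuery(data, i, radius);
        if (seeds.size() < minPts) { label[i] = NOISE; continue; }   // not a core point (for now)
        label[i] = ++cluster;                              // start a new cluster
        for (int s = 0; s < seeds.size(); s++) {
            int j = seeds.get(s);
            if (label[j] == NOISE) label[j] = cluster;     // noise becomes a border point
            if (label[j] != 0) continue;
            label[j] = cluster;
            java.util.List<Integer> more = rangeQuery(data, j, radius);
            if (more.size() >= minPts) seeds.addAll(more); // j is a core point: keep expanding
        }
    }
    return label;
}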
DBSCAN visits each point of the database, possibly multiple times (e.g., as a candidate for different clusters). For practical considerations, however, the time complexity is mostly governed by the number of nearest neighbor queries. DBSCAN executes exactly one such query for each point, and if an indexing structure is used that executes such a neighborhood query in O(log n), an overall runtime complexity of O(n log n) is obtained.
val x = read.csv("data/clustering/chameleon/t4.8k.txt", header=false, delimiter=" ").toArray()
val clusters = dbscan(x, 20, 10)
plot(x, clusters.y, '.')
var x = Read.csv("data/clustering/chameleon/t4.8k.txt", CSVFormat.DEFAULT.withDelimiter(' ')).toArray();
var clusters = DBSCAN.fit(x, 20, 10);
ScatterPlot.of(x, clusters.y, '.').canvas().window();
val x = read.csv("data/clustering/chameleon/t4.8k.txt", header=false, delimiter=' ').toArray()
val clusters = dbscan(x, 20, 10.0)
ScatterPlot.of(x, clusters.y, '.').canvas().window()
The chameleon data set contains complicated spatial data with arbitrary cluster shapes. With appropriate parameters, DBSCAN can discover the correct clusters and also identify outliers.
DBSCAN has many advantages such as
- DBSCAN does not need to know the number of clusters in the data a priori, as opposed to K-Means.
- DBSCAN can find arbitrarily shaped clusters. It can even find clusters completely surrounded by (but not connected to) a different cluster. Due to the minPts parameter, the so-called single-link effect (different clusters being connected by a thin line of points) is reduced.
- DBSCAN has a notion of noise. Outliers are labeled as Clustering.OUTLIER, which is Integer.MAX_VALUE (see the snippet after these lists).
- DBSCAN requires just two parameters and is mostly insensitive to the ordering of the points in the database. (Only points sitting on the edge of two different clusters might swap cluster membership if the ordering of the points is changed, and the cluster assignment is unique only up to isomorphism.)
On the other hand, DBSCAN has the disadvantages of
- In high dimensional space, the data are sparse everywhere because of the curse of dimensionality. Therefore, DBSCAN doesn't work well on high-dimensional data in general.
- DBSCAN does not respond well to data sets with varying densities.
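As referenced in the list above, noise points can be separated from the clusters by checking the label array against the outlier marker. A minimal sketch, continuing the chameleon example (the marker equals Integer.MAX_VALUE, so the comparison can be written against that value directly):
// Count the points DBSCAN labeled as noise in the chameleon example above.
int outliers = 0;
for (int i = 0; i < clusters.y.length; i++) {
    if (clusters.y[i] == Integer.MAX_VALUE) {   // i.e. the OUTLIER marker
        outliers++;
    }
}
System.out.println(outliers + " points labeled as noise");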
DENCLUE
DENCLUE (DENsity CLUstering) employs a cluster model based on kernel density estimation. A cluster is defined by a local maximum of the estimated density function. Data points going to the same local maximum are put into the same cluster. DENCLUE works efficiently for high-dimensional data sets and allows arbitrary noise levels while still guaranteeing to find the clustering.
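For intuition, with a Gaussian influence function the estimated density at a point x is, up to a normalization constant, the sum below. A density attractor is a local maximum of this function reached by hill climbing from a data point, and points attracted to the same maximum form one cluster. (This is the generic kernel-density view; the exact form used inside Smile may differ in details.)

\hat{f}(x) \;\propto\; \sum_{i=1}^{n} \exp\!\left(-\frac{\lVert x - x_i \rVert^2}{2\sigma^2}\right)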
def denclue(data: Array[Array[Double]], sigma: Double, m: Int): DENCLUE
public class DENCLUE {
public static DENCLUE fit(double[][] data, double sigma, int m, double tol, int minPts);
}
fun denclue(data: Array<DoubleArray>, sigma: Double, m: Int): DENCLUE
The parameter sigma is the smoothing parameter of the Gaussian kernel. The user can choose sigma such that the number of density attractors is constant over a long interval of sigma. The parameter m is the number of selected samples used in the iteration. This number should be much smaller than the number of data points to speed up the algorithm, yet large enough to capture sufficient information about the underlying distribution.
Clearly, DENCLUE doesn't work on data with a uniform distribution. In high dimensional space, the data always appears uniformly distributed because of the curse of dimensionality. Therefore, DENCLUE doesn't work well on high-dimensional data in general.
val x = read.csv("data/clustering/rem.txt", header=false, delimiter=" ").toArray()
val clusters = denclue(x, 1.0, 50)
show(plot(x, clusters.y, '.'))
var x = Read.csv("data/clustering/rem.txt", CSVFormat.DEFAULT.withDelimiter(' ')).toArray();
var clusters = DENCLUE.fit(x, 1.0, 50);
ScatterPlot.of(x, clusters.y, '.').canvas().window();
val x = read.csv("data/clustering/rem.txt", header=false, delimiter=' ').toArray()
val clusters = denclue(x, 1.0, 50)
ScatterPlot.of(x, clusters.y, '.').canvas().window()
DENCLUE doesn't directly label some data as outliers. However, it may report very small clusters, which could be regarded as outliers.
Spectral Clustering
Given a set of data points, the similarity matrix may be defined as a matrix S where Sij represents a measure of the similarity between points. Spectral clustering techniques make use of the spectrum of the similarity matrix of the data to perform dimensionality reduction for clustering in fewer dimensions. The clustering is then performed in the dimension-reduced space, in which clusters of non-convex shape may become tight. There are some intriguing similarities between spectral clustering methods and kernel PCA, which has been empirically observed to perform clustering.
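Concretely, one common construction (following Ng, Jordan, and Weiss; Smile's internals may differ in details) builds a Gaussian affinity matrix, normalizes it by the vertex degrees, and clusters the leading eigenvectors:

W_{ij} = \exp\!\left(-\frac{\lVert x_i - x_j \rVert^2}{2\sigma^2}\right), \qquad D_{ii} = \sum_{j} W_{ij}, \qquad L = D^{-1/2} W D^{-1/2}

The rows of the matrix formed by the top k eigenvectors of L are normalized to unit length and clustered with K-Means; each original point inherits the cluster of its row.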
def specc(W: Array[Array[Double]], k: Int): SpectralClustering
def specc(data: Array[Array[Double]], k: Int, sigma: Double): SpectralClustering
// Nystrom approximation
def specc(data: Array[Array[Double]], k: Int, l: Int, sigma: Double): SpectralClustering
public class SpectralClustering {
public static SpectralClustering fit(DenseMatrix W, int k, int maxIter, double tol);
public static SpectralClustering fit(double[][] data, int k, double sigma, int maxIter, double tol);
// Nystrom approximation
public static SpectralClustering fit(double[][] data, int k, int l, double sigma, int maxIter, double tol);
}
fun specc(W: Array<DoubleArray>, k: Int): SpectralClustering
fun specc(data: Array<DoubleArray>, k: Int, sigma: Double): SpectralClustering
// Nystrom approximation
fun specc(data: Array<DoubleArray>, k: Int, l: Int, sigma: Double): SpectralClustering
where W is the adjacency matrix of the graph. The user may also provide the raw input data and the smooth/width parameter sigma of the Gaussian kernel, which is a somewhat sensitive parameter. To search for the best setting, one may pick the value that gives the tightest clusters (smallest distortion, reported by the method distortion) in the feature space. Spectral clustering is memory intensive because of the adjacency matrix. For large data, one may use the Nystrom approximation by selecting some random samples. The parameter l specifies the number of random samples.
val x = read.csv("data/clustering/sincos.txt", header=false, delimiter="\t").toArray()
val clusters = specc(x, 2, 0.2)
show(plot(x, clusters.y, 'o'))
var x = Read.csv("data/clustering/sincos.txt", CSVFormat.DEFAULT.withDelimiter('\t')).toArray();
var clusters = SpectralClustering.fit(x, 2, 0.2);
ScatterPlot.of(x, clusters.y, 'o').canvas().window();
val x = read.csv("data/clustering/sincos.txt", header=false, delimiter='\t').toArray()
val clusters = specc(x, 2, 0.2)
ScatterPlot.of(x, clusters.y, 'o').canvas().window()
For this nonconvex data, spectral clustering works very well with an appropriate smoothing parameter.
Minimum Entropy Clustering
In this algorithm, the clustering criterion is based on the conditional entropy H(C | x), where C is the cluster label and x is an observation. According to Fano's inequality, we can estimate C with a low probability of error only if the conditional entropy H(C | x) is small.
Minimum Entropy Clustering (MEC) also generalizes the criterion by replacing Shannon's entropy with Havrda-Charvat's structural α-entropy. Interestingly, the minimum entropy criterion based on the structural α-entropy is equal to the probability error of the nearest neighbor method when α = 2. To estimate p(C | x), MEC employs Parzen density estimation, a nonparametric approach.
This method performs very well especially when the exact number of clusters is unknown. The method can also correctly reveal the structure of data and effectively identify outliers simultaneously.
def mec[T](data: Array[T], distance: Distance[T], k: Int, radius: Double): MEC[T]
def mec[T](data: Array[T], nns: RNNSearch[T, T], k: Int, radius: Double, y: Array[Int]): MEC[T]
def mec(data: Array[Array[Double]], k: Int, radius: Double): MEC[Array[Double]]
public class MEC {
public static MEC<T> fit(T[] data, Distance<T> distance, int k, double radius);
public static MEC<T> fit(T[] data, RNNSearch<T, T> nns, int k, double radius, int[] y, double tol);
}
fun <T> mec(data: Array<T>, distance: Distance<T>, k: Int, radius: Double): MEC<T>
fun <T> mec(data: Array<T>, nns: RNNSearch<T, T>, k: Int, radius: Double, y: IntArray): MEC<T>
fun mec(data: Array<DoubleArray>, k: Int, radius: Double): MEC<DoubleArray>
MEC is an iterative algorithm starting with an initial partition given by any other clustering method, e.g. K-Means, CLARANS, hierarchical clustering, etc. Note that a random initialization is NOT appropriate.
val x = read.csv("data/clustering/rem.txt", header=false, delimiter=" ").toArray()
val clusters = mec(x, 20, 2.0)
show(plot(x, clusters.y, '.'))
var x = Read.csv("data/clustering/rem.txt", CSVFormat.DEFAULT.withDelimiter(' ')).toArray();
var clusters = MEC.fit(x, new EuclideanDistance(), 20, 2.0);
ScatterPlot.of(x, clusters.y, '.').canvas().window();
val x = read.csv("data/clustering/rem.txt", header=false, delimiter=' ').toArray()
val clusters = mec(x, 20, 2.0)
ScatterPlot.of(x, clusters.y, '.').canvas().window()
Note that we use k = 20 for this data and the algorithm still successfully finds the correct structure of the data. In practice, we rarely know the right number of clusters in advance. With MEC, one may start with a large k, and the algorithm can often automatically remove unnecessary clusters and reach a lower entropy state.