k-medoids is a classical partitioning technique of clustering that splits a data set of n objects into k clusters, where the number k of clusters is assumed to be known a priori (that is, the user must specify k before running a k-medoids algorithm). The suitability of a given value of k can be assessed with methods such as the silhouette method. The name of the clustering method was coined by Leonard Kaufman and Peter J. Rousseeuw with their PAM (Partitioning Around Medoids) algorithm.[1]
The medoid of a cluster is defined as the object in the cluster whose sum of dissimilarities to all the objects in the cluster is minimal (equivalently, whose average dissimilarity is minimal), that is, it is a most centrally located point in the cluster. Unlike a centroid, which is generally not one of the input points, the medoid is always an actual point in the cluster.
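For illustration, assuming Python with NumPy and Euclidean distance, the medoid of a small point set can be found by exhaustively minimizing the sum of pairwise distances (the data and variable names here are chosen only for the example):

import numpy as np

points = np.array([[1.0, 1.0], [2.0, 2.0], [10.0, 10.0]])  # toy data
D = np.linalg.norm(points[:, None] - points[None, :], axis=-1)  # pairwise Euclidean distances
medoid_index = np.argmin(D.sum(axis=1))  # index minimizing the total dissimilarity
print(points[medoid_index])  # [2. 2.], an actual input point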
In general, the k-medoids problem is NP-hard to solve exactly.[2] As such, multiple heuristics to optimize this problem exist.
PAM[3] uses a greedy search which may not find the optimum solution, but it is faster than exhaustive search. It works as follows:
1. (BUILD) Initialize: greedily select k of the n data points as the medoids, so as to minimize the cost.
2. Associate each data point to the closest medoid.
3. (SWAP) While the cost of the configuration decreases:
   1. For each medoid m, and for each non-medoid data point o:
      1. Consider the swap of m and o, and compute the cost change.
      2. If the cost change is the best so far, remember this m and o combination.
   2. Perform the best swap of m_best and o_best, if it decreases the cost function; otherwise, terminate the algorithm.
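As an illustration only, the following minimal Python sketch implements the BUILD and SWAP phases on a precomputed dissimilarity matrix. It naively recomputes the full cost of every candidate swap (the slow behavior discussed below) rather than using the optimized cost update; the function name and interface are assumptions of this sketch, not a reference implementation.

import numpy as np

def pam(D, k, max_iter=100):
    """Minimal PAM sketch on a precomputed n x n dissimilarity matrix D."""
    n = len(D)
    # BUILD: greedily select k medoids, each time adding the point
    # that most reduces the total dissimilarity.
    medoids = [int(np.argmin(D.sum(axis=1)))]  # best single medoid
    while len(medoids) < k:
        nearest = D[:, medoids].min(axis=1)  # distance to the nearest current medoid
        gains = np.maximum(nearest[None, :] - D, 0).sum(axis=1)  # gain of adding each candidate
        gains[medoids] = -np.inf  # exclude points that are already medoids
        medoids.append(int(np.argmax(gains)))
    # SWAP: repeatedly perform the best improving (medoid, non-medoid) swap.
    for _ in range(max_iter):
        cost = D[:, medoids].min(axis=1).sum()
        best_delta, best_swap = 0.0, None
        for i in range(k):
            for o in range(n):
                if o in medoids:
                    continue
                candidate = medoids.copy()
                candidate[i] = o
                delta = D[:, candidate].min(axis=1).sum() - cost  # naive full recomputation
                if delta < best_delta:
                    best_delta, best_swap = delta, (i, o)
        if best_swap is None:
            break  # no improving swap: a local optimum has been reached
        medoids[best_swap[0]] = best_swap[1]
    labels = np.argmin(D[:, medoids], axis=1)
    return np.array(medoids), labels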
The runtime complexity of the original PAM algorithm per iteration of step (3) is $O(k(n-k)^2)$, by only computing the change in cost. A naive implementation recomputing the entire cost function every time will be in $O(n^2 k^2)$. This runtime can be further reduced to $O(n^2)$ by splitting the cost change into three parts such that computations can be shared or avoided (FastPAM). The runtime can be reduced further by eagerly performing swaps (FasterPAM),[4] at which point a random initialization becomes a viable alternative to BUILD.
Algorithms other than PAM have also been suggested in the literature, including the following Voronoi iteration method, known as the "Alternating" heuristic because it alternates between two optimization steps:[5][6][7]
1. Select initial medoids randomly.
2. Iterate while the cost decreases:
   1. In each cluster, make the point that minimizes the sum of within-cluster distances the new medoid.
   2. Reassign each point to the cluster defined by the closest medoid determined in the previous step.
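A minimal sketch of this alternating scheme in Python (again on a precomputed dissimilarity matrix, with illustrative names) could look as follows:

import numpy as np

def alternating(D, k, max_iter=100, seed=0):
    """Sketch of the k-means-style alternating heuristic."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)  # step 1: random initial medoids
    for _ in range(max_iter):
        labels = np.argmin(D[:, medoids], axis=1)  # assign each point to its nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if len(members) == 0:
                continue  # keep the old medoid for an empty cluster
            within = D[np.ix_(members, members)].sum(axis=1)  # within-cluster distance sums
            new_medoids[c] = members[np.argmin(within)]  # best medoid of this cluster
        if np.array_equal(new_medoids, medoids):
            break  # converged
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)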
k-means-style Voronoi iteration tends to produce worse results and exhibit "erratic behavior".[8]: 957  Because it does not allow reassigning points to other clusters while updating the medoids, it only explores a smaller search space. It can be shown that even in simple cases this heuristic finds inferior solutions, while the swap-based methods solve these cases.[4]
Multiple variants of hierarchical clustering with a "medoid linkage" have been proposed. The Minimum Sum linkage criterion[9] directly uses the objective of medoids, but the Minimum Sum Increase linkage was shown to produce better results (similar to how Ward linkage uses the increase in squared error). Earlier approaches simply used the distance between the medoids of the two clusters as the linkage measure,[10][11] but this tends to result in worse solutions, as the distance between two medoids does not ensure that a good medoid exists for their union. These approaches have a runtime complexity of $O(n^3)$, and when the dendrogram is cut at a particular number of clusters k, the results will typically be worse than those found by PAM.[9] Hence these methods are primarily of interest when a hierarchical tree structure is desired.
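To make the Minimum Sum idea concrete, here is a deliberately naive Python sketch of agglomerative clustering with a minimum-sum medoid linkage; at each step it merges the pair of clusters whose union has the smallest achievable sum of distances to a single medoid. Names and structure are illustrative, and no effort is made here to reach the $O(n^3)$ bound.

import numpy as np

def min_sum_medoid_linkage(D):
    """Naive agglomerative clustering with minimum-sum medoid linkage."""
    clusters = [[i] for i in range(len(D))]
    merges = []
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                members = clusters[a] + clusters[b]
                # cost of the merged cluster: best sum of distances to one medoid
                cost = D[np.ix_(members, members)].sum(axis=1).min()
                if best is None or cost < best[0]:
                    best = (cost, a, b)
        cost, a, b = best
        merges.append((list(clusters[a]), list(clusters[b]), cost))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges  # the merge history defines the dendrogram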
Other approximate algorithms such as CLARA and CLARANS trade quality for runtime. CLARA applies PAM on multiple subsamples, keeping the best result. By setting the sample size to $O(\sqrt{N})$, a linear runtime (just as for k-means) can be achieved. CLARANS works on the entire data set, but only explores a subset of the possible swaps of medoids and non-medoids, using sampling. BanditPAM uses the concept of multi-armed bandits to choose candidate swaps, instead of the uniform sampling used in CLARANS.[12]
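The following is a sketch of the CLARA wrapper, using the scikit-learn-extra KMedoids estimator described below as the inner PAM solver; the sample size and number of subsamples are the classical defaults of Kaufman and Rousseeuw, used here as assumptions:

import numpy as np
from sklearn_extra.cluster import KMedoids  # inner PAM solver (see the software section)

def clara(X, k, n_samples=5, sample_size=None, seed=0):
    """Sketch of CLARA: run PAM on subsamples, keep the best medoids."""
    rng = np.random.default_rng(seed)
    n = len(X)
    if sample_size is None:
        sample_size = min(n, 40 + 2 * k)  # classical default sample size
    best_cost, best_medoids = np.inf, None
    for _ in range(n_samples):
        idx = rng.choice(n, size=sample_size, replace=False)  # draw a subsample
        km = KMedoids(n_clusters=k, method="pam").fit(X[idx])
        medoids = X[idx][km.medoid_indices_]
        # evaluate the candidate medoids on the FULL data set
        cost = np.linalg.norm(X[:, None] - medoids[None, :], axis=-1).min(axis=1).sum()
        if cost < best_cost:
            best_cost, best_medoids = cost, medoids
    return best_medoids, best_cost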
The k-medoids problem is a clustering problem similar to k-means. Both the k-means and k-medoids algorithms are partitional (breaking the dataset up into groups) and attempt to minimize the distance between points labeled to be in a cluster and a point designated as the center of that cluster. In contrast to the k-means algorithm, k-medoids chooses actual data points as centers (medoids or exemplars), and thereby allows for greater interpretability of the cluster centers than in k-means, where the center of a cluster is not necessarily one of the input data points (it is the average of the points in the cluster, known as the centroid). Furthermore, k-medoids can be used with arbitrary dissimilarity measures, whereas k-means generally requires Euclidean distance for efficient solutions. Because k-medoids minimizes a sum of pairwise dissimilarities instead of a sum of squared Euclidean distances, it is more robust to noise and outliers than k-means.
Despite these advantages, the results of k-medoids lack consistency, since the results of the algorithm may vary between runs. This is because the initial medoids are chosen at random during the execution of the algorithm. k-medoids is also not suitable for clusters that are not roughly spherical, and it may work inefficiently on large datasets, depending on how it is implemented. k-means, by contrast, is suitable for well-distributed, isotropic clusters and can handle larger datasets.[13] Like k-medoids, however, k-means also uses random initial points, which can vary the results the algorithm finds.
Several software packages provide implementations of k-medoids clustering algorithms. These implementations vary in their algorithmic approaches and computational efficiency.
The scikit-learn-extra[16] package includes a KMedoids class that implements k-medoids clustering with a scikit-learn compatible interface. It offers two algorithm choices:
- "alternate": the faster k-means-style alternating heuristic
- "pam": the classic PAM (BUILD and SWAP) algorithm, slower but usually more accurate
Parameters include:
- n_clusters: the number of clusters (medoids) to generate
- metric: the distance metric to use
- method: "alternate" or "pam"
- init: the medoid initialization method
- max_iter: the maximum number of iterations
- random_state: the seed used for random initialization
Example Python usage:
import numpy as np
from sklearn_extra.cluster import KMedoids

X = np.array([[1, 2], [1, 4], [5, 8], [8, 8]])  # small example data set
kmedoids = KMedoids(n_clusters=2, method='pam').fit(X)
print(kmedoids.labels_)
The python-kmedoids[17] package provides optimized implementations of PAM and related algorithms, including:
- fasterpam and fastpam1: accelerated variants of PAM
- pam: the original PAM algorithm (BUILD and SWAP)
- alternating: the k-means-style alternating heuristic
This package requires precomputed dissimilarity matrices and includes silhouette-based methods for evaluating clusters.
Example Python usage:
import numpy as np
import kmedoids

X = np.array([[1, 2], [1, 4], [5, 8], [8, 8]])  # small example data set
dissimilarity_matrix = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
fp = kmedoids.fasterpam(dissimilarity_matrix, 2)  # 2 clusters, random initialization
print(fp.medoids)
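Continuing the example, the clustering can be evaluated with the package's silhouette support; the call below assumes a silhouette function that takes the dissimilarity matrix and the resulting labels:

# assumed interface: average silhouette of the clustering
score = kmedoids.silhouette(dissimilarity_matrix, fp.labels)
print(score)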
variant = "faster"
medoids = "random"