*k*-medoids clustering


## Syntax

`idx = kmedoids(X,k)`

`idx = kmedoids(X,k,Name,Value)`

`[idx,C] = kmedoids(___)`

`[idx,C,sumd] = kmedoids(___)`

`[idx,C,sumd,D] = kmedoids(___)`

`[idx,C,sumd,D,midx] = kmedoids(___)`

`[idx,C,sumd,D,midx,info] = kmedoids(___)`

## Description


`idx = kmedoids(X,k)` performs *k*-medoids clustering to partition the observations of the *n*-by-*p* matrix `X` into `k` clusters, and returns an *n*-by-1 vector `idx` containing cluster indices of each observation. Rows of `X` correspond to points and columns correspond to variables. By default, `kmedoids` uses the squared Euclidean distance metric and the k-means++ algorithm for choosing initial cluster medoid positions.

`idx = kmedoids(X,k,Name,Value)` uses additional options specified by one or more `Name,Value` pair arguments.

`[idx,C] = kmedoids(___)` returns the *k* cluster medoid locations in the *k*-by-*p* matrix `C`.

`[idx,C,sumd] = kmedoids(___)` returns the within-cluster sums of point-to-medoid distances in the *k*-by-1 vector `sumd`.

`[idx,C,sumd,D] = kmedoids(___)` returns distances from each point to every medoid in the *n*-by-*k* matrix `D`.

`[idx,C,sumd,D,midx] = kmedoids(___)` returns the indices `midx` such that `C = X(midx,:)`. `midx` is a *k*-by-1 vector.

`[idx,C,sumd,D,midx,info] = kmedoids(___)` returns a structure `info` with information about the options used by the algorithm when executed.

## Examples


### Group Data into Two Clusters


Randomly generate data.

    rng('default'); % For reproducibility
    X = [randn(100,2)*0.75+ones(100,2); randn(100,2)*0.55-ones(100,2)];
    figure;
    plot(X(:,1),X(:,2),'.');
    title('Randomly Generated Data');

Group the data into two clusters using `kmedoids`. Use the `cityblock` distance metric.

    opts = statset('Display','iter');
    [idx,C,sumd,d,midx,info] = kmedoids(X,2,'Distance','cityblock','Options',opts);

      rep    iter         sum
        1       1     209.856
        1       2     209.856
    Best total sum of distances = 209.856

`info` is a struct that contains information about how the algorithm was executed. For example, the `bestReplicate` field indicates the replicate that was used to produce the final solution. In this example, replicate number 1 was used, since the default number of replicates is 1 for the default algorithm, which is `pam` in this case.

    info

    info = struct with fields:

            algorithm: 'pam'
                start: 'plus'
             distance: 'cityblock'
           iterations: 2
        bestReplicate: 1

Plot the clusters and the cluster medoids.

    figure;
    plot(X(idx==1,1),X(idx==1,2),'r.','MarkerSize',7)
    hold on
    plot(X(idx==2,1),X(idx==2,2),'b.','MarkerSize',7)
    plot(C(:,1),C(:,2),'co','MarkerSize',7,'LineWidth',1.5)
    legend('Cluster 1','Cluster 2','Medoids','Location','NW');
    title('Cluster Assignments and Medoids');
    hold off

### Cluster Categorical Data Using k-Medoids

This example uses the "Mushroom" data set [3][4][5][6] from the UCI machine learning archive [7], described in https://archive.ics.uci.edu/dataset/73/mushroom. The data set includes 22 predictors for 8,124 observations of various mushrooms. The predictors are categorical data types. For example, cap shape is categorized with features of `'b'` for bell-shaped cap and `'c'` for conical. Mushroom color is also categorized with features of `'n'` for brown and `'p'` for pink. The data set also includes a classification for each mushroom of either edible or poisonous.

Since the features of the mushroom data set are categorical, it is not possible to define the mean of several data points, and therefore the widely-used *k*-means clustering algorithm cannot be meaningfully applied to this data set. *k*-medoids is a related algorithm that partitions data into *k* distinct clusters, by finding medoids that minimize the sum of dissimilarities between points in the data and their nearest medoid.

The medoid of a set is a member of that set whose average dissimilarity with the other members of the set is the smallest. Similarity can be defined for many types of data that do not allow a mean to be calculated, allowing *k*-medoids to be used for a broader range of problems than *k*-means.
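This definition can be computed directly from a matrix of pairwise dissimilarities. The following Python sketch (an illustration only, not part of the MATLAB workflow; the three-point dissimilarity matrix is a made-up example) picks the member whose summed dissimilarity to the others is smallest:

```python
def medoid_index(dissim):
    """Return the index of the medoid, given a square pairwise
    dissimilarity matrix (list of lists). The medoid is the member
    whose summed dissimilarity to all other members is smallest."""
    totals = [sum(row) for row in dissim]
    return min(range(len(totals)), key=totals.__getitem__)

# Three points on a line at positions 0, 1, and 5; the middle point
# (index 1) has the smallest total dissimilarity, so it is the medoid.
d = [[0, 1, 5],
     [1, 0, 4],
     [5, 4, 0]]
print(medoid_index(d))  # -> 1
```

Note that the result is always an index into the original set, which is what distinguishes a medoid from a mean.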

Using *k*-medoids, this example clusters the mushrooms into two groups, based on the predictors provided. It then explores the relationship between those clusters and the classifications of the mushrooms as either edible or poisonous.

This example assumes that you have downloaded the "Mushroom" data set [3][4][5][6] from the UCI database (https://archive.ics.uci.edu/dataset/73/mushroom) and saved the text files agaricus-lepiota.data and agaricus-lepiota.names in your current directory. There is no column header line in the data, so `readtable` uses the default variable names.

    clear all
    data = readtable('agaricus-lepiota.data','ReadVariableNames',false);

Display the first 5 mushrooms with their first few features.

    data(1:5,1:10)

    ans =

        Var1    Var2    Var3    Var4    Var5    Var6    Var7    Var8    Var9    Var10
        ____    ____    ____    ____    ____    ____    ____    ____    ____    _____

        'p'     'x'     's'     'n'     't'     'p'     'f'     'c'     'n'     'k'
        'e'     'x'     's'     'y'     't'     'a'     'f'     'c'     'b'     'k'
        'e'     'b'     's'     'w'     't'     'l'     'f'     'c'     'b'     'n'
        'p'     'x'     'y'     'w'     't'     'p'     'f'     'c'     'n'     'n'
        'e'     'x'     's'     'g'     'f'     'n'     'f'     'w'     'b'     'k'

Extract the first column, which contains the labels for the edible and poisonous groups. Then delete the column.

    labels = data(:,1);
    labels = categorical(labels{:,:});
    data(:,1) = [];

Store the names of predictors (features), which are described in agaricus-lepiota.names.

    VarNames = {'cap_shape' 'cap_surface' 'cap_color' 'bruises' 'odor' ...
        'gill_attachment' 'gill_spacing' 'gill_size' 'gill_color' ...
        'stalk_shape' 'stalk_root' 'stalk_surface_above_ring' ...
        'stalk_surface_below_ring' 'stalk_color_above_ring' ...
        'stalk_color_below_ring' 'veil_type' 'veil_color' 'ring_number' ...
        'ring_type' 'spore_print_color' 'population' 'habitat'};

Set the variable names.

    data.Properties.VariableNames = VarNames;

There are a total of 2480 missing values denoted as `'?'`.

    sum(char(data{:,:}) == '?')

    ans = 2480

Based on the inspection of the data set and its description, the missing values belong only to the 11th variable (`stalk_root`). Remove the column from the table.

    data(:,11) = [];

`kmedoids` only accepts numeric data. You need to cast the categories you have into a numeric type. The distance function you will use to define the dissimilarity of the data will be based on the double representation of the categorical data.

    cats = categorical(data{:,:});
    data = double(cats);

`kmedoids` can use any distance metric supported by `pdist2` to cluster. For this example you will cluster the data using the Hamming distance because this is an appropriate distance metric for categorical data as illustrated below. The Hamming distance between two vectors is the percentage of the vector components that differ. For instance, consider these two vectors.

`v1 = [1 0 2 1];`

`v2 = [1 1 2 1];`

They are equal in the 1st, 3rd, and 4th coordinates. Since 1 of the 4 coordinates differs, the Hamming distance between these two vectors is 0.25.
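The calculation itself is a one-liner; this Python sketch (an illustration only, not part of the MATLAB workflow) reproduces the value for `v1` and `v2`:

```python
def hamming(u, v):
    """Fraction of coordinates in which u and v differ."""
    assert len(u) == len(v)
    return sum(a != b for a, b in zip(u, v)) / len(u)

v1 = [1, 0, 2, 1]
v2 = [1, 1, 2, 1]
print(hamming(v1, v2))  # -> 0.25
```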

You can use the function `pdist2` to measure the Hamming distance between the first and second rows of `data`, the numerical representation of the categorical mushroom data. The value 0.2857 means that 6 of the 21 features of the mushroom differ.

    pdist2(data(1,:),data(2,:),'hamming')

    ans = 0.2857

In this example, you're clustering the mushroom data into two clusters based on features to see if the clustering corresponds to edibility. The `kmedoids` function is guaranteed to converge to a local minimum of the clustering criterion; however, this may not be a global minimum for the problem. It is a good idea to cluster the problem a few times using the `'Replicates'` parameter. When `'Replicates'` is set to a value, *n*, greater than 1, the k-medoids algorithm is run *n* times, and the best result is returned.

To run `kmedoids` to cluster the data into 2 clusters, based on the Hamming distance, and to return the best result of 3 replicates, you run the following.

    rng('default'); % For reproducibility
    [IDX, C, SUMD, D, MIDX, INFO] = kmedoids(data,2,'distance','hamming','replicates',3);

Let's assume that mushrooms in the predicted group 1 are poisonous and those in group 2 are all edible. To determine the performance of the clustering results, calculate how many mushrooms in group 1 are indeed poisonous and how many in group 2 are edible, based on the known labels. In other words, calculate the number of false positives and false negatives, as well as true positives and true negatives.

Construct a confusion matrix (or matching matrix), where the diagonal elements represent the number of true positives and true negatives, respectively. The off-diagonal elements represent false negatives and false positives, respectively. For convenience, use the `confusionmat` function, which calculates a confusion matrix given known labels and predicted labels. Get the predicted label information from the `IDX` variable. `IDX` contains values of 1 and 2 for each data point, representing the poisonous and edible groups, respectively.

    predLabels = labels; % Initialize a vector for predicted labels.
    predLabels(IDX==1) = categorical({'p'}); % Assign group 1 to be poisonous.
    predLabels(IDX==2) = categorical({'e'}); % Assign group 2 to be edible.
    confMatrix = confusionmat(labels,predLabels)

    confMatrix =

        4176      32
         816    3100

Out of 4208 edible mushrooms, 4176 were correctly predicted to be in group 2 (edible group), and 32 were incorrectly predicted to be in group 1 (poisonous group). Similarly, out of 3916 poisonous mushrooms, 3100 were correctly predicted to be in group 1 (poisonous group), and 816 were incorrectly predicted to be in group 2 (edible group).

Given this confusion matrix, calculate the accuracy, which is the proportion of true results (both true positives and true negatives) against the overall data, and the precision, which is the proportion of the true positives against all the positive results (true positives and false positives).

    accuracy = (confMatrix(1,1)+confMatrix(2,2))/(sum(sum(confMatrix)))

    accuracy = 0.8956

    precision = confMatrix(1,1) / (confMatrix(1,1)+confMatrix(2,1))

    precision = 0.8365
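The same two formulas apply to any 2-by-2 confusion matrix. As a Python sketch (an illustration only; the matrix layout matches the `confusionmat` output above, with rows as true labels and columns as predicted labels):

```python
def accuracy_precision(cm):
    """Accuracy and precision from a 2x2 confusion matrix
    [[TP, FN], [FP, TN]]: rows are true labels, columns predicted."""
    tp, fn = cm[0]
    fp, tn = cm[1]
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total       # all correct / all observations
    precision = tp / (tp + fp)         # true positives / predicted positives
    return accuracy, precision

acc, prec = accuracy_precision([[4176, 32], [816, 3100]])
print(round(acc, 4), round(prec, 4))  # -> 0.8956 0.8365
```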

The results indicate that applying the k-medoids algorithm to the categorical features of the mushrooms produced clusters that were associated with edibility.

## Input Arguments


`X` — Data

numeric matrix

Data, specified as a numeric matrix. The rows of `X` correspond to observations, and the columns correspond to variables.

`k` — Number of medoids

positive integer

Number of medoids in the data, specified as a positive integer.

### Name-Value Arguments

Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

*Before R2021a, use commas to separate each name and value, and enclose `Name` in quotes.*

**Example: **`'Distance','euclidean','Replicates',3,'Options',statset('UseParallel',1)` specifies Euclidean distance, three replicate medoids at different starting values, and to use parallel computing.

`Algorithm` — Algorithm to find medoids

`'pam'` | `'small'` | `'clara'` | `'large'`

Algorithm to find medoids, specified as the comma-separated pair consisting of `'Algorithm'` and `'pam'`, `'small'`, `'clara'`, or `'large'`. The default algorithm depends on the number of rows of `X`.

- If the number of rows of `X` is less than 3000, `'pam'` is the default algorithm.
- If the number of rows is between 3000 and 10000, `'small'` is the default algorithm.
- For all other cases, `'large'` is the default algorithm.

You can override the default choice by explicitly stating the algorithm. This table summarizes the available algorithms.

Algorithm | Description |
---|---|
`'pam'` | Partitioning Around Medoids (PAM) is the classical algorithm for solving the *k*-medoids problem. The algorithm proceeds as follows. Build-step: each of *k* clusters is associated with a potential medoid. This assignment is performed using a technique specified by the `'Start'` name-value pair argument. Swap-step: within each cluster, each point is tested as a potential medoid by checking if the sum of within-cluster distances gets smaller using that point as the medoid. If so, the point is defined as a new medoid. Every point is then assigned to the cluster with the closest medoid. The algorithm iterates the build- and swap-steps until the medoids do not change, or other termination criteria are met. The algorithm can produce better solutions than the other algorithms in some situations, but it can be prohibitively long running. |
`'small'` | Use an algorithm similar to the k-means algorithm to find *k* medoids. This option employs a variant of the Lloyd's iterations based on [2]. The algorithm proceeds as follows. For each point in each cluster, calculate the sum of distances from the point to every other point in the cluster. Choose the point that minimizes the sum as the new medoid. Update the cluster membership for each data point to reflect the new medoid. The algorithm repeats these steps until no further updates occur or other termination criteria are met. The algorithm has an optional PAM-like online update phase (specified by the `'OnlinePhase'` name-value pair argument) that improves cluster quality. |
`'clara'` | Clustering LARge Applications (CLARA) [1] repeatedly performs the PAM algorithm on random subsets of the data. It aims to overcome scaling challenges posed by the PAM algorithm through sampling. The algorithm proceeds as follows. Select a subset of the data and apply the PAM algorithm to the subset. Assign points of the full data set to clusters by picking the closest medoid. The algorithm repeats these steps until the medoids do not change, or other termination criteria are met. For the best performance, it is recommended that you perform multiple replicates. By default, the program performs five replicates. Each replicate samples the number of rows of `X` specified by the `'NumSamples'` name-value pair argument. |
`'large'` | This is similar to the `'small'` algorithm, but during the swap considerations the algorithm examines only a random sample of cluster members, as controlled by the `'PercentNeighbors'` name-value pair argument. |

**Example: **`'Algorithm','pam'`

`OnlinePhase` — Flag to perform PAM-like online update phase

`'on'` (default) | `'off'`

A flag to perform a PAM-like online update phase, specified as a comma-separated pair consisting of `'OnlinePhase'` and `'on'` or `'off'`.

If it is `'on'`, then `kmedoids` performs a PAM-like update to the medoids after the Lloyd iterations in the `small` and `large` algorithms. During this online update phase, the algorithm chooses a small subset of data points in each cluster that are the farthest from and nearest to the medoid. For each chosen point, it reassigns the clustering of the entire data set and checks if this creates a smaller sum of distances than the best known.

In other words, the swap considerations are limited to the points near the medoids and far from the medoids. The near points are considered in order to refine the clustering. The far points are considered in order to escape local minima. Turning on this feature tends to improve the quality of solutions generated by both algorithms. Total run time tends to increase as well, but the increase typically is less than one iteration of PAM.

**Example: **`'OnlinePhase','off'`

`Distance` — Distance metric

`'sqEuclidean'` (default) | `'euclidean'` | character vector | string scalar | function handle | ...

Distance metric, specified as the name of a distance metric described in the following table, or a function handle. `kmedoids` minimizes the sum of medoid-to-cluster-member distances.

Value | Description |
---|---|
`'sqEuclidean'` | Squared Euclidean distance (default) |
`'euclidean'` | Euclidean distance |
`'seuclidean'` | Standardized Euclidean distance. Each coordinate difference between observations is scaled by dividing by the corresponding element of the standard deviation computed from `X`. |
`'cityblock'` | City block distance |
`'minkowski'` | Minkowski distance. The exponent is 2. |
`'chebychev'` | Chebychev distance (maximum coordinate difference) |
`'mahalanobis'` | Mahalanobis distance using the sample covariance of `X` |
`'cosine'` | One minus the cosine of the included angle between points (treated as vectors) |
`'correlation'` | One minus the sample correlation between points (treated as sequences of values) |
`'spearman'` | One minus the sample Spearman's rank correlation between observations (treated as sequences of values) |
`'hamming'` | Hamming distance, which is the percentage of coordinates that differ |
`'jaccard'` | One minus the Jaccard coefficient, which is the percentage of nonzero coordinates that differ |
`@distfun` | Custom distance function handle, of the form `function D2 = distfun(ZI,ZJ)`. `ZI` is a 1-by-*n* vector containing a single observation. `ZJ` is an *m2*-by-*n* matrix containing multiple observations; `distfun` must accept a matrix `ZJ` with an arbitrary number of observations. `D2` is an *m2*-by-1 vector of distances, and `D2(k)` is the distance between observations `ZI` and `ZJ(k,:)`. If your data is not sparse, you can generally compute distance more quickly by using a built-in distance instead of a function handle. |

For the definition of each distance metric, see Distance Metrics.

**Example: **`'Distance','hamming'`

`Options` — Options to control iterative algorithm to minimize fitting criteria

`[]` (default) | structure array returned by `statset`

Options to control the iterative algorithm to minimize fitting criteria, specified as the comma-separated pair consisting of `'Options'` and a structure array returned by `statset`. Supported fields of the structure array specify options for controlling the iterative algorithm. This table summarizes the supported fields.

Field | Description |
---|---|
`Display` | Level of display output. Choices are `'off'` (default) and `'iter'`. |
`MaxIter` | Maximum number of iterations allowed. The default is `100`. |
`UseParallel` | If `true`, compute in parallel. If Parallel Computing Toolbox™ is not available, then computation occurs in serial mode. The default is `false`, meaning serial computation. |
`UseSubstreams` | Set to `true` to compute in a reproducible fashion. The default is `false`. To compute reproducibly, you must also set `Streams` to a type allowing substreams: `'mlfg6331_64'` or `'mrg32k3a'`. |
`Streams` | A `RandStream` object or cell array of such objects. For details about these options and parallel computing in Statistics and Machine Learning Toolbox™, see Speed Up Statistical Computations or enter `help parallelstats` at the command line. |

**Example: **`'Options',statset('Display','off')`

`Replicates` — Number of times to repeat clustering using new initial cluster medoid positions

positive integer

Number of times to repeat clustering using new initial cluster medoid positions, specified as a positive integer. The default value depends on the choice of algorithm. For `pam` and `small`, the default is 1. For `clara`, the default is 5. For `large`, the default is 3.

**Example: **`'Replicates',4`

`NumSamples` — Number of samples to take from data when executing `clara` algorithm

`40+2*k` (default) | positive integer

Number of samples to take from the data when executing the `clara` algorithm, specified as a positive integer. The default number of samples is calculated as `40+2*k`.

**Example: **`'NumSamples',160`

`PercentNeighbors` — Percent of data set to examine using `large` algorithm

0.001 (default) | scalar value between 0 and 1

Percent of the data set to examine using the `large` algorithm, specified as a positive number.

The program examines `percentneighbors*size(X,1)` number of neighbors for the medoids. If there is no improvement in the within-cluster sum of distances, then the algorithm terminates.

The value of this parameter is between 0 and 1, where a value closer to 1 tends to give higher quality solutions, but the algorithm takes longer to run, and a value closer to 0 tends to give lower quality solutions, but finishes faster.

**Example: **`'PercentNeighbors',0.01`

`Start` — Method for choosing initial cluster medoid positions

`'plus'` (default) | `'sample'` | `'cluster'` | matrix

Method for choosing initial cluster medoid positions, specified as the comma-separated pair consisting of `'Start'` and `'plus'`, `'sample'`, `'cluster'`, or a matrix. This table summarizes the available methods.

Method | Description |
---|---|
`'plus'` (default) | Select `k` observations from `X` according to the k-means++ algorithm for cluster center initialization. |
`'sample'` | Select `k` observations from `X` at random. |
`'cluster'` | Perform a preliminary clustering phase on a random subsample (10%) of `X`. This preliminary phase is itself initialized using `sample`, that is, the observations are selected at random. |
matrix | A custom `k`-by-*p* matrix of starting locations. In this case, you can pass in `[]` for the `k` input argument, and `kmedoids` infers `k` from the first dimension of the matrix. You can also supply a 3-D array, implying a value for `'Replicates'` from the array's third dimension. |

**Example: **`'Start','sample'`

**Data Types: **`char` | `string` | `single` | `double`
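The `'plus'` method follows the k-means++ seeding idea: the first candidate is drawn uniformly at random, and each subsequent one is drawn with probability proportional to its squared distance from the nearest candidate already chosen. A minimal Python sketch of that selection rule (an illustration only with a made-up 1-D point set, not the toolbox implementation):

```python
import random

def kmeanspp_seed(points, k, dist, seed=0):
    """Pick k starting indices: the first uniformly at random, the rest
    weighted by squared distance to the nearest index already chosen."""
    rng = random.Random(seed)
    chosen = [rng.randrange(len(points))]
    while len(chosen) < k:
        # Already-chosen points get weight 0, so they are never re-picked.
        weights = [min(dist(p, points[c]) for c in chosen) ** 2
                   for p in points]
        chosen.append(rng.choices(range(len(points)), weights=weights)[0])
    return chosen

pts = [0.0, 0.1, 9.9, 10.0]
seeds = kmeanspp_seed(pts, 2, lambda a, b: abs(a - b))
```

With two well-separated groups, the squared-distance weighting makes it very likely that the two seeds land in different groups, which is the point of the method.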

## Output Arguments


`idx` — Medoid indices

numeric column vector

Medoid indices, returned as a numeric column vector. `idx` has as many rows as `X`, and each row indicates the medoid assignment of the corresponding observation.

`C` — Cluster medoid locations

numeric matrix

Cluster medoid locations, returned as a numeric matrix. `C` is a *k*-by-*p* matrix, where row *j* is the medoid of cluster *j*.

`sumd` — Within-cluster sums of point-to-medoid distances

numeric column vector

Within-cluster sums of point-to-medoid distances, returned as a numeric column vector. `sumd` is a *k*-by-1 vector, where element *j* is the sum of point-to-medoid distances within cluster *j*.

`D` — Distances from each point to every medoid

numeric matrix

Distances from each point to every medoid, returned as a numeric matrix. `D` is an *n*-by-*k* matrix, where element (*j*,*m*) is the distance from observation *j* to medoid *m*.

`midx` — Index to `X`

column vector

Index to `X`, returned as a column vector of indices. `midx` is a *k*-by-1 vector, and the indices satisfy `C = X(midx,:)`.

`info` — Algorithm information

struct

Algorithm information, returned as a struct. `info` contains options used by the function when executed, such as the *k*-medoid clustering algorithm (`algorithm`), the method used to choose initial cluster medoid positions (`start`), the distance metric (`distance`), the number of iterations taken in the best replicate (`iterations`), and the replicate number of the returned results (`bestReplicate`).

## More About


*k*-medoids Clustering

*k*-medoids clustering is a partitioning method commonly used in domains that require robustness to outlier data, arbitrary distance metrics, or data for which the mean or median has no clear definition.

It is similar to *k*-means, and the goal of both methods is to divide a set of measurements or observations into *k* subsets or clusters so that the subsets minimize the sum of distances between a measurement and a center of the measurement's cluster. In the *k*-means algorithm, the center of the subset is the mean of measurements in the subset, often called a centroid. In the *k*-medoids algorithm, the center of the subset is a member of the subset, called a medoid.

The *k*-medoids algorithm returns medoids, which are actual data points in the data set. This allows you to use the algorithm in situations where the mean of the data does not exist within the data set. This is the main difference between *k*-medoids and *k*-means, where the centroids returned by *k*-means may not be within the data set. Hence *k*-medoids is useful for clustering categorical data, where a mean is impossible to define or interpret.

The function `kmedoids` provides several iterative algorithms that minimize the sum of distances from each object to its cluster medoid, over all clusters. One of the algorithms is called partitioning around medoids (PAM) [1], which proceeds in two steps.

- Build-step: Each of *k* clusters is associated with a potential medoid. This assignment is performed using a technique specified by the `'Start'` name-value pair argument.
- Swap-step: Within each cluster, each point is tested as a potential medoid by checking if the sum of within-cluster distances gets smaller using that point as the medoid. If so, the point is defined as a new medoid. Every point is then assigned to the cluster with the closest medoid.

The algorithm iterates the build- and swap-steps until the medoids do not change, or other termination criteria are met.
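As an illustration of the build- and swap-steps, here is a minimal Python sketch of a PAM-style search (a simplification, not the toolbox implementation: the build step is a plain random sample, and the point set and distance function are made-up examples):

```python
import random

def pam(points, k, dist, seed=0):
    """Minimal PAM sketch: a random build step, then greedy swaps of one
    medoid for one non-medoid until total distance stops improving."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)  # build step

    def cost(meds):
        # Sum over all points of the distance to the nearest medoid.
        return sum(min(dist(p, points[m]) for m in meds) for p in points)

    best = cost(medoids)
    improved = True
    while improved:  # swap step: repeat until no single swap helps
        improved = False
        for i in range(k):
            for cand in range(len(points)):
                if cand in medoids:
                    continue
                trial = medoids[:i] + [cand] + medoids[i + 1:]
                c = cost(trial)
                if c < best:
                    medoids, best = trial, c
                    improved = True
    return medoids, best

# Two tight groups on a line; the group centers are the optimal medoids.
pts = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
meds, total = pam(pts, 2, lambda a, b: abs(a - b))
print(sorted(pts[m] for m in meds))  # -> [0.1, 10.1]
```

The cost strictly decreases on every accepted swap, so the loop always terminates, at a local minimum of the single-swap neighborhood.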

You can control the details of the minimization using several optional input parameters to `kmedoids`, including ones for the initial values of the cluster medoids, and for the maximum number of iterations. By default, `kmedoids` uses the k-means++ algorithm for cluster medoid initialization and the squared Euclidean metric to determine distances.

## References

[1] Kaufman, L., and Rousseeuw, P. J. (2009). Finding Groups in Data: An Introduction to Cluster Analysis. Hoboken, New Jersey: John Wiley & Sons, Inc.

[2] Park, H-S., and Jun, C-H. (2009). A simple and fast algorithm for K-medoids clustering. Expert Systems with Applications. 36, 3336-3341.

[3] Schlimmer, J. S. (1987). Concept Acquisition Through Representational Adjustment (Technical Report 87-19). Doctoral dissertation, Department of Information and Computer Science, University of California, Irvine.

[4] Iba, W., Wogulis, J., and Langley, P. (1988). Trading off Simplicity and Coverage in Incremental Concept Learning. In Proceedings of the 5th International Conference on Machine Learning, 73-79. Ann Arbor, Michigan: Morgan Kaufmann.

[5] Duch, W., Adamczak, R., and Grabczewski, K. (1996). Extraction of logical rules from training data using backpropagation networks. Proc. of the 1st Online Workshop on Soft Computing, 19-30, pp. 25-30.

[6] Duch, W., Adamczak, R., Grabczewski, K., Ishikawa, M., and Ueda, H. (1997). Extraction of crisp logical rules using constrained backpropagation networks - comparison of two new approaches. Proc. of the European Symposium on Artificial Neural Networks (ESANN'97), Bruges, Belgium 16-18.

[7] Bache, K. and Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.

## Extended Capabilities

### Automatic Parallel Support

Accelerate code by automatically running computation in parallel using Parallel Computing Toolbox™.

To run in parallel, specify the `Options` name-value argument in the call to this function and set the `UseParallel` field of the options structure to `true` using `statset`:

`Options=statset(UseParallel=true)`

For more information about parallel computing, see Run MATLAB Functions with Automatic Parallel Support (Parallel Computing Toolbox).

## Version History

**Introduced in R2014b**

## See Also

clusterdata | kmeans | linkage | silhouette | pdist | evalclusters
