Understanding the Curse of Dimensionality

Contributed by: Arun K

What is the curse of dimensionality?

The Curse of Dimensionality refers to a set of problems that arise when working with high-dimensional data. The dimension of a dataset corresponds to the number of attributes or features it contains, and a dataset with a large number of attributes, generally of the order of a hundred or more, is referred to as high-dimensional. Some of the difficulties that come with high-dimensional data manifest while analyzing or visualizing the data to identify patterns, and others manifest while training machine learning models. The difficulties related to training machine learning models on high-dimensional data are collectively referred to as the 'Curse of Dimensionality'. Its two best-known aspects, 'data sparsity' and 'distance concentration', are discussed in the following sections.

Domains of the Curse of Dimensionality

There are several domains where the effects of the curse of dimensionality can be observed directly, with machine learning being among the most affected. Three representative domains are listed below:

Anomaly Detection

Anomaly detection is used to find unforeseen items or events in a dataset. In high-dimensional data, anomalies often involve a large number of attributes that are irrelevant in nature, and certain objects occur far more frequently in the neighbour lists of other points than others, which distorts neighbourhood-based detection.

Combinatorics

Whenever the number of possible input combinations increases, the complexity grows rapidly, and the curse of dimensionality occurs.

Machine Learning

In machine learning, even a marginal increase in dimensionality requires a large increase in the volume of data in order to maintain the same level of performance. The curse of dimensionality is a by-product of this phenomenon, which appears with high-dimensional data.

How to Combat the Curse of Dimensionality?

Combating the curse of dimensionality is manageable thanks to dimensionality reduction. Dimensionality reduction is the process of reducing the number of input variables in a dataset: high-dimensional data is converted into a lower-dimensional representation while preserving as much of the meaningful information as possible.

The reduced dataset contains no redundant variables, which makes it simpler for analysts to analyze the data and allows algorithms to produce results faster.

Data Sparsity

Supervised machine learning models are trained to accurately predict the outcome for a given input sample. While training a model, the available data is split so that part of it is used to train the model and part is used to evaluate how the model performs on unseen data. This evaluation step helps us establish whether the model is generalized. Model generalization refers to the model's ability to accurately predict the outcome for unseen input data; importantly, the unseen input data has to come from the same distribution as the data used to train the model. A generalized model's prediction accuracy on unseen data should be very close to its accuracy on the training data. An effective way to build a generalized model is to capture the targets for as many different combinations of the predictor-variable values as possible.

For instance, if we are trying to predict a target that depends on two attributes, gender and age group, we should ideally capture the targets for all possible combinations of values of the two attributes, as shown in figure 1. If this data is used to train a model capable of learning the mapping between the attribute values and the target, its performance can generalize: as long as future unseen data comes from this distribution of value combinations, the model will predict the target accurately.

In the above example, we assume that the target value depends only on gender and age group. If the target also depends on a third attribute, say body type, the number of training samples required to cover all combinations increases dramatically, as shown in figure 2: for two variables, we needed eight training samples; for three variables, we need 24.

The above examples show that, as the number of attributes or dimensions increases, the number of training samples required for the model to generalize grows multiplicatively with the number of values each added attribute can take; in other words, it grows exponentially in the number of attributes.
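A minimal sketch of this growth is shown below. The category counts (two genders, four age groups, three body types) are illustrative assumptions chosen to match the figures described above, not values from a real dataset.

```python
# Enumerate all attribute-value combinations a training set would need to
# cover; each added attribute multiplies the count by its number of values.
from itertools import product

gender = ["male", "female"]
age_group = ["0-20", "21-40", "41-60", "60+"]
body_type = ["underweight", "normal", "overweight"]

two_attrs = list(product(gender, age_group))
three_attrs = list(product(gender, age_group, body_type))

print(len(two_attrs))    # 8 combinations for two attributes
print(len(three_attrs))  # 24 combinations for three attributes
```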

In reality, the available training samples may not include observed targets for all combinations of the attributes, because some combinations occur far more often than others. As a result, the training samples available for building the model may not capture all possible combinations. This aspect, where the training samples do not cover all combinations, is referred to as 'data sparsity', or simply 'sparsity', in high-dimensional data. Data sparsity is one facet of the curse of dimensionality. Training a model on sparse data can lead to high variance, or overfitting: the model learns from the frequently occurring combinations of attributes and predicts those accurately, but when less frequently occurring combinations are fed to it at prediction time, it may not predict the outcome accurately.

Distance Concentration

Another facet of the curse of dimensionality is 'distance concentration'. Distance concentration refers to the problem that, as the dimensionality of the data increases, all pairwise distances between different samples/points in the space converge towards the same value. Several machine learning methods, such as clustering and nearest-neighbour methods, use distance-based metrics to identify the similarity or proximity of samples. Due to distance concentration, the concept of proximity or similarity may no longer be qualitatively meaningful in higher dimensions. Figure 3 shows this aspect graphically [1]: a fixed number of random points are generated from a uniform distribution on a 'd'-dimensional torus, where 'd' is the number of dimensions considered at a time.

A density plot of the distances between the points, against the frequency with which each distance occurs, is created for different dimensions. For a one-dimensional torus, the density is approximately uniform. As the number of dimensions increases, the spread of the frequency plot decreases, indicating that the distances between different samples or points tend towards a single value. Figure 4 shows the corresponding decrease in the standard deviation of the distance distribution as the number of dimensions increases.
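The same effect is easy to reproduce numerically. The sketch below is a simplified version of the experiment: it samples points uniformly in the unit hypercube rather than on a torus (an assumption made here for brevity), and reports how the spread of pairwise distances shrinks relative to their mean as the dimension grows.

```python
# Simulate distance concentration: pairwise distances between points drawn
# uniformly in [0, 1]^d cluster around a single value as d increases.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
for d in [1, 2, 10, 100, 1000]:
    points = rng.random((200, d))   # 200 random points in d dimensions
    dists = pdist(points)           # all pairwise Euclidean distances
    # The ratio of spread to mean shrinks as d increases
    print(f"d={d:5d}  mean={dists.mean():8.3f}  std/mean={dists.std() / dists.mean():.3f}")
```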

Aggarwal [2] presented another interesting aspect of distance concentration: for Lk norm-based distance metrics, their relevance in higher dimensions depends on the value of k. The L1 norm, or Manhattan distance, is preferable to the L2 norm, or Euclidean distance, for high-dimensional data processing. This indicates that a distance metric chosen for algorithms such as KNN or K-means clustering that works well in lower dimensions may not work in higher dimensions.
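As a rough check of this observation, the sketch below compares the relative contrast, i.e. the farthest distance from a query point minus the nearest, divided by the nearest, under the L1 and L2 norms on synthetic uniform data. The sample size and dimensions are illustrative assumptions; the L1 contrast typically decays more slowly.

```python
# Compare how quickly L1 and L2 distances lose contrast as dimension grows.
import numpy as np

rng = np.random.default_rng(0)
for d in [2, 10, 100, 1000]:
    data = rng.random((500, d))     # 500 uniform points in d dimensions
    query = rng.random(d)           # a random query point
    for name, ord_ in [("L1", 1), ("L2", 2)]:
        dists = np.linalg.norm(data - query, ord=ord_, axis=1)
        contrast = (dists.max() - dists.min()) / dists.min()
        print(f"d={d:5d} {name}: relative contrast = {contrast:.3f}")
```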

Mitigating the Curse of Dimensionality

To mitigate the problems associated with high-dimensional data, a suite of techniques generally referred to as 'dimensionality reduction techniques' is used. Dimensionality reduction techniques fall into one of two categories: 'feature selection' or 'feature extraction'.

Feature Selection Techniques

In feature selection techniques, the attributes are tested for their usefulness and are then either selected or eliminated. Some commonly used feature selection techniques are discussed below.

Low Variance filter: In this technique, the variance in the distribution of each attribute in the dataset is compared, and attributes with very low variance are eliminated. Attributes that do not have much variance assume an almost constant value and do not contribute to the predictability of the model.
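A minimal sketch of this filter using scikit-learn's VarianceThreshold follows; the tiny data matrix and the 0.01 threshold are illustrative assumptions.

```python
# Drop attributes whose variance falls below a threshold.
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.array([[0.0, 2.5, 5.0],
              [0.0, 1.5, 3.2],
              [0.0, 2.0, 4.7]])    # first column is constant

selector = VarianceThreshold(threshold=0.01)
X_reduced = selector.fit_transform(X)   # near-constant column is dropped
print(selector.get_support())           # [False  True  True]
```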

High Correlation filter: In this technique, the pairwise correlation between attributes is determined. Of each pair of attributes showing very high correlation, one is eliminated and the other retained; the variability of the eliminated attribute is captured through the retained one.
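A minimal sketch with pandas is shown below; the column names and the 0.9 cutoff are illustrative assumptions. Only the upper triangle of the correlation matrix is scanned so that exactly one attribute of each highly correlated pair is dropped.

```python
# Drop one attribute from each highly correlated pair.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
height_cm = rng.normal(170, 10, 100)
df = pd.DataFrame({
    "height_cm": height_cm,
    "height_in": height_cm / 2.54,          # perfectly correlated copy
    "weight_kg": rng.normal(70, 15, 100),
})

corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
df_reduced = df.drop(columns=to_drop)
print(to_drop)   # ['height_in']
```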

Multicollinearity: In some cases, high correlation may not be found between pairs of attributes, but if each attribute is regressed as a function of the others, we may find that the variability of some attributes is completely captured by the rest. This aspect is referred to as multicollinearity, and the Variance Inflation Factor (VIF) is a popular technique used to detect it. Attributes with high VIF values, generally greater than 10, are eliminated.
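A minimal sketch using statsmodels' VIF implementation follows; the synthetic columns are illustrative assumptions, with x3 built as a near-linear combination of x1 and x2 to trigger high VIF values.

```python
# Detect multicollinearity via the variance inflation factor.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
df = pd.DataFrame({
    "x1": x1,
    "x2": x2,
    "x3": 2 * x1 + 3 * x2 + rng.normal(scale=0.1, size=200),  # near-linear combination
})

for i, col in enumerate(df.columns):
    print(col, variance_inflation_factor(df.values, i))
# The collinear columns show VIF far above the common cutoff of 10
```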

Feature Ranking: Decision tree models such as CART can rank the attributes based on their importance, that is, their contribution to the predictability of the model. In high-dimensional data, some of the lower-ranked variables can be eliminated to reduce the dimensions.
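A minimal sketch of tree-based feature ranking with scikit-learn follows; the breast-cancer dataset is used purely for illustration.

```python
# Rank attributes by their importance in a CART-style decision tree.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

ranking = sorted(zip(X.columns, tree.feature_importances_),
                 key=lambda pair: pair[1], reverse=True)
for name, importance in ranking[:5]:   # keep only the top-ranked attributes
    print(f"{name}: {importance:.3f}")
```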

Feature Extraction Techniques 

In feature extraction techniques, the high-dimensional attributes are combined into low-dimensional components (PCA or ICA) or factored into low-dimensional factors (FA).

Principal Component Analysis (PCA)

Principal Component Analysis, or PCA, is a dimensionality reduction technique in which high-dimensional, correlated data is transformed into a lower-dimensional set of uncorrelated components, referred to as principal components. These lower-dimensional principal components capture most of the information in the high-dimensional dataset. An 'n'-dimensional dataset is transformed into 'n' principal components, and a subset of these is selected based on the percentage of the variance in the data that the principal components are intended to capture. Figure 5 shows a simple example in which 10-dimensional data is transformed into 10 principal components; to capture 90% of the variance, only 3 principal components are needed. Hence, we have reduced 10-dimensional data to 3 dimensions.

Figure 5. Example of converting 10-dimensional data to 3-dimensional data through PCA
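A minimal sketch of this workflow follows: standardize the data, fit PCA, and keep just enough components to explain 90% of the variance. The synthetic 10-dimensional data, built from 3 underlying signals, is an illustrative assumption mirroring the example in figure 5.

```python
# Reduce 10-dimensional data by keeping components covering 90% of variance.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3))                   # 3 underlying signals
X = latent @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(500, 10))

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.90)                         # keep 90% of the variance
X_reduced = pca.fit_transform(X_std)
print(X_reduced.shape[1])                            # about 3 components suffice
print(pca.explained_variance_ratio_.cumsum())        # cumulative variance captured
```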

Factor Analysis (FA)

Factor analysis is based on the assumption that all the observed attributes in a dataset can be represented as weighted linear combinations of latent factors. The intuition behind this technique is that 'n'-dimensional data can be represented by 'm' factors (m < n). The main difference between PCA and FA is that while PCA synthesizes components from the base attributes, FA decomposes the attributes into latent factors, as shown in figure 6.
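A minimal sketch using scikit-learn's FactorAnalysis follows; the synthetic data and the choice of 3 factors are illustrative assumptions, with attributes generated as weighted combinations of latent factors plus noise, exactly as the model assumes.

```python
# Decompose observed attributes into a smaller set of latent factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
factors = rng.normal(size=(500, 3))                  # latent factors
loadings = rng.normal(size=(3, 10))                  # weights per attribute
X = factors @ loadings + 0.1 * rng.normal(size=(500, 10))

fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(X)     # each sample's position in factor space
print(fa.components_.shape)      # (3, 10): estimated factor loadings
```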

Independent Component Analysis (ICA)

ICA assumes that the observed attributes are essentially mixtures of independent components, and it resolves the variables into a combination of these independent components. ICA is considered more robust than PCA and is generally used when PCA and FA fail.
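A minimal sketch with scikit-learn's FastICA follows, unmixing two independent source signals from their observed mixtures; the signals and mixing matrix are illustrative assumptions based on the classic "cocktail party" toy setup.

```python
# Recover independent source signals from linear mixtures with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]  # two independent signals
mixing = np.array([[1.0, 0.5],
                   [0.4, 1.0]])
X = sources @ mixing.T                                   # observed mixtures

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)   # estimated independent components
print(recovered.shape)             # (2000, 2)
```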
