Multi-Class Classification

Classification in Machine Learning and Supervised Learning

Before diving into classification, it helps to understand what supervised learning is. Imagine learning a new mathematical concept: after attempting a problem, you refer to the provided solutions to confirm your answer is correct. Once you gain enough confidence with such problems, you no longer need to check the solutions and can solve them independently. Machine learning models trained with supervised learning work in a similar way. We equip the model with the correct labels along with our input data; during training, the model observes how these labels correspond to the data, helping it pick out patterns between the data and the labels. Supervised learning can be broadly categorized into classification and regression. In what follows, we'll only be discussing classification, specifically multi-class classification.

The Process of Classification

Classification is a process where we identify, understand, and sort items or ideas into predetermined categories, often referred to as "sub-populations". In machine learning, various techniques use input training data to assign future data points to these predefined categories; the underlying algorithm essentially predicts the probability that incoming data belongs to each of the specific categories.

Classification tasks in machine learning can be primarily branched into:

  1. Binary Classification: Here, we have a classification problem with two class labels. Usually, one class signifies the normal condition while the other denotes an abnormal condition.
  2. Multi-Class Classification: In these tasks, we have more than two class labels. Unlike binary classification, multi-class classification doesn't distinguish between normal and abnormal results; instead, each instance is assigned to one of several predefined classes.
  3. Multi-Label Classification: Here, we have more than two class labels, and one or more of them can be predicted for each instance. This differs from binary and multi-class classification, where only a single class label is predicted per instance. Typical label encodings for all three settings are sketched right after this list.
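
To make the distinction concrete, the minimal sketch below shows how the three label formats are commonly encoded as NumPy arrays. The specific class meanings in the comments are illustrative assumptions, not taken from any particular dataset.

```python
import numpy as np

# Binary classification: each sample gets one of two labels
# (e.g. 0 = normal condition, 1 = abnormal condition).
y_binary = np.array([0, 1, 0, 0, 1])

# Multi-class classification: each sample gets exactly one of K > 2 labels
# (e.g. 0 = cat, 1 = dog, 2 = elephant -- hypothetical classes).
y_multiclass = np.array([0, 2, 1, 2, 0])

# Multi-label classification: each sample may carry several labels at once,
# commonly encoded as a binary indicator matrix with one column per label.
y_multilabel = np.array([
    [1, 0, 1],  # sample 0 carries labels 0 and 2
    [0, 1, 0],  # sample 1 carries label 1 only
    [1, 1, 1],  # sample 2 carries all three labels
])
```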

Multi-Class Classification in Detail

Apart from regression, multi-class classification is one of the most widely performed machine learning tasks.

The underlying principle remains the same whether it's written as "multiclass" or "multi-class": an ML classification problem with more than two outcomes or classes is labelled multi-class classification. For example, classifying animal species in images from an encyclopedia with a machine learning model is a case of multi-class classification, since each picture can belong to one of many distinct animal categories. However, each sample is assigned to exactly one class in multi-class classification (for instance, an elephant is just an elephant, not a lemur).

We are given a collection of training samples divided into K different categories, and we devise an ML model that predicts which of these classes previously unseen data belongs to. The model learns the characteristic patterns of each class from the training dataset and applies them to classify future data.
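
As a concrete illustration of that workflow, here is a minimal sketch using scikit-learn. The choice of the Iris dataset (K = 3 classes) and of logistic regression as the model are assumptions made purely for the example; any multi-class dataset and classifier would follow the same fit/predict pattern.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A small dataset with K = 3 classes (assumption: Iris, chosen for brevity).
X, y = load_iris(return_X_y=True)

# Hold out some "previously unseen" data to evaluate on.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Fit a multi-class model on the training samples.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Predict the class of unseen samples and report accuracy.
y_pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))

# The model can also report the estimated probability of each class.
print("class probabilities for one sample:", clf.predict_proba(X_test[:1]))
```

The final line reflects the earlier point that a classifier effectively estimates, for each incoming sample, the probability of belonging to each predefined category.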

A Few Popular Multi-Class Classification Algorithms

  • Decision Trees: This method builds the classification model as a tree structure of if-then rules. These classification rules are both exhaustive and mutually exclusive. The data is repeatedly split into smaller subsets, incrementally building up the decision tree. The rules are learned from the training data, and once a rule is learned, the tuples it covers are removed; this process repeats until a termination condition is met on the training set. A short sketch of a decision-tree classifier follows this list.

  • k-Nearest Neighbors: This lazy learning method operates in an n-dimensional space and retains all instances of the training data. It is termed "lazy" because it stores the training data instead of building a generalized internal model. It is a supervised method that uses a set of already labelled points to label new ones: each new point is classified by a majority vote of its k closest labelled neighbours, and whichever label receives the most votes among those neighbours becomes the label of the new point. A k-NN sketch also follows this list.
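
Below is a minimal decision-tree sketch using scikit-learn's DecisionTreeClassifier; the Iris dataset and the max_depth value are illustrative assumptions, chosen so the learned if-then rules stay readable when printed.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a tree; limiting depth keeps the learned if-then rules compact.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

# The learned splits can be printed as nested if-then rules.
print(export_text(tree, feature_names=list(load_iris().feature_names)))
print("test accuracy:", tree.score(X_test, y_test))
```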
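And here is a similarly hedged k-nearest-neighbours sketch with scikit-learn's KNeighborsClassifier; the value k = 5 and the dataset are again assumptions for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Lazy" learner: fit() essentially just stores the training points.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

# Each test point is labelled by a majority vote of its 5 nearest neighbours.
print("test accuracy:", knn.score(X_test, y_test))
```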
