Different kinds of loss functions in machine learning

 





A loss function measures how well your model predicts the expected outcome (or value). We convert the learning task into an optimization problem: we define a loss function and then optimize the algorithm to minimize that loss function.


The different kinds of loss functions in machine learning are as follows.

 

 Regression loss functions 

Linear regression is a fundamental example here. Regression models predict a continuous value, such as the relationship between a dependent variable (Y) and an independent variable (X); we try to fit the best line through these data points, and the loss function measures how far the predictions fall from the actual values.

 

  •  Mean Squared Error Loss 

MSE (L2 loss) measures the average squared difference between the actual values and the values predicted by the model. The output is a single number summarizing a whole set of predictions. Our goal is to minimize MSE to improve the accuracy of the model.
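A minimal NumPy sketch of this definition (the function name `mse` is ours, not from any particular library):

```python
import numpy as np

def mse(y_true, y_pred):
    # Average of the squared differences between actual and predicted values
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_true - y_pred) ** 2)

# Errors of 1 and 3 give MSE of (1 + 9) / 2 = 5
print(mse([2.0, 4.0], [3.0, 7.0]))  # 5.0
```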


  •  Mean Squared Logarithmic Error Loss (MSLE) 

MSLE measures the ratio between the actual and predicted values by working on their logarithms, which introduces an asymmetry in the error curve: MSLE only cares about the relative (percentage) difference between actual and predicted values. It can be a reasonable choice of loss function when predicting continuous quantities such as house sale prices or bakery sale prices.
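A small sketch, assuming the common `log(1 + x)` form of MSLE (the function name `msle` is ours). Note how two predictions with the same relative error receive nearly the same loss, even though their absolute errors differ by a factor of ten:

```python
import numpy as np

def msle(y_true, y_pred):
    # Squared difference of log(1 + value): penalizes relative error,
    # so over-predicting 10 as 20 costs roughly as much as 100 as 200
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)

print(msle([10.0], [20.0]))
print(msle([100.0], [200.0]))  # close to the value above
```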

  • Mean Absolute Error (MAE) 

MAE (L1 loss) calculates the average of the absolute differences between actual and predicted values; that is, it measures the average magnitude of the errors in a set of predictions. Mean squared error is easier to optimize, but absolute error is more robust to outliers. Outliers are values that deviate sharply from the other observed data points.
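A sketch that also illustrates the robustness claim (the function name `mae` is ours): an outlier error of 10 raises MAE only linearly, whereas squaring the same errors would inflate the average to 34.

```python
import numpy as np

def mae(y_true, y_pred):
    # Average absolute difference between actual and predicted values
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y_true - y_pred))

# Errors of 1, 1 and 10: MAE = (1 + 1 + 10) / 3 = 4.0,
# while the mean of the squared errors would be (1 + 1 + 100) / 3 = 34.0
print(mae([0.0, 0.0, 0.0], [1.0, 1.0, 10.0]))  # 4.0
```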

 

 Binary Classification Loss Functions 

The name is fairly self-explanatory: binary classification means assigning an object to one of two classes. The assignment is based on a rule applied to the input feature vector.

 

  •  Binary Cross Entropy Loss 

Generally, we use entropy to mean disorder or uncertainty; it is measured for a random variable X with probability distribution p(X). Binary cross-entropy loss compares the predicted probability of the positive class with the true 0/1 label, and it penalizes confident but wrong predictions heavily.
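A minimal sketch of binary cross-entropy (the function name and the clipping constant `eps` are our choices, added to avoid taking log of zero):

```python
import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    # y_true: 0/1 labels; p_pred: predicted probability of class 1
    y_true = np.asarray(y_true, float)
    p = np.clip(np.asarray(p_pred, float), eps, 1 - eps)  # avoid log(0)
    return np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

# Confident correct predictions give a small loss; confident wrong ones, a large loss
print(binary_cross_entropy([1, 0], [0.9, 0.1]))  # ≈ 0.105
print(binary_cross_entropy([1, 0], [0.1, 0.9]))  # ≈ 2.303
```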


  •  Hinge Loss 

Hinge loss is primarily used with Support Vector Machine (SVM) classifiers, which expect class labels of -1 and 1. So make sure you change the label of the 'Malignant' class in the dataset from 0 to -1.
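A sketch of the standard hinge loss (the function name is ours): correct predictions with a margin of at least 1 incur zero loss, and the loss grows linearly otherwise.

```python
import numpy as np

def hinge_loss(y_true, scores):
    # y_true must use labels -1 and 1; scores are raw classifier outputs
    y_true, scores = np.asarray(y_true, float), np.asarray(scores, float)
    return np.mean(np.maximum(0.0, 1.0 - y_true * scores))

print(hinge_loss([1, -1], [2.0, -1.5]))  # 0.0 (both correct with margin >= 1)
print(hinge_loss([1, -1], [0.3, 0.5]))   # ((1 - 0.3) + (1 + 0.5)) / 2 = 1.1
```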


Multi-Class Classification Loss Functions 

In cases where the target variable has more than two classes, such as Setosa, Versicolor, and Virginica in the Iris dataset, a multi-class classification loss function is used.

 

  •   Categorical Cross Entropy Loss 
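Categorical cross entropy generalizes binary cross entropy to more than two classes: it compares the predicted probability distribution over the classes with the one-hot encoded true label. A minimal sketch (the function name is ours, and `eps` is a clipping constant we add to avoid log of zero):

```python
import numpy as np

def categorical_cross_entropy(y_true_onehot, p_pred, eps=1e-12):
    # y_true_onehot: one-hot labels; p_pred: predicted class probabilities
    y = np.asarray(y_true_onehot, float)
    p = np.clip(np.asarray(p_pred, float), eps, 1.0)  # avoid log(0)
    return np.mean(-np.sum(y * np.log(p), axis=-1))

# Three classes (e.g. Setosa, Versicolor, Virginica); the loss is
# -log of the probability assigned to the true class
print(categorical_cross_entropy([[1, 0, 0]], [[0.7, 0.2, 0.1]]))  # ≈ 0.357
```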


  •  Kullback Leibler Divergence Loss 

Kullback-Leibler divergence loss calculates how far a given distribution is from the true distribution. It is zero when the two distributions match exactly.
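A sketch of KL divergence between two discrete distributions (the function name is ours; `eps` clipping is our addition to keep the logarithm finite):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(P || Q) = sum p * log(p / q); zero when the distributions match
    p = np.clip(np.asarray(p, float), eps, 1.0)
    q = np.clip(np.asarray(q, float), eps, 1.0)
    return np.sum(p * np.log(p / q))

print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))  # ≈ 0.368
```

Note that KL divergence is not symmetric: KL(P || Q) generally differs from KL(Q || P).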


Conclusion 

Here, we learned about the different types of loss functions in machine learning. You can also read about what a loss function is in deep learning.
