SUPPORT VECTOR MACHINE REGRESSION

SVR (Support Vector Regression) applies the same idea as SVM, but to regression problems. Let's take a few moments to understand the concept behind SVR. Given a training sample, the aim of regression is to find a function that approximates the mapping from an input domain to the real numbers. So, let's take a closer look at how SVR works in practice. Consider the two red lines as the decision boundaries and the green line as the hyperplane. In SVR, the goal is to consider only the points that lie within the decision boundaries; the hyperplane that encloses the greatest number of points is our best-fit line.

Assume the equation of the hyperplane is Y = wx + b. Then the equations of the decision boundaries become wx + b = +a and wx + b = -a. Thus, any point that satisfies our SVR should satisfy -a < Y - (wx + b) < +a. The key goal here is to choose decision boundaries at distance 'a' from the original hyperplane such that the data points closest to the hyperplane, the support vectors, fall within them. As a result, we only consider points that are inside the decision boundaries and have the lowest error, i.e., those within the margin of tolerance. This results in a better-fitting model.
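To make the margin of tolerance concrete, here is a minimal sketch in scikit-learn, where the epsilon parameter of SVR plays the role of 'a'; the toy data and the epsilon value are illustrative assumptions, not part of the original example:

    import numpy as np
    from sklearn.svm import SVR

    # Toy data: y is roughly linear in x with some noise (illustrative)
    rng = np.random.RandomState(0)
    X = np.sort(5 * rng.rand(40, 1), axis=0)
    y = 2 * X.ravel() + 1 + 0.2 * rng.randn(40)

    # epsilon plays the role of 'a': points with |Y - (wx + b)| < epsilon
    # fall inside the margin of tolerance and incur no penalty
    model = SVR(kernel="linear", epsilon=0.2)
    model.fit(X, y)

    residuals = np.abs(y - model.predict(X))
    print((residuals < 0.2).sum(), "of", len(y), "points lie inside the tube")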

THE MAIN CODE OF SUPPORT VECTOR REGRESSION

  • from sklearn.svm import SVR
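As a minimal sketch of how this import is typically used for fitting and prediction (the data below is made up purely for illustration):

    from sklearn.svm import SVR

    X = [[1], [2], [3], [4]]    # toy feature matrix
    y = [1.5, 3.1, 4.4, 6.2]    # toy targets

    regressor = SVR(kernel="rbf")  # RBF is the default kernel
    regressor.fit(X, y)
    print(regressor.predict([[5]]))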

Full Code for Implementing Support Vector Regression (SVR)

It's time to put our coding hats on! See the code below to compare Support Vector Regression (SVR) using linear and non-linear kernels.
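The original listing is not reproduced here, so the following is a minimal sketch of such a comparison, modeled on the standard scikit-learn SVR example; the toy sine data and the hyperparameter values (C, gamma, epsilon) are illustrative assumptions:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.svm import SVR

    # Toy data: a noisy sine curve (illustrative assumption)
    rng = np.random.RandomState(42)
    X = np.sort(5 * rng.rand(80, 1), axis=0)
    y = np.sin(X).ravel()
    y[::5] += 0.5 * (0.5 - rng.rand(16))  # add noise to every 5th target

    # Fit SVR with a non-linear (RBF) kernel and a linear kernel
    svr_rbf = SVR(kernel="rbf", C=100, gamma=0.1, epsilon=0.1)
    svr_lin = SVR(kernel="linear", C=100, epsilon=0.1)
    y_rbf = svr_rbf.fit(X, y).predict(X)
    y_lin = svr_lin.fit(X, y).predict(X)

    # Plot both fits against the training data
    plt.scatter(X, y, color="darkorange", label="data")
    plt.plot(X, y_rbf, color="navy", label="RBF kernel")
    plt.plot(X, y_lin, color="c", label="Linear kernel")
    plt.xlabel("input")
    plt.ylabel("target")
    plt.title("Support Vector Regression")
    plt.legend()
    plt.show()

The RBF fit can follow the sine curve, while the linear kernel can only produce a straight line, which illustrates the difference between the two kernels.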