## Saturday, 7 November 2015

### Using SVM Classifier

Support Vector Machine (SVM) is one of the most widely used machine learning tools for classification problems. It is a supervised learning technique: the classifier learns a decision boundary that maximizes the margin between classes. In this post we are going to see how to use an SVM classifier in Python.

Our demonstration uses the digits dataset. Each sample is a 64-element feature vector extracted from an 8×8 image of a handwritten digit, and the task is to classify each sample into one of the 10 classes [0, 1, 2, 3, 4, 5, 6, 7, 8, 9].
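To get a feel for the dataset described above, here is a quick look at its shape (scikit-learn ships the digits dataset, so no download is needed):

```python
# Inspect the digits dataset: 1797 samples, 64 features each, labels 0-9.
from sklearn.datasets import load_digits

digits = load_digits()
print(digits.data.shape)           # (1797, 64)
print(digits.target.shape)         # (1797,)
print(sorted(set(digits.target)))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```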

Once again we will use the scikit-learn Python package.

Let's look at some of the parameters of our classifier that can be used to customize the classification.
We discuss only a few of the most useful parameters.
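A minimal sketch of how these parameters are passed to `SVC` (the values here are illustrative, not recommendations):

```python
# Constructing an SVC with the most commonly tuned parameters.
from sklearn.svm import SVC

clf = SVC(
    C=1.0,         # penalty for misclassified points; larger C -> tighter fit to training data
    kernel='rbf',  # kernel type: 'linear', 'poly', 'rbf', or 'sigmoid'
    gamma=0.001,   # RBF kernel width; larger gamma -> more complex decision boundary
)
print(clf.get_params()['kernel'])
```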

#### What is Over-fitting

[Figure: a properly fitted decision boundary (figure 1) compared with an over-fitted one (figure 2)]

From the figure you can see the difference. The real question is which situation is better, and the answer is figure 1. An over-fitted boundary looks like it gives better results, but when you apply that classifier to new, unseen data you will find its performance is worse than that of a properly fitted hyperplane.

Hyperplane: a line or plane in the feature space that is used as the classification boundary.
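Over-fitting is easy to demonstrate on the digits data by making the RBF kernel width extreme. With a very large `gamma` the boundary hugs each training point, so training accuracy is perfect while accuracy on held-out data collapses (the two `gamma` values below are illustrative choices):

```python
# Over-fitting demo: compare a sensible vs an extreme RBF kernel width.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

for gamma in [0.001, 10.0]:  # sensible width vs extreme width
    clf = SVC(kernel='rbf', gamma=gamma, C=1.0).fit(X_train, y_train)
    print('gamma=%g  train=%.3f  test=%.3f'
          % (gamma, clf.score(X_train, y_train), clf.score(X_test, y_test)))
```

With `gamma=10` the training score is 1.0 but the test score is near chance; with `gamma=0.001` both scores stay high.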

## SVM Classifier [Python] [Github link]

We use the RBF kernel.

```python
__author__ = 'www.codecops.in'
# SVM Classifier
import sklearn.datasets as data
from sklearn.svm import SVC

# Get the digits dataset
data = data.load_digits()
X = data.data    # feature vectors
Y = data.target  # label vector

if __name__ == '__main__':
    # clsf = classifier
    clsf = SVC(kernel='rbf', gamma=0.001, C=0.1)  # SVM classifier
    # There are other arguments like
    # [[[C, cache_size, class_weight, coef0,
    # decision_function_shape, gamma, kernel,
    # max_iter, probability, random_state, shrinking,
    # tol, verbose]]]
    # you can pass in order to customize your classifier.

    # Train the classifier
    clsf.fit(X, Y)

    # Now predict values with the trained classifier
    prediction = clsf.predict(X)

    print('printing data for a few classifications')
    for i in [4, 50, 200, 300, 600, 700, 900, 1100, 1500, 1600, 1700, 344, 1123]:
        print('Feature:', X[i], '\tReal Digit:', Y[i], '\tPredicted Digit:', prediction[i])
        print('********************************')
    print('\n\n\n')

    # Accuracy test
    from sklearn.metrics import accuracy_score
    print('Accuracy Check', accuracy_score(Y, prediction) * 100, '%  Wow _/\\_ that is GOOD :)')
```

Output:

Result: approximately 98% accuracy.
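Note that the script above evaluates on the same data it was trained on, which tends to overstate accuracy. A held-out estimate via cross-validation (a sketch, reusing the same hyper-parameters as the script) gives a more honest number:

```python
# Cross-validated accuracy: each fold is scored on data the model never saw.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

digits = load_digits()
clf = SVC(kernel='rbf', gamma=0.001, C=0.1)
scores = cross_val_score(clf, digits.data, digits.target, cv=5)
print('mean CV accuracy: %.3f' % scores.mean())
```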
