The Support Vector Machine (SVM) is a classification algorithm built on statistical learning theory, and it is well suited to data sets with high-dimensional features.
The mathematics behind SVM is fairly involved, but since SVM is such a popular research and application topic, there are plenty of good articles on CSDN that analyze it. Here are a couple I think explain it particularly well:
支持向量機通俗導論(理解SVM的3層境界) (a plain-language introduction to SVM: the three levels of understanding)
July explains it in such detail, building up layer by layer from the basics, that I don't feel the need to write a single word about SVM theory myself. Strongly recommended.
There is also a simpler, more hands-on one: 手把手教你實現SVM算法 (a step-by-step guide to implementing the SVM algorithm).
The theory behind SVM is complex, but the idea is simple. In one sentence: using some kernel function, find an optimal hyperplane in a high-dimensional space that separates the two classes of data.
Different kernels can give completely different classification results on different data sets. The kernels to choose from include the following (a small numpy sketch after the list shows what each one computes):
Linear kernel: a linear function of the form K(x, y) = x · y;
Polynomial kernel: a polynomial of the form K(x, y) = [(x · y) + 1]^d;
Radial basis function (RBF): an exponential of the form K(x, y) = exp(-|x - y|^2 / d^2);
Sigmoid kernel: the Sigmoid function discussed in the previous article.
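To make the formulas concrete, here is a minimal numpy sketch (not from the original article) that evaluates each kernel on a pair of sample vectors. The vectors and the parameter d are arbitrary illustrative values; note that the sigmoid kernel as implemented in libsvm/scikit-learn is actually tanh(gamma · x · y + r).

import numpy as np

# two arbitrary sample vectors
x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])
d = 3  # polynomial degree / RBF width parameter from the formulas above

linear = np.dot(x, y)                          # K(x, y) = x . y
poly = (np.dot(x, y) + 1) ** d                 # K(x, y) = [(x . y) + 1]^d
rbf = np.exp(-np.sum((x - y) ** 2) / d ** 2)   # K(x, y) = exp(-|x - y|^2 / d^2)
sigmoid = np.tanh(np.dot(x, y) + 1)            # libsvm form: tanh(gamma * x . y + r), here gamma = 1, r = 1

print(linear, poly, rbf, sigmoid)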
Let's reuse the data sets from the earlier articles, give the Python code directly, and look at the results.
Test 1: Height and weight data
# -*- coding: utf-8 -*-
import numpy as np
import scipy as sp
from sklearn import svm
from sklearn.cross_validation import train_test_split  # sklearn.model_selection in newer versions
import matplotlib.pyplot as plt

data = []
labels = []
# each line of data\1.txt: height weight label ('fat' or 'thin')
with open("data\\1.txt") as ifile:
    for line in ifile:
        tokens = line.strip().split(' ')
        data.append([float(tk) for tk in tokens[:-1]])
        labels.append(tokens[-1])
x = np.array(data)
labels = np.array(labels)
y = np.zeros(labels.shape)
y[labels == 'fat'] = 1
# keep all samples for training (no held-out test set)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.0)

h = .02  # step size of the mesh used to draw the decision regions
x_min, x_max = x_train[:, 0].min() - 0.1, x_train[:, 0].max() + 0.1
y_min, y_max = x_train[:, 1].min() - 1, x_train[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))

# SVM classifiers with four different kernels
# titles for the plots
titles = ['LinearSVC (linear kernel)',
          'SVC with polynomial (degree 3) kernel',
          'SVC with RBF kernel',
          'SVC with Sigmoid kernel']

clf_linear = svm.SVC(kernel='linear').fit(x, y)
# clf_linear = svm.LinearSVC().fit(x, y)
clf_poly = svm.SVC(kernel='poly', degree=3).fit(x, y)
clf_rbf = svm.SVC().fit(x, y)  # default kernel is 'rbf'
clf_sigmoid = svm.SVC(kernel='sigmoid').fit(x, y)

for i, clf in enumerate((clf_linear, clf_poly, clf_rbf, clf_sigmoid)):
    # predict every point of the mesh so the decision regions can be drawn
    answer = clf.predict(np.c_[xx.ravel(), yy.ravel()])
    print(clf)
    print(np.mean(clf.predict(x_train) == y_train))  # training accuracy

    plt.subplot(2, 2, i + 1)
    plt.subplots_adjust(wspace=0.4, hspace=0.4)

    # put the result into a color plot
    z = answer.reshape(xx.shape)
    plt.contourf(xx, yy, z, cmap=plt.cm.Paired, alpha=0.8)

    # plot also the training points
    plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train, cmap=plt.cm.Paired)
    plt.xlabel(u'身高')  # height
    plt.ylabel(u'體重')  # weight
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())
    plt.xticks(())
    plt.yticks(())
    plt.title(titles[i])

plt.show()
The results are as follows:
As you can see, on this data set the SVM with the degree-3 polynomial kernel separates the two classes best.
Test 2: Movie review sentiment
Next, let's see how SVM performs on the Cornell movie review data set (the original code is omitted; a rough sketch follows, with the reported output after it).
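The sketch below is not the author's code; it only shows one plausible way to set up the experiment with scikit-learn, assuming the reviews have already been read into a list of strings named reviews with matching 'pos'/'neg' labels in labels. The actual loading and feature extraction used in the original article may differ.

# -*- coding: utf-8 -*-
# Rough sketch only; assumes `reviews` (list of str) and `labels` (list of 'pos'/'neg')
# have already been loaded from the Cornell movie review data set.
import numpy as np
from sklearn import svm
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cross_validation import train_test_split  # sklearn.model_selection in newer versions

x = TfidfVectorizer(stop_words='english').fit_transform(reviews)
y = np.array([1 if lb == 'pos' else 0 for lb in labels])
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)

for kernel in ('linear', 'poly', 'rbf', 'sigmoid'):
    clf = svm.SVC(kernel=kernel).fit(x_train, y_train)
    print(clf)
    print(np.mean(clf.predict(x_test) == y_test))  # test-set accuracy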
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0, kernel='linear', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False)
0.814285714286
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0, kernel='poly', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False)
0.492857142857
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0, kernel='rbf', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False)
0.492857142857
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0, kernel='sigmoid', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False)
0.492857142857
Clearly, on this data set the linear kernel performs best: text features like these are high-dimensional and sparse, a setting where linear classifiers usually do well, while the other kernels with default parameters stay around chance level here.
Test 3: Circular decision boundary
Finally, let's test a case where the class boundary is a circle: points inside the circle belong to one class, points outside to the other. Let's see how SVM handles this kind of non-linear data.
The code that generates the test data is shown below:
# generate the test data
h = 0.1
x_min, x_max = -1, 1
y_min, y_max = -1, 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
x = np.c_[xx.ravel(), yy.ravel()]  # one (x, y) sample per grid point
y = (x[:, 0] * x[:, 0] + x[:, 1] * x[:, 1] < 0.8)  # True inside the circle of radius sqrt(0.8)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
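The article does not repeat the classification code for this test; presumably it is the same four SVC configurations as in Test 1, trained on x_train and scored on the 20% held-out test set. A minimal sketch of that step (continuing the script above, with numpy, svm and train_test_split already imported) might be:

# sketch of the classification step, mirroring Test 1 (not shown in the original article)
for kernel in ('linear', 'poly', 'rbf', 'sigmoid'):
    clf = svm.SVC(kernel=kernel).fit(x_train, y_train)
    print(clf)
    print(np.mean(clf.predict(x_test) == y_test))  # accuracy on the held-out test set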
The test results are as follows:
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0, kernel='linear', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False)
0.65
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0, kernel='poly', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False)
0.675
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0, kernel='rbf', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False)
0.9625
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0, kernel='sigmoid', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False)
0.65
As you can see, for this kind of boundary the RBF-kernel SVM gives a nearly perfect classification (96.25% test accuracy): the RBF kernel implicitly maps the data into a space where the circular boundary becomes separable, while the other kernels are clearly helpless here.
Original article: http://blog.csdn.net/lsldd/article/details/41581315