@EtoDemerzel · 2017-11-15

Machine Learning Week 7: ex6 Review

Andrew Ng · Machine Learning



This week we use a support vector machine (SVM) to build a simple spam classifier.

1. Support Vector Machines

1.1 Example Dataset 1

ex6.m first loads the data from ex6data1.mat and plots it:

    %% =============== Part 1: Loading and Visualizing Data ================
    %  We start the exercise by first loading and visualizing the dataset.
    %  The following code will load the dataset into your environment and plot
    %  the data.
    %
    fprintf('Loading and Visualizing Data ...\n')

    % Load from ex6data1:
    % You will have X, y in your environment
    load('ex6data1.mat');

    % Plot training data
    plotData(X, y);

    fprintf('Program paused. Press enter to continue.\n');
    pause;

[Figure: scatter plot of Example Dataset 1]

Next, a linear SVM is trained with the provided svmTrain, using C = 1:

    %% ==================== Part 2: Training Linear SVM ====================
    %  The following code will train a linear SVM on the dataset and plot the
    %  decision boundary learned.
    %

    % Load from ex6data1:
    % You will have X, y in your environment
    load('ex6data1.mat');

    fprintf('\nTraining Linear SVM ...\n')

    % You should try to change the C value below and see how the decision
    % boundary varies (e.g., try C = 1000)
    C = 1;
    model = svmTrain(X, y, C, @linearKernel, 1e-3, 20);
    visualizeBoundaryLinear(X, y, model);

    fprintf('Program paused. Press enter to continue.\n');
    pause;

[Figure: linear SVM decision boundary with C = 1]
Note that with C = 1 the single cross in the upper-left corner is misclassified.

If we instead set C = 100, we obtain the following boundary:
[Figure: linear SVM decision boundary with C = 100]
Now every point in the training set is classified correctly, but the boundary clearly no longer looks like a natural fit to the data.
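
To reproduce the second plot, only the value of C needs to change before retraining; a minimal sketch, reusing svmTrain and visualizeBoundaryLinear exactly as in the listing above:

    % A larger C penalizes misclassified examples more heavily, so the SVM
    % bends the boundary to capture the outlier in the upper-left corner.
    C = 100;
    model = svmTrain(X, y, C, @linearKernel, 1e-3, 20);
    visualizeBoundaryLinear(X, y, model);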

1.2 SVM with Gaussian Kernels

1.2.1 Gaussian kernel

The Gaussian kernel function is

$$K_{\text{gaussian}}(x^{(i)}, x^{(j)}) = \exp\!\left(-\frac{\lVert x^{(i)} - x^{(j)} \rVert^2}{2\sigma^2}\right)$$

The distance between the two points can be computed directly with the pdist function, which is built into MATLAB and is also available in Octave after installing the statistics package from Octave-Forge (pkg install -forge statistics, then pkg load statistics).
The completed gaussianKernel.m is shown below; a variant that avoids pdist is sketched after the block.

    function sim = gaussianKernel(x1, x2, sigma)
    %RBFKERNEL returns a radial basis function kernel between x1 and x2
    %   sim = gaussianKernel(x1, x2) returns a gaussian kernel between x1 and x2
    %   and returns the value in sim

    % Ensure that x1 and x2 are column vectors
    x1 = x1(:); x2 = x2(:);

    % You need to return the following variables correctly.
    sim = 0;

    % ====================== YOUR CODE HERE ======================
    % Instructions: Fill in this function to return the similarity between x1
    %               and x2 computed using a Gaussian kernel with bandwidth
    %               sigma
    %
    x = [x1'; x2'];     % make x1, x2 row vectors and stack them into a matrix
    dis = pdist(x);     % pdist (statistics package) returns the Euclidean distance
    sim = exp(-dis^2 / (2 * sigma^2));
    % =============================================================

    end
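
If you prefer not to depend on the statistics package, the squared distance can be computed directly from the vectors; a minimal equivalent sketch (the function name gaussianKernelNoPdist is just illustrative):

    function sim = gaussianKernelNoPdist(x1, x2, sigma)
    % Illustrative variant of gaussianKernel that avoids pdist:
    % ||x1 - x2||^2 is computed element-wise with sum().
    x1 = x1(:); x2 = x2(:);
    sim = exp(-sum((x1 - x2) .^ 2) / (2 * sigma^2));
    end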

1.2.2 Example Dataset 2

Using the gaussianKernel function we just completed, the provided SVM implementation can now classify the data in ex6data2.mat.
ex6.m first visualizes the data:
[Figure: scatter plot of Example Dataset 2]

After training the SVM with the RBF (Gaussian) kernel, the learned decision boundary looks like this:
[Figure: Gaussian-kernel SVM decision boundary on Example Dataset 2]

1.2.3 Example Dataset 3

For this dataset we use the cross-validation set to pick the best values of C and sigma.
Both C and sigma are searched over [0.01 0.03 0.1 0.3 1 3 10 30], so 8 × 8 = 64 combinations are evaluated; the pair with the lowest cross-validation error is then used for training.
The completed dataset3Params.m:

    function [C, sigma] = dataset3Params(X, y, Xval, yval)
    %DATASET3PARAMS returns your choice of C and sigma for Part 3 of the exercise
    %where you select the optimal (C, sigma) learning parameters to use for SVM
    %with RBF kernel
    %   [C, sigma] = DATASET3PARAMS(X, y, Xval, yval) returns your choice of C and
    %   sigma. You should complete this function to return the optimal C and
    %   sigma based on a cross-validation set.

    % You need to return the following variables correctly.
    C = 1;
    sigma = 0.3;

    % ====================== YOUR CODE HERE ======================
    % Instructions: Fill in this function to return the optimal C and sigma
    %               learning parameters found using the cross validation set.
    %               You can use svmPredict to predict the labels on the cross
    %               validation set. For example,
    %                   predictions = svmPredict(model, Xval);
    %               will return the predictions on the cross validation set.
    %
    % Note: You can compute the prediction error using
    %       mean(double(predictions ~= yval))
    %
    C_values     = [0.01 0.03 0.1 0.3 1 3 10 30];
    sigma_values = [0.01 0.03 0.1 0.3 1 3 10 30];
    prediction_error = zeros(length(C_values), length(sigma_values));

    for i = 1:length(C_values)
        for j = 1:length(sigma_values)
            C = C_values(i);
            sigma = sigma_values(j);
            % Train on the training set, evaluate on the cross-validation set
            model = svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma));
            predictions = svmPredict(model, Xval);
            prediction_error(i, j) = mean(double(predictions ~= yval));
        end
    end

    % Pick the (C, sigma) pair with the smallest cross-validation error
    % (take the first minimum in case of ties)
    [~, idx] = min(prediction_error(:));
    [i, j] = ind2sub(size(prediction_error), idx);
    C = C_values(i)
    sigma = sigma_values(j)
    % Answer is C = 1 and sigma = 0.1

    % =========================================================================
    end

The pair that minimizes the cross-validation error turns out to be C = 1 and sigma = 0.1.
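
ex6.m then retrains on the training set with these parameters and plots the boundary shown below; roughly:

    % Select (C, sigma) on the cross-validation set, retrain, and visualize
    [C, sigma] = dataset3Params(X, y, Xval, yval);
    model = svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma));
    visualizeBoundary(X, y, model);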

The scatter plot of the training set (Example Dataset 3):
[Figure: scatter plot of Example Dataset 3]

With the C and sigma found above, the learned decision boundary is:
[Figure: Gaussian-kernel SVM decision boundary on Example Dataset 3]


2. Spam classification

We now use an SVM to build a spam classifier.
A label y = 1 means the email is spam, and y = 0 means it is not; each email also has to be converted into a feature vector x ∈ R^n.
The data come from the SpamAssassin Public Corpus. In this simplified exercise we ignore the email headers and classify only the body.

2.1 Preprocessing Emails

The email content first needs to be normalized: all letters are converted to lower case; all HTML tags are stripped; every URL is replaced with "httpaddr"; every email address is replaced with "emailaddr"; every number is replaced with "number"; every dollar sign ($) is replaced with "dollar"; each word is reduced to its stem; and non-word characters are removed, with punctuation stripped and tabs, newlines, and spaces collapsed into a single space.
For example, an email that originally reads:
[Figure: sample email before preprocessing]

After the preprocessing above, it becomes:
[Figure: the same email after preprocessing]

2.2 Vocabulary list

In this simplified spam classifier we keep only the most frequently used words. Rarely used words appear in only a small fraction of emails, and adding them as features could cause overfitting.
The full vocabulary is stored in vocab.txt; an excerpt:
[Figure: excerpt of vocab.txt]
These are the words that occur at least 100 times in the spam corpus, 1899 words in total. In practice, a vocabulary of 10,000 to 50,000 words is more typical.
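
The exercise provides a getVocabList helper that reads vocab.txt into a cell array of words. A minimal sketch of such a loader, assuming each line of vocab.txt contains an index followed by a word (the name loadVocab is illustrative):

    function vocabList = loadVocab(filename)
    % Illustrative loader: read "index word" pairs, keep only the words.
    n = 1899;                       % number of words in the dictionary
    vocabList = cell(n, 1);
    fid = fopen(filename);
    for i = 1:n
        fscanf(fid, '%d', 1);                 % skip the numeric index
        vocabList{i} = fscanf(fid, '%s', 1);  % read the word itself
    end
    fclose(fid);
    end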

The completed processEmail.m:

    function word_indices = processEmail(email_contents)
    %PROCESSEMAIL preprocesses the body of an email and
    %returns a list of word_indices
    %   word_indices = PROCESSEMAIL(email_contents) preprocesses
    %   the body of an email and returns a list of indices of the
    %   words contained in the email.

    % Load Vocabulary
    vocabList = getVocabList();

    % Init return value
    word_indices = [];

    % ========================== Preprocess Email ===========================
    % Find the Headers ( \n\n and remove )
    % Uncomment the following lines if you are working with raw emails with the
    % full headers
    % hdrstart = strfind(email_contents, ([char(10) char(10)]));
    % email_contents = email_contents(hdrstart(1):end);

    % Lower case
    email_contents = lower(email_contents);

    % Strip all HTML
    % Looks for any expression that starts with < and ends with >, does not
    % contain any < or > inside the tag, and replaces it with a space
    email_contents = regexprep(email_contents, '<[^<>]+>', ' ');

    % Handle Numbers
    % Look for one or more characters between 0-9
    email_contents = regexprep(email_contents, '[0-9]+', 'number');

    % Handle URLS
    % Look for strings starting with http:// or https://
    email_contents = regexprep(email_contents, ...
                               '(http|https)://[^\s]*', 'httpaddr');

    % Handle Email Addresses
    % Look for strings with @ in the middle
    email_contents = regexprep(email_contents, '[^\s]+@[^\s]+', 'emailaddr');

    % Handle $ sign
    email_contents = regexprep(email_contents, '[$]+', 'dollar');

    % ========================== Tokenize Email ===========================

    % Output the email to screen as well
    fprintf('\n==== Processed Email ====\n\n');

    % Process file
    l = 0;

    while ~isempty(email_contents)

        % Tokenize and also get rid of any punctuation
        [str, email_contents] = ...
            strtok(email_contents, ...
                   [' @$/#.-:&*+=[]?!(){},''">_<;%' char(10) char(13)]);

        % Remove any non alphanumeric characters
        str = regexprep(str, '[^a-zA-Z0-9]', '');

        % Stem the word
        % (the porterStemmer sometimes has issues, so we use a try catch block)
        try str = porterStemmer(strtrim(str));
        catch str = ''; continue;
        end;

        % Skip the word if it is too short
        if length(str) < 1
            continue;
        end

        % Look up the word in the dictionary and add to word_indices if found
        % ====================== YOUR CODE HERE ======================
        % Instructions: Fill in this function to add the index of str to
        %               word_indices if it is in the vocabulary. At this point
        %               of the code, you have a stemmed word from the email in
        %               the variable str. You should look up str in the
        %               vocabulary list (vocabList). If a match exists, you
        %               should add the index of the word to the word_indices
        %               vector. Concretely, if str = 'action', then you should
        %               look up the vocabulary list to find where in vocabList
        %               'action' appears. For example, if vocabList{18} =
        %               'action', then, you should add 18 to the word_indices
        %               vector (e.g., word_indices = [word_indices ; 18]; ).
        %
        % Note: vocabList{idx} returns the word with index idx in the
        %       vocabulary list.
        %
        % Note: You can use strcmp(str1, str2) to compare two strings (str1 and
        %       str2). It will return 1 only if the two strings are equivalent.
        %
        index = find(strcmp(vocabList, str) == 1);   % index of str in vocabList (empty if not found)
        word_indices = [word_indices; index];        % append the index to word_indices
        % =============================================================

        % Print to screen, ensuring that the output lines are not too long
        if (l + length(str) + 1) > 78
            fprintf('\n');
            l = 0;
        end
        fprintf('%s ', str);
        l = l + length(str) + 1;

    end

    % Print footer
    fprintf('\n\n=========================\n');

    end

The processed sample email looks like this:
[Figure: processed sample email output]

Extracting features from Emails

The preprocessed email is now represented as a feature vector x ∈ R^n, where each x_i ∈ {0, 1} indicates whether the i-th word of the vocabulary appears in the email. That is, every email is converted into a vector of the form

$$x = \begin{bmatrix} 0 \\ \vdots \\ 1 \\ 0 \\ \vdots \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \in \mathbb{R}^n$$

The completed emailFeatures.m:

    function x = emailFeatures(word_indices)
    %EMAILFEATURES takes in a word_indices vector and produces a feature vector
    %from the word indices
    %   x = EMAILFEATURES(word_indices) takes in a word_indices vector and
    %   produces a feature vector from the word indices.

    % Total number of words in the dictionary
    n = 1899;

    % You need to return the following variables correctly.
    x = zeros(n, 1);

    % ====================== YOUR CODE HERE ======================
    % Instructions: Fill in this function to return a feature vector for the
    %               given email (word_indices). To help make it easier to
    %               process the emails, we have already pre-processed each
    %               email and converted each word in the email into an index in
    %               a fixed dictionary (of 1899 words). The variable
    %               word_indices contains the list of indices of the words
    %               which occur in one email.
    %
    %               Concretely, if an email has the text:
    %
    %                  The quick brown fox jumped over the lazy dog.
    %
    %               Then, the word_indices vector for this text might look
    %               like:
    %
    %                   60  100   33   44   10   53   60   58   5
    %
    %               where, we have mapped each word onto a number, for example:
    %
    %                   the   -- 60
    %                   quick -- 100
    %                   ...
    %
    %              (note: the above numbers are just an example and are not the
    %               actual mappings).
    %
    %              Your task is take one such word_indices vector and construct
    %              a binary feature vector that indicates whether a particular
    %              word occurs in the email. That is, x(i) = 1 when word i
    %              is present in the email. Concretely, if the word 'the' (say,
    %              index 60) appears in the email, then x(60) = 1. The feature
    %              vector should look like:
    %
    %              x = [ 0 0 0 0 1 0 0 0 ... 0 0 0 0 1 ... 0 0 0 1 0 ..];
    %
    x(word_indices) = 1;    % mark every vocabulary word that appears in the email

    % =========================================================================
    end
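
ex6_spam.m exercises these two functions on the provided emailSample1.txt roughly as follows (readFile is the file-reading helper shipped with the exercise):

    % Preprocess a sample email, map its words to vocabulary indices, and
    % build the 1899-dimensional binary feature vector.
    file_contents = readFile('emailSample1.txt');
    word_indices  = processEmail(file_contents);
    features      = emailFeatures(word_indices);

    fprintf('Length of feature vector: %d\n', length(features));
    fprintf('Number of non-zero entries: %d\n', sum(features > 0));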

2.3 Training SVM for spam classification

The training set (spamTrain.mat) provides 4000 labeled examples of spam and non-spam email, and the test set (spamTest.mat) provides 1000 examples.
ex6_spam.m trains a linear SVM on this data and reports the classification accuracy on both the training set and the test set:

    %% =========== Part 3: Train Linear SVM for Spam Classification ========
    %  In this section, you will train a linear classifier to determine if an
    %  email is Spam or Not-Spam.

    % Load the Spam Email dataset
    % You will have X, y in your environment
    load('spamTrain.mat');

    fprintf('\nTraining Linear SVM (Spam Classification)\n')
    fprintf('(this may take 1 to 2 minutes) ...\n')

    C = 0.1;
    model = svmTrain(X, y, C, @linearKernel);

    p = svmPredict(model, X);

    fprintf('Training Accuracy: %f\n', mean(double(p == y)) * 100);

    %% ================= Part 5: Top Predictors of Spam ====================
    %  Since the model we are training is a linear SVM, we can inspect the
    %  weights learned by the model to understand better how it is determining
    %  whether an email is spam or not. The following code finds the words with
    %  the highest weights in the classifier. Informally, the classifier
    %  'thinks' that these words are the most likely indicators of spam.
    %

    % Sort the weights and obtain the vocabulary list
    [weight, idx] = sort(model.w, 'descend');
    vocabList = getVocabList();

    fprintf('\nTop predictors of spam: \n');
    for i = 1:15
        fprintf(' %-15s (%f) \n', vocabList{idx(i)}, weight(i));
    end

    fprintf('\n\n');
    fprintf('\nProgram paused. Press enter to continue.\n');
    pause;

[Figure: training and test accuracy printed by ex6_spam.m]
As shown above, the accuracy reaches 99.8% on the training set and 98.9% on the test set.
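
The test accuracy quoted above comes from a part of ex6_spam.m that is not shown in the listing; it evaluates the trained model on spamTest.mat, roughly as follows:

    % Evaluate the trained linear SVM on the held-out test set
    load('spamTest.mat');       % provides Xtest, ytest
    p = svmPredict(model, Xtest);
    fprintf('Test Accuracy: %f\n', mean(double(p == ytest)) * 100);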

2.4 Top predictors for spam

[Figure: the 15 words with the largest positive weights]
After training, these are the words whose presence most strongly indicates that an email is spam.

2.5 Optional (ungraded) exercise: Try your own emails

Take the spam example in spamSample1.txt and test it in ex6_spam.m:
[Figure: contents of spamSample1.txt]
Result:
[Figure: classifier output for spamSample1.txt]
It is classified as spam, which is correct.
Now take a non-spam email (here emailSample2.txt):
[Figure: contents of emailSample2.txt]
Result:
[Figure: classifier output for emailSample2.txt]
It is classified as non-spam, which is also correct.
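
Classifying a single email file in ex6_spam.m looks roughly like this (readFile is the provided helper; an output of 1 means spam, 0 means non-spam):

    % Classify one email file with the trained model
    filename      = 'spamSample1.txt';
    file_contents = readFile(filename);
    x = emailFeatures(processEmail(file_contents));
    p = svmPredict(model, x');      % one example per row

    fprintf('Processed %s -> Spam Classification: %d\n', filename, p);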

2.6 Optional (ungraded) exercise: Build your own dataset

You can download the full dataset from the SpamAssassin Public Corpus and train a classifier yourself, build a new vocabulary list from the most frequent words in your own dataset, or use a highly optimized SVM package such as LIBSVM.
