@EtoDemerzel 2017-11-22T12:30:47.000000Z

Machine Learning Week 8 (ex7) Review

Machine Learning · Andrew Ng


This week covers K-means clustering, applied to image compression, and principal component analysis (PCA).


1 K-means clustering

We start with two-dimensional points and use K-means to cluster them.

1.1 Implement K-means

(Figure: the K-means algorithm outline from the assignment handout)
The K-means procedure is shown above: in each iteration, every point is first reassigned to its closest centroid, and then the center of each cluster is recomputed.
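Below is a minimal sketch of this alternation (the assignment's runkMeans.m does essentially the same thing, plus plotting of the centroids' progress); findClosestCentroids and computeCentroids are the two functions implemented in the rest of this section:

    % Minimal sketch of the K-means outer loop (cf. runkMeans.m in the assignment).
    % X: m x n data matrix, initial_centroids: K x n, max_iters: number of iterations.
    K = size(initial_centroids, 1);
    centroids = initial_centroids;
    for iter = 1:max_iters
        idx = findClosestCentroids(X, centroids);   % cluster assignment step
        centroids = computeCentroids(X, idx, K);    % centroid update step
    end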

1.1.1 Finding closest centroids

For every example, we use the formula
$$c^{(i)} := \arg\min_j \left\lVert x^{(i)} - \mu_j \right\rVert^2$$
to find the centroid closest to it and record its index. If several centroids are equally close, any one of them may be chosen.
The code is as follows:

    function idx = findClosestCentroids(X, centroids)
    %FINDCLOSESTCENTROIDS computes the centroid memberships for every example
    %   idx = FINDCLOSESTCENTROIDS (X, centroids) returns the closest centroids
    %   in idx for a dataset X where each row is a single example. idx = m x 1
    %   vector of centroid assignments (i.e. each entry in range [1..K])
    %

    % Set K
    K = size(centroids, 1);

    % You need to return the following variables correctly.
    idx = zeros(size(X,1), 1);

    % ====================== YOUR CODE HERE ======================
    % Instructions: Go over every example, find its closest centroid, and store
    %               the index inside idx at the appropriate location.
    %               Concretely, idx(i) should contain the index of the centroid
    %               closest to example i. Hence, it should be a value in the
    %               range 1..K
    %
    % Note: You can use a for-loop over the examples to compute this.
    %
    for i = 1:size(X,1)
        % pdist on [example; centroids] returns all pairwise distances;
        % the first K of them are the distances from X(i,:) to each centroid.
        dist = pdist([X(i,:); centroids])(:, 1:K);
        [row, col] = find(dist == min(dist));
        idx(i) = col(1);   % if several centroids are equally close, take the first
    end
    % =============================================================
    end
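Note that pdist comes from Octave's statistics package, and indexing its result directly with (:, 1:K) works in Octave but not in MATLAB. A more portable sketch that computes the squared distances explicitly (same result, no extra package needed):

    for i = 1:size(X, 1)
        % squared Euclidean distance from example i to every centroid
        d = sum(bsxfun(@minus, centroids, X(i, :)).^2, 2);
        [~, idx(i)] = min(d);   % index of the closest centroid (ties -> first)
    end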

1.1.2 Compute centroid means

For every centroid, we use the formula
$$\mu_k := \frac{1}{\lvert C_k \rvert} \sum_{i \in C_k} x^{(i)}$$
to update it to the mean of all points assigned to that cluster (i.e. the cluster's center), where $C_k$ is the set of examples currently assigned to centroid $k$.
The code is as follows:

    function centroids = computeCentroids(X, idx, K)
    %COMPUTECENTROIDS returns the new centroids by computing the means of the
    %data points assigned to each centroid.
    %   centroids = COMPUTECENTROIDS(X, idx, K) returns the new centroids by
    %   computing the means of the data points assigned to each centroid. It is
    %   given a dataset X where each row is a single data point, a vector
    %   idx of centroid assignments (i.e. each entry in range [1..K]) for each
    %   example, and K, the number of centroids. You should return a matrix
    %   centroids, where each row of centroids is the mean of the data points
    %   assigned to it.
    %

    % Useful variables
    [m n] = size(X);

    % You need to return the following variables correctly.
    centroids = zeros(K, n);

    % ====================== YOUR CODE HERE ======================
    % Instructions: Go over every centroid and compute mean of all points that
    %               belong to it. Concretely, the row vector centroids(i, :)
    %               should contain the mean of the data points assigned to
    %               centroid i.
    %
    % Note: You can use a for-loop over the centroids to compute this.
    %
    for i = 1:K
        centroids(i,:) = mean(X(find(idx == i), :));
    end
    % =============================================================
    end

1.2 K-means on example dataset

ex7.m provides an example in which K and the initial centroids have already been set manually.

    % Settings for running K-Means
    K = 3;
    max_iters = 10;

    % For consistency, here we set centroids to specific values
    % but in practice you want to generate them automatically, such as by
    % setting them to be random examples (as can be seen in
    % kMeansInitCentroids).
    initial_centroids = [3 3; 6 2; 8 5];

As configured above, the points are clustered into K = 3 classes with 10 iterations, and the three centroids are initialized to (3, 3), (6, 2), and (8, 5).
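The clustering itself is then driven by a call along the lines of the following (this mirrors ex7.m; the last argument asks runkMeans to plot progress after each iteration):

    % Run K-Means; the fourth argument enables plotting of the iterations.
    [centroids, idx] = runkMeans(X, initial_centroids, max_iters, true);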
Running K-means produces the following plots (intermediate iterations are omitted; only the initial and final states are shown).
The initial state:
(Figure: scatter plot of the dataset with the initial centroids)

After 10 iterations:
(Figure: final cluster assignments and the trajectory of each centroid)
The three groups of points are cleanly separated into three clusters, and the plot also traces how each centroid moved during the iterations.

1.3 Random initialization

To make it easy to verify correctness, ex7.m fixes the initial centroids. In real applications, however, they should be initialized randomly.
Complete the following code:

    function centroids = kMeansInitCentroids(X, K)
    %KMEANSINITCENTROIDS This function initializes K centroids that are to be
    %used in K-Means on the dataset X
    %   centroids = KMEANSINITCENTROIDS(X, K) returns K initial centroids to be
    %   used with the K-Means on the dataset X
    %

    % You should return these values correctly
    centroids = zeros(K, size(X, 2));

    % ====================== YOUR CODE HERE ======================
    % Instructions: You should set centroids to randomly chosen examples from
    %               the dataset X
    %

    % Initialize the centroids to be random examples

    % Randomly reorder the indices of examples
    randidx = randperm(size(X, 1));
    % Take the first K examples as centroids
    centroids = X(randidx(1:K), :);

    % =============================================================
    end

This way, the initial centroids are K examples chosen at random from X.
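Because K-means can converge to a poor local optimum, the lectures recommend running it several times from different random initializations and keeping the result with the lowest distortion. A minimal sketch of that idea (the number of restarts and the inline cost computation are my own additions, not part of the assignment code):

    best_cost = Inf;
    for t = 1:50                                  % number of random restarts (assumed)
        init = kMeansInitCentroids(X, K);
        [centroids, idx] = runkMeans(X, init, max_iters, false);
        cost = mean(sum((X - centroids(idx, :)).^2, 2));   % distortion J
        if cost < best_cost
            best_cost = cost;
            best_centroids = centroids;
            best_idx = idx;
        end
    end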

1.4 Image compression with K-means

We now use K-means to compress an image.
Take a 128 × 128 image as an example: with straightforward 24-bit RGB encoding it needs 128 × 128 × 24 = 393,216 bits. Here we compress it by clustering all pixel colors into 16 classes and replacing every pixel with the color of its cluster's centroid. The image then needs only 16 × 24 bits for the 16 colors plus 4 bits per pixel, i.e. 16 × 24 + 128 × 128 × 4 = 65,920 bits.
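A sketch of the whole compression pipeline, following the structure of ex7.m (the file name bird_small.png and K = 16 are the assignment's defaults):

    A = double(imread('bird_small.png')) / 255;     % load image, scale to [0, 1]
    img_size = size(A);
    X = reshape(A, img_size(1) * img_size(2), 3);   % one row per pixel (R, G, B)

    K = 16; max_iters = 10;
    initial_centroids = kMeansInitCentroids(X, K);
    [centroids, idx] = runkMeans(X, initial_centroids, max_iters);

    % Map every pixel to the color of its centroid and reshape back into an image.
    idx = findClosestCentroids(X, centroids);
    X_recovered = centroids(idx, :);
    X_recovered = reshape(X_recovered, img_size(1), img_size(2), 3);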
Using the example image provided with the assignment, the result looks roughly like this:
(Figure: original image next to its 16-color compressed version)

1.5 (Ungraded) Use your own image

Pick any local image and resize it first in Photoshop, keeping it fairly small (otherwise the script runs very slowly). Running it gives the following result:
(Figure: my own image before and after 16-color compression)


2 Principal component analysis

We use PCA to reduce the dimensionality of the data.

2.1 Example dataset

We first reduce the two-dimensional example data to one dimension.
The scatter plot of the data:
(Figure: scatter plot of the 2D example dataset)

2.2 Implementing PCA

First compute the covariance matrix of the data, then use the svd function in Octave/MATLAB to obtain its eigenvectors.

Before doing so, the data should be preprocessed with mean normalization and feature scaling.
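A minimal sketch of that preprocessing (the assignment ships it as featureNormalize.m; this is just standard zero-mean, unit-variance scaling):

    function [X_norm, mu, sigma] = featureNormalize(X)
    % Subtract the per-feature mean and divide by the per-feature standard deviation.
    mu = mean(X);
    sigma = std(X);
    X_norm = bsxfun(@rdivide, bsxfun(@minus, X, mu), sigma);
    end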
The covariance matrix is computed as
$$\Sigma = \frac{1}{m} X^T X$$
and the svd function then yields its eigenvectors, so pca.m is completed as follows:

    function [U, S] = pca(X)
    %PCA Run principal component analysis on the dataset X
    %   [U, S, X] = pca(X) computes eigenvectors of the covariance matrix of X
    %   Returns the eigenvectors U, the eigenvalues (on diagonal) in S
    %

    % Useful values
    [m, n] = size(X);

    % You need to return the following variables correctly.
    U = zeros(n);
    S = zeros(n);

    % ====================== YOUR CODE HERE ======================
    % Instructions: You should first compute the covariance matrix. Then, you
    %               should use the "svd" function to compute the eigenvectors
    %               and eigenvalues of the covariance matrix.
    %
    % Note: When computing the covariance matrix, remember to divide by m (the
    %       number of examples).
    %
    [U, S, V] = svd(1/m * X' * X);
    % =========================================================================
    end

Plotting the computed eigenvectors over the data:

(Figure: the dataset with the two principal eigenvectors drawn from the mean)

2.3 Dimensionality reduction with PCA

We now project the high-dimensional examples onto a lower-dimensional space.

2.3.1 Projecting the data onto the principal components

projectData.m is completed as follows:

    function Z = projectData(X, U, K)
    %PROJECTDATA Computes the reduced data representation when projecting only
    %on to the top k eigenvectors
    %   Z = projectData(X, U, K) computes the projection of
    %   the normalized inputs X into the reduced dimensional space spanned by
    %   the first K columns of U. It returns the projected examples in Z.
    %

    % You need to return the following variables correctly.
    Z = zeros(size(X, 1), K);

    % ====================== YOUR CODE HERE ======================
    % Instructions: Compute the projection of the data using only the top K
    %               eigenvectors in U (first K columns).
    %               For the i-th example X(i,:), the projection on to the k-th
    %               eigenvector is given as follows:
    %                    x = X(i, :)';
    %                    projection_k = x' * U(:, k);
    %
    Ureduce = U(:, 1:K);
    Z = X * Ureduce;
    % =============================================================
    end

This projects X onto the space spanned by the first K eigenvectors.
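As an aside, the course suggests picking K so that a desired fraction of the variance is retained; the singular values returned by pca make this easy to check. A short sketch (not required by the exercise):

    % Fraction of variance retained by the first k components,
    % using the diagonal of the S matrix returned by pca().
    s = diag(S);
    retained = cumsum(s) / sum(s);
    K = find(retained >= 0.99, 1);   % smallest K that keeps 99% of the variance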

2.3.2 Reconstructing an approximation of the data

Recovering an approximation of the original high-dimensional data from the projection:

    function X_rec = recoverData(Z, U, K)
    %RECOVERDATA Recovers an approximation of the original data when using the
    %projected data
    %   X_rec = RECOVERDATA(Z, U, K) recovers an approximation the
    %   original data that has been reduced to K dimensions. It returns the
    %   approximate reconstruction in X_rec.
    %

    % You need to return the following variables correctly.
    X_rec = zeros(size(Z, 1), size(U, 1));

    % ====================== YOUR CODE HERE ======================
    % Instructions: Compute the approximation of the data by projecting back
    %               onto the original space using the top K eigenvectors in U.
    %
    %               For the i-th example Z(i,:), the (approximate)
    %               recovered data for dimension j is given as follows:
    %                    v = Z(i, :)';
    %                    recovered_j = v' * U(j, 1:K)';
    %
    %               Notice that U(j, 1:K) is a row vector.
    %
    Ureduce = U(:, 1:K);
    X_rec = Z * Ureduce';
    % =============================================================
    end

2.3.3 Visualizing the projections

(Figure: original points, their reconstructions, and the projection lines connecting them)
As the figure shows, the recovered data keeps only the information along the first eigenvector; the information in the perpendicular direction is lost.

2.4 Face image dataset

Next we apply dimensionality reduction to face images. ex7faces.mat stores a large set of 32 × 32 grayscale face images, so each example is a 1024-dimensional vector.
The first one hundred faces are shown below:

(Figure: the first 100 face images in the dataset)

2.4.1 PCA on faces

Running PCA gives the principal components; reshaping each one back into a 32 × 32 matrix lets us visualize them (only the first 36 are shown):
(Figure: the first 36 principal components, i.e. "eigenfaces")
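In ex7_pca.m this visualization is produced with roughly the following calls (displayData is a helper provided with the assignment; X holds the face images loaded from ex7faces.mat):

    [X_norm, mu, sigma] = featureNormalize(X);   % normalize the faces first
    [U, S] = pca(X_norm);                        % principal components of the face data
    displayData(U(:, 1:36)');                    % draw the first 36 components as 32x32 images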

2.4.2 Dimensionality reduction

We project the faces onto the first 100 eigenvectors and reconstruct them:
(Figure: original faces next to their reconstructions from 100 principal components)
After the reduction, the rough structure of each face is preserved, but some fine detail is lost. The takeaway is that when training a neural network for face recognition, this kind of dimensionality reduction can sometimes be used to speed things up.
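The projection and reconstruction shown above correspond to calls like these in ex7_pca.m (with K = 100):

    K = 100;
    Z = projectData(X_norm, U, K);      % 1024-dimensional faces -> 100 dimensions
    X_rec = recoverData(Z, U, K);       % approximate reconstruction in 1024 dimensions
    displayData(X_rec(1:100, :));       % draw the first 100 reconstructed faces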

2.5 Optional (Ungraded) exercise: PCA for visualization

PCA is also commonly used to visualize high-dimensional data.
In the figure below, K-means has been used to cluster points in three-dimensional space.
(Figure: 3D scatter plot of the points, colored by cluster)

Rotating the plot shows that the points lie roughly in a plane:
(Figure: the same 3D scatter plot viewed from another angle)
We therefore use PCA to project them down to two dimensions and look at the resulting scatter plot:
(Figure: 2D scatter plot of the PCA projection, colored by cluster)

This makes it much easier to see how the points were clustered.
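A sketch of that projection step, following ex7_pca.m (sel is an assumed index vector selecting the sampled pixels that were plotted in 3D, and plotDataPoints is the assignment's helper for cluster-colored scatter plots):

    % Reduce the sampled 3D pixel colors to 2D with PCA and plot them by cluster.
    [X_norm, mu, sigma] = featureNormalize(X(sel, :));
    [U, S] = pca(X_norm);
    Z = projectData(X_norm, U, 2);
    plotDataPoints(Z, idx(sel), K);      % 2D scatter colored by K-means cluster index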
