@mShuaiZhao 2018-02-04T06:25:40Z

Week 01. Ng's Sequence Models Course, Homework 1

2018.02 Coursera


Building your Recurrent Neural Network - Step by Step

Welcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy.

Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future.

Notation:
- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer.
- Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.

We assume that you are already familiar with numpy and/or have completed the previous courses of the specialization. Let's get started!

```python
import numpy as np
from rnn_utils import *
```

1 - Forward propagation for the basic Recurrent Neural Network

Later this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$.

[Image: RNN.png]

Figure 1: Basic RNN model

Here's how you can implement an RNN:

Steps:
1. Implement the calculations needed for one time-step of the RNN.
2. Implement a loop over time-steps in order to process all the inputs, one at a time.

Let's go!

1.1 - RNN cell

A recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell.

[Image: rnn_step_forward.png]

Figure 2: Basic RNN cell. Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t-1 \rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$ which is given to the next RNN cell and also used to predict $\hat{y}^{\langle t \rangle}$.

Exercise: Implement the RNN-cell described in Figure (2).

Instructions:
1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.
2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = \mathrm{softmax}(W_{ya} a^{\langle t \rangle} + b_y)$. We provided you a function: softmax.
3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in cache
4. Return $a^{\langle t \rangle}$, $\hat{y}^{\langle t \rangle}$ and cache

We will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ will have dimension $(n_x, m)$, and $a^{\langle t \rangle}$ will have dimension $(n_a, m)$.
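
As a quick dimension check (added here for clarity, not part of the original notebook), every term in the two formulas above broadcasts to a matrix with $m$ columns; the biases $b_a$ and $b_y$ are broadcast across the $m$ examples by numpy:

$$\underbrace{W_{aa}}_{(n_a,\,n_a)}\underbrace{a^{\langle t-1 \rangle}}_{(n_a,\,m)} + \underbrace{W_{ax}}_{(n_a,\,n_x)}\underbrace{x^{\langle t \rangle}}_{(n_x,\,m)} + \underbrace{b_a}_{(n_a,\,1)} \;\Rightarrow\; a^{\langle t \rangle} \in \mathbb{R}^{n_a \times m}, \qquad \underbrace{W_{ya}}_{(n_y,\,n_a)}\underbrace{a^{\langle t \rangle}}_{(n_a,\,m)} + \underbrace{b_y}_{(n_y,\,1)} \;\Rightarrow\; \hat{y}^{\langle t \rangle} \in \mathbb{R}^{n_y \times m}.$$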

Starter code

```python
# GRADED FUNCTION: rnn_cell_forward

def rnn_cell_forward(xt, a_prev, parameters):
    """
    Implements a single forward step of the RNN-cell as described in Figure (2)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
        ba -- Bias, numpy array of shape (n_a, 1)
        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
    """

    # Retrieve parameters from "parameters"
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]

    ### START CODE HERE ### (≈2 lines)
    # compute next activation state using the formula given above
    a_next = None
    # compute output of the current cell using the formula given above
    yt_pred = None
    ### END CODE HERE ###

    # store values you need for backward propagation in cache
    cache = (a_next, a_prev, xt, parameters)

    return a_next, yt_pred, cache
```

Completed code

```python
# GRADED FUNCTION: rnn_cell_forward

def rnn_cell_forward(xt, a_prev, parameters):
    """
    Implements a single forward step of the RNN-cell as described in Figure (2)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
        ba -- Bias, numpy array of shape (n_a, 1)
        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
    """

    # Retrieve parameters from "parameters"
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]

    ### START CODE HERE ### (≈2 lines)
    # compute next activation state using the formula given above
    a_next = np.tanh(np.matmul(Wax, xt) + np.matmul(Waa, a_prev) + ba)
    # compute output of the current cell using the formula given above
    yt_pred = softmax(np.matmul(Wya, a_next) + by)
    ### END CODE HERE ###

    # store values you need for backward propagation in cache
    cache = (a_next, a_prev, xt, parameters)

    return a_next, yt_pred, cache
```

Test code

```python
np.random.seed(1)
xt = np.random.randn(3, 10)
a_prev = np.random.randn(5, 10)
Waa = np.random.randn(5, 5)
Wax = np.random.randn(5, 3)
Wya = np.random.randn(2, 5)
ba = np.random.randn(5, 1)
by = np.random.randn(2, 1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}

a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)
```

Test output

```
a_next[4] =  [ 0.59584544  0.18141802  0.61311866  0.99808218  0.85016201  0.99980978
 -0.18887155  0.99815551  0.6531151   0.82872037]
a_next.shape =  (5, 10)
yt_pred[1] = [ 0.9888161   0.01682021  0.21140899  0.36817467  0.98988387  0.88945212
  0.36920224  0.9966312   0.9982559   0.17746526]
yt_pred.shape =  (2, 10)
```

1.2 - RNN forward pass

You can see an RNN as the repetition of the cell you've just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time-step's input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time-step.

[Image: rnn (1).png]

Figure 3: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, \ldots, x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, \ldots, y^{\langle T_x \rangle})$.

Exercise: Code the forward propagation of the RNN described in Figure (3).

Instructions:
1. Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.
2. Initialize the "next" hidden state as $a_0$ (initial hidden state).
3. Start looping over each time step, your incremental index is $t$:
- Update the "next" hidden state and the cache by running rnn_cell_forward
- Store the "next" hidden state in $a$ ($t^{th}$ position)
- Store the prediction in y
- Add the cache to the list of caches
4. Return $a$, $y$ and caches

Completed code

```python
# GRADED FUNCTION: rnn_forward

def rnn_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network described in Figure (3).

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
        ba -- Bias, numpy array of shape (n_a, 1)
        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of caches, x)
    """

    # Initialize "caches" which will contain the list of all caches
    caches = []

    # Retrieve dimensions from shapes of x and Wya
    n_x, m, T_x = x.shape
    n_y, n_a = parameters["Wya"].shape

    ### START CODE HERE ###
    # initialize "a" and "y" with zeros (≈2 lines)
    a = np.zeros((n_a, m, T_x))
    y_pred = np.zeros((n_y, m, T_x))

    # Initialize a_next (≈1 line)
    a_next = a0

    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, compute the prediction, get the cache (≈1 line)
        a_next, yt_pred, cache = rnn_cell_forward(x[:, :, t], a_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:, :, t] = a_next
        # Save the value of the prediction in y (≈1 line)
        y_pred[:, :, t] = yt_pred
        # Append "cache" to "caches" (≈1 line)
        caches.append(cache)
    ### END CODE HERE ###

    # store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y_pred, caches
```

Test code and output

```python
np.random.seed(1)
x = np.random.randn(3, 10, 4)
a0 = np.random.randn(5, 10)
Waa = np.random.randn(5, 5)
Wax = np.random.randn(5, 3)
Wya = np.random.randn(2, 5)
ba = np.random.randn(5, 1)
by = np.random.randn(2, 1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}

a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))
```

```
a[4][1] =  [-0.99999375  0.77911235 -0.99861469 -0.99833267]
a.shape =  (5, 10, 4)
y_pred[1][3] = [ 0.79560373  0.86224861  0.11118257  0.81515947]
y_pred.shape =  (2, 10, 4)
caches[1][1][3] = [-1.1425182  -0.34934272 -0.20889423  0.58662319]
len(caches) =  2
```

Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. This will work well enough for some applications, but it suffers from vanishing gradient problems. So it works best when each output $\hat{y}^{\langle t \rangle}$ can be estimated using mainly "local" context (meaning information from inputs $x^{\langle t' \rangle}$ where $t'$ is not too far from $t$).
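
A one-line way to see where the vanishing-gradient problem comes from (added for clarity; this derivation is not in the original notebook): by the chain rule, the influence of an early hidden state on a much later one is a product of many Jacobians,

$$\frac{\partial a^{\langle T \rangle}}{\partial a^{\langle 1 \rangle}} = \prod_{t=2}^{T} \frac{\partial a^{\langle t \rangle}}{\partial a^{\langle t-1 \rangle}}, \qquad \frac{\partial a^{\langle t \rangle}}{\partial a^{\langle t-1 \rangle}} = \mathrm{diag}\!\left(1 - \left(a^{\langle t \rangle}\right)^{2}\right) W_{aa},$$

and when these factors have norm smaller than 1 the product shrinks roughly exponentially with the gap $T$, so gradients from distant outputs carry almost no signal back to early time-steps.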

In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps.

2 - Long Short-Term Memory (LSTM) network

The following figure shows the operations of an LSTM cell.

[Image: LSTM.png]

Figure 4: LSTM-cell. This tracks and updates a "cell state" or memory variable $c^{\langle t \rangle}$ at every time-step, which can be different from $a^{\langle t \rangle}$.

Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input with $T_x$ time-steps.

About the gates

- Forget gate

For the sake of this illustration, let's assume we are reading words in a piece of text, and want to use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need to find a way to get rid of our previously stored memory value of the singular/plural state. In an LSTM, the forget gate lets us do this:

$$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)\tag{1}$$

Here, $W_f$ are weights that govern the forget gate's behavior. We concatenate $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$. The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. So if one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0) then it means that the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$. If one of the values is 1, then it will keep the information.

- Update gate

Once we forget that the subject being discussed is singular, we need to find a way to update it to reflect that the new subject is now plural. Here is the formula for the update gate:

$$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_u)\tag{2}$$

Similar to the forget gate, $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. This will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$, in order to compute $c^{\langle t \rangle}$.

- Updating the cell

To update the new subject we need to create a new vector of numbers that we can add to our previous cell state. The equation we use is:

$$\tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)\tag{3}$$

Finally, the new cell state is:

$$c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle} * c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} * \tilde{c}^{\langle t \rangle}\tag{4}$$

- Output gate

To decide which outputs we will use, we will use the following two formulas:

$$\Gamma_o^{\langle t \rangle} = \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)\tag{5}$$

$$a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle} * \tanh(c^{\langle t \rangle})\tag{6}$$

Where in equation 5 you decide what to output using a sigmoid function, and in equation 6 you multiply that by the $\tanh$ of the new cell state $c^{\langle t \rangle}$.
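
To make equations (1)-(6) concrete before writing the graded function, here is a minimal, self-contained numpy sketch of a single LSTM step on toy shapes (this block is only illustrative and not part of the original notebook; the sigmoid helper is defined inline rather than imported from rnn_utils, and the variable names simply mirror the graded code below):

```python
import numpy as np

np.random.seed(0)
n_a, n_x, m = 2, 3, 1                      # toy dimensions: hidden size, input size, batch size

def sigmoid(z):                            # inline helper, just for this sketch
    return 1 / (1 + np.exp(-z))

a_prev = np.random.randn(n_a, m)           # previous hidden state
c_prev = np.random.randn(n_a, m)           # previous cell state
xt = np.random.randn(n_x, m)               # current input
concat = np.vstack((a_prev, xt))           # [a_prev; xt], shape (n_a + n_x, m)

Wf = np.random.randn(n_a, n_a + n_x); bf = np.random.randn(n_a, 1)   # forget gate
Wi = np.random.randn(n_a, n_a + n_x); bi = np.random.randn(n_a, 1)   # update gate
Wc = np.random.randn(n_a, n_a + n_x); bc = np.random.randn(n_a, 1)   # candidate value
Wo = np.random.randn(n_a, n_a + n_x); bo = np.random.randn(n_a, 1)   # output gate

ft = sigmoid(Wf @ concat + bf)             # eq. (1), values in (0, 1)
it = sigmoid(Wi @ concat + bi)             # eq. (2), values in (0, 1)
cct = np.tanh(Wc @ concat + bc)            # eq. (3), candidate c tilde
c_next = ft * c_prev + it * cct            # eq. (4), new cell state
ot = sigmoid(Wo @ concat + bo)             # eq. (5), output gate
a_next = ot * np.tanh(c_next)              # eq. (6), new hidden state

print(ft.shape, c_next.shape, a_next.shape)   # all (n_a, m)
```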

2.1 - LSTM cell

Exercise: Implement the LSTM cell described in Figure (4).

Instructions:
1. Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$
2. Compute all the formulas 1-6. You can use sigmoid() (provided) and np.tanh().
3. Compute the prediction $y^{\langle t \rangle}$. You can use softmax() (provided).

Completed code

```python
# GRADED FUNCTION: lstm_cell_forward

def lstm_cell_forward(xt, a_prev, c_prev, parameters):
    """
    Implement a single forward step of the LSTM-cell as described in Figure (4)

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m).
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
        Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
        bi -- Bias of the update gate, numpy array of shape (n_a, 1)
        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
        bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
        Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
        bo -- Bias of the output gate, numpy array of shape (n_a, 1)
        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    c_next -- next memory state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)

    Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
    c stands for the memory value
    """

    # Retrieve parameters from "parameters"
    Wf = parameters["Wf"]
    bf = parameters["bf"]
    Wi = parameters["Wi"]
    bi = parameters["bi"]
    Wc = parameters["Wc"]
    bc = parameters["bc"]
    Wo = parameters["Wo"]
    bo = parameters["bo"]
    Wy = parameters["Wy"]
    by = parameters["by"]

    # Retrieve dimensions from shapes of xt and Wy
    n_x, m = xt.shape
    n_y, n_a = Wy.shape

    ### START CODE HERE ###
    # Concatenate a_prev and xt (≈3 lines)
    # concat = np.zeros((n_a + n_x, m))
    # concat[:n_a, :] = a_prev
    # concat[n_a:, :] = xt
    concat = np.vstack((a_prev, xt))

    # Compute values for ft, it, cct, c_next, ot, a_next using the formulas given in figure (4) (≈6 lines)
    ft = sigmoid(np.matmul(Wf, concat) + bf)
    it = sigmoid(np.matmul(Wi, concat) + bi)
    cct = np.tanh(np.matmul(Wc, concat) + bc)
    c_next = ft * c_prev + it * cct
    ot = sigmoid(np.matmul(Wo, concat) + bo)
    a_next = ot * np.tanh(c_next)

    # Compute prediction of the LSTM cell (≈1 line)
    yt_pred = softmax(np.matmul(Wy, a_next) + by)
    ### END CODE HERE ###

    # store values needed for backward propagation in cache
    cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)

    return a_next, c_next, yt_pred, cache
```

Test code

```python
np.random.seed(1)
xt = np.random.randn(3, 10)
a_prev = np.random.randn(5, 10)
c_prev = np.random.randn(5, 10)
Wf = np.random.randn(5, 5 + 3)
bf = np.random.randn(5, 1)
Wi = np.random.randn(5, 5 + 3)
bi = np.random.randn(5, 1)
Wo = np.random.randn(5, 5 + 3)
bo = np.random.randn(5, 1)
Wc = np.random.randn(5, 5 + 3)
bc = np.random.randn(5, 1)
Wy = np.random.randn(2, 5)
by = np.random.randn(2, 1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy,
              "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}

a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("c_next[2] = ", c_next[2])
print("c_next.shape = ", c_next.shape)
print("yt[1] =", yt[1])
print("yt.shape = ", yt.shape)
print("cache[1][3] =", cache[1][3])
print("len(cache) = ", len(cache))
```

Test output

```
a_next[4] =  [-0.66408471  0.0036921   0.02088357  0.22834167 -0.85575339  0.00138482
  0.76566531  0.34631421 -0.00215674  0.43827275]
a_next.shape =  (5, 10)
c_next[2] =  [ 0.63267805  1.00570849  0.35504474  0.20690913 -1.64566718  0.11832942
  0.76449811 -0.0981561  -0.74348425 -0.26810932]
c_next.shape =  (5, 10)
yt[1] = [ 0.79913913  0.15986619  0.22412122  0.15606108  0.97057211  0.31146381
  0.00943007  0.12666353  0.39380172  0.07828381]
yt.shape =  (2, 10)
cache[1][3] = [-0.16263996  1.03729328  0.72938082 -0.54101719  0.02752074 -0.30821874
  0.07651101 -1.03752894  1.41219977 -0.37647422]
len(cache) =  10
```

2.2 - Forward pass for LSTM

Now that you have implemented one step of an LSTM, you can iterate it inside a for-loop to process a sequence of $T_x$ inputs.

[Image: LSTM_rnn.png]

Figure 4: LSTM over multiple time-steps.

Exercise: Implement lstm_forward() to run an LSTM over $T_x$ time-steps.

Note: $c^{\langle 0 \rangle}$ is initialized with zeros.

Completed code

```python
# GRADED FUNCTION: lstm_forward

def lstm_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (4).

    Arguments:
    x -- Input data for every time-step, of shape (n_x, m, T_x).
    a0 -- Initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
        Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
        bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
        Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
        bi -- Bias of the update gate, numpy array of shape (n_a, 1)
        Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
        bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
        Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
        bo -- Bias of the output gate, numpy array of shape (n_a, 1)
        Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)

    Returns:
    a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
    """

    # Initialize "caches", which will track the list of all the caches
    caches = []

    ### START CODE HERE ###
    # Retrieve dimensions from shapes of x and Wy (≈2 lines)
    Wy = parameters["Wy"]
    n_x, m, T_x = x.shape
    n_y, n_a = Wy.shape

    # initialize "a", "c" and "y" with zeros (≈3 lines)
    a = np.zeros((n_a, m, T_x))
    c = np.zeros((n_a, m, T_x))
    y = np.zeros((n_y, m, T_x))

    # Initialize a_next and c_next (≈2 lines)
    a_next = a0
    c_next = np.zeros((n_a, m))

    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
        a_next, c_next, yt, cache = lstm_cell_forward(x[:, :, t], a_next, c_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:, :, t] = a_next
        # Save the value of the prediction in y (≈1 line)
        y[:, :, t] = yt
        # Save the value of the next cell state (≈1 line)
        c[:, :, t] = c_next
        # Append the cache into caches (≈1 line)
        caches.append(cache)
    ### END CODE HERE ###

    # store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y, c, caches
```

Test code and output

```python
np.random.seed(1)
x = np.random.randn(3, 10, 7)
a0 = np.random.randn(5, 10)
Wf = np.random.randn(5, 5 + 3)
bf = np.random.randn(5, 1)
Wi = np.random.randn(5, 5 + 3)
bi = np.random.randn(5, 1)
Wo = np.random.randn(5, 5 + 3)
bo = np.random.randn(5, 1)
Wc = np.random.randn(5, 5 + 3)
bc = np.random.randn(5, 1)
Wy = np.random.randn(2, 5)
by = np.random.randn(2, 1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy,
              "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}

a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1][1] =", caches[1][1][1])
print("c[1][2][1]", c[1][2][1])
print("len(caches) = ", len(caches))
```

```
a[4][3][6] =  0.17863063279
a.shape =  (5, 10, 7)
y[1][4][3] = 0.947005889841
y.shape =  (2, 10, 7)
caches[1][1][1] = [ 0.82797464  0.23009474  0.76201118 -0.22232814 -0.20075807  0.18656139
  0.41005165]
c[1][2][1] -0.634513645887
len(caches) =  2
```

P.S. The output above differs from the reference answer below, because the version I submitted initialized a_next with zeros instead of a0 (with a_next = a0, as in the code above, the reference values should be reproduced). The submission still received full marks.

Reference answer

Expected Output:

```
a[4][3][6] = 0.172117767533
a.shape = (5, 10, 7)
y[1][4][3] = 0.95087346185
y.shape = (2, 10, 7)
caches[1][1][1] = [ 0.82797464  0.23009474  0.76201118 -0.22232814 -0.20075807  0.18656139  0.41005165]
c[1][2][1] = -0.855544916718
len(caches) = 2
```

Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance.

The rest of this notebook is optional, and will not be graded.

3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)

In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If however you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook.

In an earlier course, when you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost in order to update the parameters. Similarly, in recurrent neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below.

3.1 - Basic RNN backward pass

We will start by computing the backward pass for the basic RNN-cell.

[Image: rnn_cell_backprop.png]

Figure 5: RNN-cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function backpropagates through the RNN by following the chain rule from calculus. The chain rule is also used to calculate $\left(\frac{\partial J}{\partial W_{ax}}, \frac{\partial J}{\partial W_{aa}}, \frac{\partial J}{\partial b_a}\right)$ to update the parameters $(W_{ax}, W_{aa}, b_a)$.

Deriving the one step backward functions:

To compute the rnn_cell_backward you need to compute the following equations. It is a good exercise to derive them by hand.

The derivative of $\tanh$ is $1 - \tanh(x)^2$. You can find the complete proof here. Note that: $\mathrm{sech}(x)^2 = 1 - \tanh(x)^2$.
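
As a quick numerical spot-check of this identity (a small sketch added for illustration, not part of the assignment), a centered finite difference of $\tanh$ matches $1 - \tanh(x)^2$ up to floating-point noise:

```python
import numpy as np

x = np.linspace(-3, 3, 7)
eps = 1e-6
numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)  # centered finite difference
analytic = 1 - np.tanh(x) ** 2                               # claimed derivative
print(np.max(np.abs(numeric - analytic)))                    # roughly 1e-10 or smaller
```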

Similarly for $\frac{\partial a^{\langle t \rangle}}{\partial W_{ax}}$, $\frac{\partial a^{\langle t \rangle}}{\partial W_{aa}}$ and $\frac{\partial a^{\langle t \rangle}}{\partial b_a}$, the derivative of $\tanh(u)$ is $(1 - \tanh(u)^2)\,du$.

The final two equations also follow the same rule and are derived using the $\tanh$ derivative. Note that the arrangement is done in a way to get the same dimensions to match.
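
For reference, here is a hedged sketch of what rnn_cell_backward could look like based on these derivatives; it assumes the cache layout (a_next, a_prev, xt, parameters) produced by rnn_cell_forward above, and the variable names (da_next, gradients) are illustrative rather than the notebook's official solution:

```python
import numpy as np

def rnn_cell_backward(da_next, cache):
    """Sketch of one backward step for the basic RNN cell (illustrative, ungraded)."""
    (a_next, a_prev, xt, parameters) = cache
    Wax, Waa = parameters["Wax"], parameters["Waa"]

    # Backprop through tanh: d(tanh(u)) = (1 - tanh(u)^2) du, with tanh(u) = a_next
    dtanh = (1 - a_next ** 2) * da_next

    # Gradients with respect to the inputs and parameters of this cell
    dxt = np.matmul(Wax.T, dtanh)                   # shape (n_x, m)
    dWax = np.matmul(dtanh, xt.T)                   # shape (n_a, n_x)
    da_prev = np.matmul(Waa.T, dtanh)               # shape (n_a, m)
    dWaa = np.matmul(dtanh, a_prev.T)               # shape (n_a, n_a)
    dba = np.sum(dtanh, axis=1, keepdims=True)      # sum over the m examples, shape (n_a, 1)

    return {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
```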
