What are the meanings of the variables which get from RNN #68

zxzStar opened this issue Sep 27, 2016 · 0 comments

Hi Aymeric Damien, thank you so much for these examples; they are great and helpful.
I used your RNN (recurrent_network.py) to train on my dataset and got a good result. I retrieve all the variables like this:

import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell

def RNN(x, weights, biases):
    # Reshape (batch_size, n_steps, n_input) into a length-n_steps list
    # of (batch_size, n_input) tensors, as rnn.rnn expects.
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, n_input])
    x = tf.split(0, n_steps, x)
    lstm_cell = rnn_cell.LSTMCell(n_hidden, use_peepholes=False, forget_bias=1.0)
    outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
    # Linear projection of the last time step's output onto the classes.
    return tf.matmul(outputs[-1], weights['out']) + biases['out']

# Collect every variable in the graph and evaluate it in the session.
all_variables = tf.get_collection(tf.GraphKeys.VARIABLES)

all_variable = sess.run(all_variables)
print all_variable

Then I get 14 arrays, which are all the variables of the RNN, but I don't know what they mean. Could you help me? Thanks a lot!

My network parameters:

n_input = 140 
n_hidden = 32 
n_classes = 2 

The dimensions of the 14 variables are as follows:

[0]  32*2      (hidden -> output?)
[1]  2
[2]  172*128   ((n_input + n_hidden) * (n_hidden * 4))
[3]  128
[4]  1
[5]  1
[6]  32*2
[7]  32*2
[8]  2
[9]  2
[10] 172*128
[11] 172*128
[12] 128
[13] 128
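For reference, the four trainable shapes above can be derived from the network parameters alone. The following is a minimal sketch (not from the original issue): the gate-concatenation layout is standard for TensorFlow's LSTMCell, and the Adam-optimizer part is an assumption on my side about the training script, offered only because it happens to account for exactly 14 variables.

```python
# Hedged sketch: derive the expected variable shapes from the
# network parameters in the question.
n_input = 140
n_hidden = 32
n_classes = 2

# LSTM kernel: the input and previous hidden state are concatenated,
# and all four gates are computed in a single matmul, so the kernel is
# (n_input + n_hidden) x (4 * n_hidden).
lstm_kernel = (n_input + n_hidden, 4 * n_hidden)   # (172, 128)
lstm_bias = (4 * n_hidden,)                        # (128,)

# Output projection from the last hidden state onto the classes.
out_weights = (n_hidden, n_classes)                # (32, 2)
out_bias = (n_classes,)                            # (2,)

trainables = [out_weights, out_bias, lstm_kernel, lstm_bias]

# Assumption: if training uses an Adam-style optimizer, each trainable
# variable gets two slot variables (m and v) of the same shape, plus
# two scalar accumulators (beta1_power, beta2_power).
total = len(trainables) + 2 * len(trainables) + 2
print(total)  # 14
```

Under that assumption, entries [4] and [5] (both scalars) would be the two Adam accumulators, and [6] through [13] would be the m/v slot pairs mirroring the four trainable shapes, which matches the duplicated shapes in the list above.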

Thanks a lot!
