New posts in deep-learning

LSTM Autoencoder
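
A minimal sketch of the usual encoder/RepeatVector/decoder pattern for a sequence autoencoder in Keras; the timestep, feature, and layer sizes are illustrative assumptions, not taken from the post.

```python
import numpy as np
from tensorflow.keras import layers, models

timesteps, features = 10, 3
inputs = layers.Input(shape=(timesteps, features))
encoded = layers.LSTM(16)(inputs)                     # compress the whole sequence to one vector
repeated = layers.RepeatVector(timesteps)(encoded)    # repeat that vector once per output step
decoded = layers.LSTM(16, return_sequences=True)(repeated)
outputs = layers.TimeDistributed(layers.Dense(features))(decoded)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(64, timesteps, features).astype("float32")
autoencoder.fit(x, x, epochs=2, batch_size=16)        # target is the input itself
```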

Why input is scaled in tf.nn.dropout in tensorflow?
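
The scaling is "inverted dropout": kept activations are multiplied by 1/(1 - rate) during training so the expected value of each unit matches what the layer produces at inference time. A quick numeric check (TF2 eager, rate instead of keep_prob):

```python
import tensorflow as tf

tf.random.set_seed(0)
x = tf.ones([1, 10])
y = tf.nn.dropout(x, rate=0.5)    # dropped entries become 0, the rest become 1/(1-0.5) = 2.0
print(y.numpy())                  # mixture of 0.0 and 2.0
print(tf.reduce_mean(y).numpy())  # stays close to 1.0 on average
```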

What is the difference between an Embedding Layer and a Dense Layer?
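
An Embedding layer is a lookup table from integer ids to dense vectors, which is mathematically the same as one-hot encoding followed by a bias-free Dense layer, just without materialising the one-hot matrix. A small sketch verifying the equivalence (sizes are arbitrary assumptions):

```python
import numpy as np
import tensorflow as tf

vocab_size, dim = 5, 3
emb = tf.keras.layers.Embedding(vocab_size, dim)
dense = tf.keras.layers.Dense(dim, use_bias=False)

ids = tf.constant([[0, 2, 4]])
out_emb = emb(ids)                                      # direct lookup, shape (1, 3, dim)

dense.build((None, vocab_size))
dense.set_weights(emb.get_weights())                    # reuse the same weight matrix
out_dense = dense(tf.one_hot(ids, vocab_size))          # one-hot @ weights, shape (1, 3, dim)

print(np.allclose(out_emb.numpy(), out_dense.numpy()))  # True
```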

How to calculate prediction uncertainty using Keras?
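
One widely used approach for this is Monte Carlo dropout: keep dropout active at prediction time and treat the spread of repeated forward passes as an uncertainty estimate. A rough sketch, assuming `model` is a Keras model that contains Dropout layers:

```python
import numpy as np

def mc_predict(model, x, n_samples=50):
    # training=True keeps Dropout active, so each pass gives a slightly different output
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)   # mean prediction and per-output spread
```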

TensorFlow ValueError: Cannot feed value of shape (64, 64, 3) for Tensor u'Placeholder:0', which has shape '(?, 64, 64, 3)'
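
This error usually means a single image was fed where a batch is expected: the placeholder's leading `?` is the batch dimension. Adding a leading axis is the typical fix (placeholder name `x` is an assumption):

```python
import numpy as np

img = np.zeros((64, 64, 3), dtype=np.float32)
batch = np.expand_dims(img, axis=0)        # shape becomes (1, 64, 64, 3)
# sess.run(output, feed_dict={x: batch})   # x is the (?, 64, 64, 3) placeholder
```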

Keras - stateful vs stateless LSTMs

What is the use of train_on_batch() in keras?

What is the difference between Keras model.evaluate() and model.predict()?
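
In short: `evaluate()` needs labels and returns scalar loss/metric values, while `predict()` needs only inputs and returns the model's raw outputs. A tiny self-contained sketch with a throwaway classifier:

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.Input(shape=(4,)),
                          keras.layers.Dense(2, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

x = np.random.rand(8, 4).astype("float32")
y = np.random.randint(0, 2, size=(8,))

loss, acc = model.evaluate(x, y, verbose=0)   # scalars: how well the outputs match y
probs = model.predict(x, verbose=0)           # array of shape (8, 2): the outputs themselves
```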

How to get the dimensions of a tensor (in TensorFlow) at graph construction time?

PyTorch memory model: "torch.from_numpy()" vs "torch.Tensor()"
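
The key difference: `torch.from_numpy()` shares the NumPy array's memory and keeps its dtype, while `torch.Tensor()` copies the data and always yields float32. A short demonstration:

```python
import numpy as np
import torch

a = np.arange(3, dtype=np.float64)
shared = torch.from_numpy(a)   # same underlying buffer, dtype float64
copied = torch.Tensor(a)       # independent copy, dtype float32

a[0] = 100.0
print(shared)   # reflects the change: tensor([100., 1., 2.], dtype=torch.float64)
print(copied)   # unchanged:           tensor([0., 1., 2.])
```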

Evaluating pytorch models: `with torch.no_grad` vs `model.eval()`
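
These do different things: `model.eval()` switches layers such as Dropout and BatchNorm to inference behaviour, while `torch.no_grad()` disables gradient tracking to save memory and compute. For evaluation you typically want both. A sketch, assuming `model` and `val_loader` already exist:

```python
import torch

model.eval()                               # `model` is an assumed nn.Module
with torch.no_grad():
    for inputs, targets in val_loader:     # `val_loader` is an assumed DataLoader
        outputs = model(inputs)
        # ... accumulate metrics here; no autograd graph is built
```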

What does Keras.io.preprocessing.sequence.pad_sequences do?
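
`pad_sequences` turns a list of variable-length integer sequences into one rectangular array, padding (or truncating) every row to the same length:

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

seqs = [[1, 2, 3], [4, 5], [6]]
print(pad_sequences(seqs, maxlen=4))
# [[0 1 2 3]
#  [0 0 4 5]
#  [0 0 0 6]]   zeros are prepended by default; use padding='post' to append them
```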

Saving best model in keras
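
The usual answer is a `ModelCheckpoint` callback with `save_best_only=True`, which keeps only the weights from the epoch with the best monitored value; the file name here is an illustrative choice:

```python
from tensorflow.keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint(
    "best_model.h5",          # hypothetical path
    monitor="val_loss",
    save_best_only=True,
    mode="min",
)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=20, callbacks=[checkpoint])
```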

What is a batch in TensorFlow?

How to avoid "CUDA out of memory" in PyTorch

How to get mini-batches in pytorch in a clean and efficient way?
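
The idiomatic way is `TensorDataset` plus `DataLoader`, which handles shuffling and batching for you; the shapes below are illustrative assumptions:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

x = torch.randn(1000, 10)
y = torch.randint(0, 2, (1000,))
loader = DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)

for xb, yb in loader:    # xb: (64, 10), yb: (64,), smaller for the final batch
    pass                 # forward/backward pass goes here
```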

What is the purpose of tf.global_variables_initializer?
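
In TF1-style graphs, variables only receive their initial values once the grouped initializer op is run inside a session; reading them before that raises a `FailedPreconditionError`. A minimal sketch using the TF2 compat module:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

w = tf.Variable(tf.zeros([2, 2]))
init = tf.global_variables_initializer()   # groups every variable's init op

with tf.Session() as sess:
    sess.run(init)            # without this, sess.run(w) fails
    print(sess.run(w))
```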

How to use return_sequences option and TimeDistributed layer in Keras?
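
`return_sequences=True` makes the LSTM emit one output per timestep instead of only the last one, and `TimeDistributed` then applies the same Dense layer to every timestep. A small sketch with assumed shapes:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(10, 8)),                 # 10 timesteps, 8 features
    layers.LSTM(32, return_sequences=True),      # output: (batch, 10, 32)
    layers.TimeDistributed(layers.Dense(1)),     # output: (batch, 10, 1)
])
model.summary()
```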

CBOW vs. skip-gram: why invert context and target words?

Why do we need to explicitly call zero_grad()? [duplicate]
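
The reason is that PyTorch accumulates gradients into each parameter's `.grad` on every `backward()` call, so the training loop must clear them before the next batch or the update would use a running sum over batches. A sketch of one step, assuming `model`, `optimizer`, and `loss_fn` already exist:

```python
optimizer.zero_grad()                      # clear gradients left over from the previous step
loss = loss_fn(model(inputs), targets)
loss.backward()                            # .grad now holds gradients from this batch only
optimizer.step()
```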