Index
A
- Actor-Critic methods, Asynchronous Advantage Actor-Critic Agent (A3C)
- Adam optimization, Closing the Loop on MNIST with Convolutional Networks, Building a Convolutional Network for CIFAR-10, Implementing a Part-of-Speech Tagger, Building the Model and Optimizer
- add_episode(), Implementing Experience Replay
- advantage function, Asynchronous Advantage Actor-Critic Agent (A3C)
- AlexNet, The Shortcomings of Feature Selection
- annealed ϵ-Greedy policy, Annealed ϵ-Greedy, PGAgent Performance on Pole-Cart
- approximate per-image whitening, Image Preprocessing Pipelines Enable More Robust Models
- arc-standard system, Dependency Parsing and SyntaxNet
- Asynchronous Advantage Actor-Critic (A3C), Asynchronous Advantage Actor-Critic Agent (A3C)
- Atari games, Deep Reinforcement Learning Masters Atari Games
- attention, capturing, Augmenting Recurrent Networks with Attention
- attention_decoder, Dissecting a Neural Translation Network-Dissecting a Neural Translation Network
- audio transcription (see part-of-speech (POS) tagging)
- autoregressive decoding, Dissecting a Neural Translation Network
B
- bAbI dataset
- backpropagation, The Backpropagation Algorithm-The Backpropagation Algorithm, The Challenges with Vanishing Gradients
- batch gradient descent, Stochastic and Minibatch Gradient Descent
- batch normalization, Accelerating Training with Batch Normalization-Building a Convolutional Network for CIFAR-10, Implementing a Sentiment Analysis Model
- batch-major vectors, Dissecting a Neural Translation Network
- batch_weights.append(), Dissecting a Neural Translation Network
- beam search, Beam Search and Global Normalization-Beam Search and Global Normalization
- Bellman Equation, The Bellman Equation
- bit tensor, Long Short-Term Memory (LSTM) Units
- boosting, The Shortcomings of Feature Selection
- Breakout, example with DQN, Playing Breakout with DQN-Improving and Moving Beyond DQN
- bucketing, Dissecting a Neural Translation Network-Dissecting a Neural Translation Network
- bucket_id, Dissecting a Neural Translation Network
- build_model(), Creating an Agent-Building the Model and Optimizer
- build_training(), Creating an Agent-Building the Model and Optimizer
C
- CartPole environment, Policy Gradient Main Function
- CIFAR-10 challenge, Building a Convolutional Network for CIFAR-10-Building a Convolutional Network for CIFAR-10
- computer vision (see convolutional neural networks)
- context window, Tackling seq2seq with Neural N-Grams
- conv2d(), Closing the Loop on MNIST with Convolutional Networks, Building a Convolutional Network for CIFAR-10, Building a Convolutional Network for CIFAR-10
- convolutional filters, Leveraging Convolutional Filters to Replicate Artistic Styles-Learning Convolutional Filters for Other Problem Domains
- convolutional neural networks (CNNs), Test Sets, Validation Sets, and Overfitting, Neurons in Human Vision-Summary
- architectures, Full Architectural Description of Convolution Networks-Full Architectural Description of Convolution Networks
- batch normalization and, Accelerating Training with Batch Normalization-Building a Convolutional Network for CIFAR-10
- comparison with and without batch normalization, Building a Convolutional Network for CIFAR-10-Building a Convolutional Network for CIFAR-10
- convolutional layer, Full Description of the Convolutional Layer-Full Description of the Convolutional Layer
- creative filters for artistic styles, Leveraging Convolutional Filters to Replicate Artistic Styles-Leveraging Convolutional Filters to Replicate Artistic Styles
- filter hyperparameters, Full Description of the Convolutional Layer
- filters and feature maps, Filters and Feature Maps-Filters and Feature Maps
- image analysis example, Closing the Loop on MNIST with Convolutional Networks-Closing the Loop on MNIST with Convolutional Networks
- image preprocessing, Image Preprocessing Pipelines Enable More Robust Models-Image Preprocessing Pipelines Enable More Robust Models
- learning visualization in, Visualizing Learning in Convolutional Networks-Visualizing Learning in Convolutional Networks
- max pooling layer in, Max Pooling-Max Pooling
- versus vanilla deep neural networks, Vanilla Deep Neural Networks Don’t Scale-Vanilla Deep Neural Networks Don’t Scale
- conv_batch_norm(), Accelerating Training with Batch Normalization
- create_model(), Dissecting a Neural Translation Network-Dissecting a Neural Translation Network
- cross-entropy loss, Beam Search and Global Normalization
- current_step, Dissecting a Neural Translation Network
D
- dataset preprocessing, Implementing a Part-of-Speech Tagger-Dependency Parsing and SyntaxNet
- decode(), Dissecting a Neural Translation Network
- decoder network, Solving seq2seq Tasks with Recurrent Neural Networks
- deep neural networks (DNNs)
- Deep Q-Network (DQN), Deep Reinforcement Learning Masters Atari Games, Deep Q-Network (DQN)-Improving and Moving Beyond DQN
- experience replay, Experience Replay, Implementing Experience Replay
- implementation example, Playing Breakout with DQN-Improving and Moving Beyond DQN
- learning stability, Learning Stability-Experience Replay
- and Markov Assumption, DQN and the Markov Assumption
- prediction network, Building Our Architecture, Updating Our Target Q-Network
- state history and, DQN’s Solution to the Markov Assumption
- target network, Target Q-Network, Building Our Architecture, Updating Our Target Q-Network
- training, Training DQN
- weaknesses, Improving and Moving Beyond DQN
- Deep Recurrent Q-Networks (DRQN), Deep Recurrent Q-Networks (DRQN)
- deep reinforcement learning (see reinforcement learning (RL))
- DeepMind, Deep Reinforcement Learning Masters Atari Games
- (see also Deep Q-Network (DQN))
- delta rule, The Delta Rule and Learning Rates
- dependency parsing, Dependency Parsing and SyntaxNet, A Case for Stateful Deep Learning Models
- Differentiable Neural Computers (DNCs)
- discounted future return, Discounted Future Return
- DQNAgent(), Playing Breakout with DQN, DQNAgent Results on Breakout
- dropout, Preventing Overfitting in Deep Neural Networks-Preventing Overfitting in Deep Neural Networks, Building a Convolutional Network for CIFAR-10
E
- ϵ-Greedy policy, ϵ-Greedy
- embedding_attention_decoder, Dissecting a Neural Translation Network
- embedding_layer(), Implementing a Sentiment Analysis Model
- encoder network, Solving seq2seq Tasks with Recurrent Neural Networks
- end-of-sequence (EOS) token, Solving seq2seq Tasks with Recurrent Neural Networks
- EpisodeHistory(), Keeping Track of History, Implementing Experience Replay
- epochs, Test Sets, Validation Sets, and Overfitting, Dissecting a Neural Translation Network
- error derivative calculations, The Backpropagation Algorithm-The Backpropagation Algorithm
- error surface, Gradient Descent, Stochastic and Minibatch Gradient Descent
- experience replay, Experience Replay, Implementing Experience Replay
- ExperienceReplayTable(), Implementing Experience Replay
- explore-exploit dilemma, Explore Versus Exploit-Policy Versus Value Learning
F
- facial recognition, The Shortcomings of Feature Selection-The Shortcomings of Feature Selection
- feature maps, Filters and Feature Maps-Filters and Feature Maps, Max Pooling
- feature selection, The Shortcomings of Feature Selection-The Shortcomings of Feature Selection
- feed-forward neural networks
- feedforward_pos.py, Implementing a Part-of-Speech Tagger
- feed_previous, Dissecting a Neural Translation Network
- filters, Filters and Feature Maps-Filters and Feature Maps
- filter_summary(), Building a Convolutional Network for CIFAR-10
- forward_only flag, Dissecting a Neural Translation Network
- fractional max pooling, Max Pooling
- future return, Future Return-Future Return
G
- garden path sentences, Beam Search and Global Normalization
- Gated Recurrent Unit (GRU), TensorFlow Primitives for RNN Models
- get_all, Implementing a Part-of-Speech Tagger
- get_batch(), Dissecting a Neural Translation Network
- global normalization, Beam Search and Global Normalization
- Google SyntaxNet, Dependency Parsing and SyntaxNet-Beam Search and Global Normalization
- gradient descent (GD), Gradient Descent-Gradient Descent, Test Sets, Validation Sets, and Overfitting, Dissecting a Neural Translation Network
- gradient, defined, Gradient Descent
- Gram matrix, Leveraging Convolutional Filters to Replicate Artistic Styles
L
- L1 regularization, Preventing Overfitting in Deep Neural Networks
- L2 regularization, Preventing Overfitting in Deep Neural Networks
- language translation, Solving seq2seq Tasks with Recurrent Neural Networks-Dissecting a Neural Translation Network
- learning rates, The Delta Rule and Learning Rates
- LevelDB, Implementing a Part-of-Speech Tagger
- leveldb.LevelDB(), Implementing a Part-of-Speech Tagger
- linear neurons, The Fast-Food Problem-The Fast-Food Problem
- local invariance, Max Pooling
- local maximum, Explore Versus Exploit
- local normalization, Beam Search and Global Normalization
- long short-term memory (LSTM) model
- long short-term memory (LSTM) units, Long Short-Term Memory (LSTM) Units-Long Short-Term Memory (LSTM) Units, Dissecting a Neural Translation Network
- loop(), Dissecting a Neural Translation Network
M
- Markov Assumption, DQN and the Markov Assumption
- Markov Decision Process (MDP), Markov Decision Processes (MDP)-Discounted Future Return, DQN and the Markov Assumption
- max norm constraints, Preventing Overfitting in Deep Neural Networks
- max pooling layer, Max Pooling-Max Pooling
- max_pool(), Closing the Loop on MNIST with Convolutional Networks, Building a Convolutional Network for CIFAR-10
- mean_var_with_update(), Accelerating Training with Batch Normalization
- memory cells, Long Short-Term Memory (LSTM) Units
- memory(), Keeping Track of History
- minibatch gradient descent, Stochastic and Minibatch Gradient Descent
- minibatches, Stochastic and Minibatch Gradient Descent
O
- OpenAI Gym, OpenAI Gym
- output gate, Long Short-Term Memory (LSTM) Units
- output value, Gradient Descent with Sigmoidal Neurons
- output_logits, Dissecting a Neural Translation Network
- output_projection flag, Dissecting a Neural Translation Network
- overfitting, Test Sets, Validation Sets, and Overfitting-Test Sets, Validation Sets, and Overfitting, Preventing Overfitting in Deep Neural Networks-Preventing Overfitting in Deep Neural Networks
P
- padding sequences, Dissecting a Neural Translation Network-Dissecting a Neural Translation Network
- parameter vectors
- determining (see training)
- part-of-speech (POS) tagging, Tackling seq2seq with Neural N-Grams-Implementing a Part-of-Speech Tagger, A Case for Stateful Deep Learning Models
- PGAgent(), Creating an Agent-Building the Model and Optimizer
- pole-balancing, What Is Reinforcement Learning?-Markov Decision Processes (MDP)
- pole-cart, Pole-Cart with Policy Gradients-PGAgent Performance on Pole-Cart
- policies, Policy
- Policy Gradients, Policy Learning via Policy Gradients-PGAgent Performance on Pole-Cart
- policy learning, Policy Versus Value Learning
- POSDataset(), Implementing a Part-of-Speech Tagger
- prediction network, Building Our Architecture, Updating Our Target Q-Network
- predict_action(), Sampling Actions
- previous_losses, Dissecting a Neural Translation Network
R
- recurrent neural networks (RNNs), Recurrent Neural Networks-TensorFlow Primitives for RNN Models
- regularization, Preventing Overfitting in Deep Neural Networks-Preventing Overfitting in Deep Neural Networks
- reinforcement learning (RL), Deep Reinforcement Learning
- Asynchronous Advantage Actor-Critic (A3C), Asynchronous Advantage Actor-Critic Agent (A3C)
- Deep Q-network (DQN) (see Deep Q-network (DQN))
- explore-exploit dilemma, Explore Versus Exploit-Policy Versus Value Learning
- OpenAI Gym and, OpenAI Gym
- overview, What Is Reinforcement Learning?-What Is Reinforcement Learning?
- pole-balancing, What Is Reinforcement Learning?-Markov Decision Processes (MDP)
- pole-cart, Pole-Cart with Policy Gradients-PGAgent Performance on Pole-Cart
- policy learning versus value learning, Policy Versus Value Learning
- Q-learning, Q-Learning and Deep Q-Networks-Deep Recurrent Q-Networks (DRQN)
- UNsupervised REinforcement and Auxiliary Learning (UNREAL), UNsupervised REinforcement and Auxiliary Learning (UNREAL)
- value-learning, Q-Learning and Deep Q-Networks
- reward prediction, UNsupervised REinforcement and Auxiliary Learning (UNREAL)
S
- saddle points, Stochastic and Minibatch Gradient Descent
- sample_batch(), Implementing Experience Replay-DQN Main Loop
- sentiment analysis, Implementing a Sentiment Analysis Model-Implementing a Sentiment Analysis Model
- seq2seq problems (see sequence analysis)
- seq2seq.embedding_attention_seq2seq(), Dissecting a Neural Translation Network
- seq2seq.model_with_buckets, Dissecting a Neural Translation Network
- seq2seq_f(), Dissecting a Neural Translation Network
- seq2seq_model.Seq2SeqModel, Dissecting a Neural Translation Network-Dissecting a Neural Translation Network
- sequence analysis
- beam search and global normalization, Beam Search and Global Normalization-Beam Search and Global Normalization
- dependency parsing, Dependency Parsing and SyntaxNet-Dependency Parsing and SyntaxNet, A Case for Stateful Deep Learning Models
- long short-term memory (LSTM) units, Long Short-Term Memory (LSTM) Units-Long Short-Term Memory (LSTM) Units
- neural translation networks, Dissecting a Neural Translation Network-Dissecting a Neural Translation Network
- overview, Analyzing Variable-Length Inputs
- part-of-speech tagging, Tackling seq2seq with Neural N-Grams-Implementing a Part-of-Speech Tagger, A Case for Stateful Deep Learning Models
- recurrent neural networks and, Solving seq2seq Tasks with Recurrent Neural Networks-Augmenting Recurrent Networks with Attention
- SyntaxNet, Dependency Parsing and SyntaxNet-Beam Search and Global Normalization
- session.run(), Dissecting a Neural Translation Network
- sigmoidal neurons, Gradient Descent with Sigmoidal Neurons-Gradient Descent with Sigmoidal Neurons, Long Short-Term Memory (LSTM) Units
- Skip-Gram model, Solving seq2seq Tasks with Recurrent Neural Networks
- skip-thought vector, Solving seq2seq Tasks with Recurrent Neural Networks
- softmax output layers, Beam Search and Global Normalization
- state history, DQN’s Solution to the Markov Assumption, Stacking Frames
- stateful deep learning models, A Case for Stateful Deep Learning Models-A Case for Stateful Deep Learning Models
- step(), Dissecting a Neural Translation Network-Dissecting a Neural Translation Network
- stochastic gradient descent (SGD), Stochastic and Minibatch Gradient Descent, Policy Learning via Policy Gradients, Training DQN
- SyntaxNet, Dependency Parsing and SyntaxNet-Beam Search and Global Normalization
T
- t-Distributed Stochastic Neighbor Embedding (t-SNE), Visualizing Learning in Convolutional Networks
- tags_to_index dictionary, Implementing a Part-of-Speech Tagger
- target Q-network, Target Q-Network, Building Our Architecture, Updating Our Target Q-Network
- TensorBoard, Implementing a Part-of-Speech Tagger, Implementing a Sentiment Analysis Model
- TensorFlow
- test sets, Test Sets, Validation Sets, and Overfitting-Test Sets, Validation Sets, and Overfitting
- tf.AdamOptimizer, Building the Model and Optimizer
- tf.constant_initializer(), Closing the Loop on MNIST with Convolutional Networks, Accelerating Training with Batch Normalization, Building a Convolutional Network for CIFAR-10
- tf.control_dependencies(), Accelerating Training with Batch Normalization
- tf.get_variable(), Closing the Loop on MNIST with Convolutional Networks, Accelerating Training with Batch Normalization, Building a Convolutional Network for CIFAR-10
- tf.identity(), Accelerating Training with Batch Normalization
- tf.image.per_image_whitening(), Image Preprocessing Pipelines Enable More Robust Models
- tf.image.random_brightness(), Image Preprocessing Pipelines Enable More Robust Models
- tf.image.random_contrast(), Image Preprocessing Pipelines Enable More Robust Models
- tf.image.random_flip_left_right(), Image Preprocessing Pipelines Enable More Robust Models
- tf.image.random_flip_up_down(), Image Preprocessing Pipelines Enable More Robust Models
- tf.image.random_hue(), Image Preprocessing Pipelines Enable More Robust Models
- tf.image.random_saturation(), Image Preprocessing Pipelines Enable More Robust Models
- tf.image.transpose_image(), Image Preprocessing Pipelines Enable More Robust Models
- tf.matmul(), Building a Convolutional Network for CIFAR-10
- tf.nn.batch_norm_with_global_normalization(), Accelerating Training with Batch Normalization
- tf.nn.bias_add(), Building a Convolutional Network for CIFAR-10
- tf.nn.conv2d(), Full Description of the Convolutional Layer, Closing the Loop on MNIST with Convolutional Networks
- tf.nn.dropout(), Closing the Loop on MNIST with Convolutional Networks, Building a Convolutional Network for CIFAR-10
- tf.nn.max_pool, Closing the Loop on MNIST with Convolutional Networks
- tf.nn.moments(), Accelerating Training with Batch Normalization
- tf.nn.relu(), Closing the Loop on MNIST with Convolutional Networks, Building a Convolutional Network for CIFAR-10
- tf.nn.rnn_cell.BasicLSTMCell(), TensorFlow Primitives for RNN Models
- tf.nn.rnn_cell.BasicRNNCell(), TensorFlow Primitives for RNN Models
- tf.nn.rnn_cell.GRUCell(), TensorFlow Primitives for RNN Models
- tf.nn.rnn_cell.LSTMCell(), TensorFlow Primitives for RNN Models
- tf.random_crop(), Image Preprocessing Pipelines Enable More Robust Models
- tf.random_normal_initializer(), Closing the Loop on MNIST with Convolutional Networks, Building a Convolutional Network for CIFAR-10
- tf.reshape(), Closing the Loop on MNIST with Convolutional Networks, Building a Convolutional Network for CIFAR-10
- tf.RNNCell
- tf.slice(), Implementing a Sentiment Analysis Model
- tf.squeeze(), Implementing a Sentiment Analysis Model
- tf.train.ExponentialMovingAverage(), Accelerating Training with Batch Normalization
- tf.variable_scope(), Closing the Loop on MNIST with Convolutional Networks, Building a Convolutional Network for CIFAR-10
- tflearn, Implementing a Sentiment Analysis Model-Implementing a Sentiment Analysis Model
- tokenization, Dissecting a Neural Translation Network
- training neural networks, The Fast-Food Problem-Summary
- backpropagation, The Backpropagation Algorithm
- batch gradient descent, Stochastic and Minibatch Gradient Descent
- batch normalization and, Accelerating Training with Batch Normalization-Accelerating Training with Batch Normalization
- gradient descent (GD), Gradient Descent-Gradient Descent, Gradient Descent with Sigmoidal Neurons-Gradient Descent with Sigmoidal Neurons
- minibatch gradient descent, Stochastic and Minibatch Gradient Descent
- overfitting, Test Sets, Validation Sets, and Overfitting-Test Sets, Validation Sets, and Overfitting, Preventing Overfitting in Deep Neural Networks-Preventing Overfitting in Deep Neural Networks
- stochastic gradient descent (SGD), Stochastic and Minibatch Gradient Descent
- test sets, Test Sets, Validation Sets, and Overfitting-Test Sets, Validation Sets, and Overfitting
- validation sets, Test Sets, Validation Sets, and Overfitting-Test Sets, Validation Sets, and Overfitting
- training sets, Test Sets, Validation Sets, and Overfitting-Test Sets, Validation Sets, and Overfitting
V
- validation sets, Test Sets, Validation Sets, and Overfitting-Test Sets, Validation Sets, and Overfitting
- value iteration, The Bellman Equation
- value learning, Policy Versus Value Learning, Q-Learning and Deep Q-Networks
- vanishing gradients, The Challenges with Vanishing Gradients-Long Short-Term Memory (LSTM) Units
- variable-length inputs, analyzing, Analyzing Variable-Length Inputs-Analyzing Variable-Length Inputs