# Why are Target Networks used in Deep Q-Learning as opposed to the Expected Value equation?

Artificial Intelligence Asked by TMT on September 10, 2020

I understand we use a target network because it helps resolve issues regarding stability; however, that’s not what I’m here to ask.

What I would like to understand is why a target network is used as a measure of ground truth as opposed to the expectation equation.

To clarify, here is what I mean. This is the process used for DQN:

1. In DQN, we begin with a state $$S_t$$
2. We then pass this state through a neural network, which outputs a Q value for each action in the action space
3. A policy, e.g. epsilon-greedy, is used to select an action
4. This subsequently produces the next state $$S_{t+1}$$
5. $$S_{t+1}$$ is then passed through a target neural network to produce target Q values
6. These target Q values are then plugged into the Bellman equation, which ultimately produces a target Q value via the Q-learning update rule
7. MSE between the values from steps 6 and 2 is used to compute the loss
8. This loss is back-propagated to update the parameters of the neural network in step 2
9. The target neural network has its parameters updated every X epochs to match the parameters of the network in step 2 (see the sketch after this list)
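
For concreteness, here is a minimal PyTorch sketch of the loop described above (roughly steps 5 to 9). The network sizes, hyperparameters, and names such as `policy_net`, `target_net`, and `TARGET_UPDATE_EVERY` are illustrative assumptions, not taken from any particular paper or codebase:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_q_net(obs_dim, n_actions):
    # Illustrative assumption: a small MLP Q-network for a vector observation.
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                         nn.Linear(64, n_actions))

obs_dim, n_actions = 4, 2      # assumed problem size
gamma = 0.99                   # assumed discount factor
TARGET_UPDATE_EVERY = 1000     # the "X" in step 9 (assumed)

policy_net = make_q_net(obs_dim, n_actions)   # the network from step 2
target_net = make_q_net(obs_dim, n_actions)   # the target network from step 5
target_net.load_state_dict(policy_net.state_dict())
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

def dqn_update(step, states, actions, rewards, next_states, dones):
    """One gradient step on a minibatch of transitions (steps 5 to 9).

    `states`/`next_states`: float tensors of shape [batch, obs_dim],
    `actions`: long tensor of shape [batch],
    `rewards`, `dones`: float tensors of shape [batch].
    """
    # Step 2: Q values of the actions actually taken, from the online network.
    q_taken = policy_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # Steps 5 and 6: bootstrap the target from the frozen target network.
    with torch.no_grad():
        next_q_max = target_net(next_states).max(dim=1).values
        target = rewards + gamma * next_q_max * (1.0 - dones)

    # Step 7: MSE between the online estimate and the bootstrapped target.
    loss = F.mse_loss(q_taken, target)

    # Step 8: back-propagate into the online network only.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Step 9: periodically copy the online parameters into the target network.
    if step % TARGET_UPDATE_EVERY == 0:
        target_net.load_state_dict(policy_net.state_dict())
```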

Why do we use a target neural network to output Q values instead of using statistics? Statistics seems like a more accurate way to represent this. By statistics, I mean the following:

Q values are the expected return, given the state and action under policy π.

$$Q(S_{t+1},a) = V^{\pi}(S_{t+1}) = \mathbb{E}\left[r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \dots \mid S_{t+1}\right] = \mathbb{E}\left[\sum_{k} \gamma^k r_{t+k+1} \,\middle|\, S_{t+1}\right]$$

We can then take the above and inject it into the Bellman equation to update our target Q value:

$$Q(S_{t},a_t) \leftarrow Q(S_{t},a_t) + \alpha \left( r_t + \gamma \max_a Q(S_{t+1},a) - Q(S_{t},a_t) \right)$$

So, why don’t we set the target to this sum of discounted rewards? Surely a target network is very inaccurate, especially since its parameters are essentially random during the first few epochs of training.
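
To make what I mean by statistics concrete, here is a minimal illustrative sketch (my own, with made-up reward numbers) of computing the discounted return of each state in a sampled episode, which would then serve directly as the target:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """G_t = r_{t+1} + gamma*r_{t+2} + ... computed backwards over one episode."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# e.g. a 4-step episode with rewards 1, 0, 0, 1 (illustrative numbers)
print(discounted_returns([1.0, 0.0, 0.0, 1.0]))
# -> approximately [1.9703, 0.9801, 0.99, 1.0]
```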

> Why are Target Networks used in Deep Q-Learning as opposed to the Expected Value equation?

In short, because for many problems, this learns more efficiently.

It is the difference between Monte Carlo (MC) methods and Temporal Difference (TD) learning.

You can use MC estimates for expected return in deep RL. They are slower for two reasons:

• It takes far more experience to collect enough data to train a neural network, because to fully sample a return you need a whole episode. You cannot just use one episode at a time because that presents the neural network with correlated data. You would need to collect multiple episodes and fill a large experience table.

• As an aside, you would also need to discard all the experience after each update, because sampled full returns are on-policy data. Or you could implement importance sampling for off-policy Monte Carlo control and re-calculate the correct updates when the policy starts to improve, which is added complexity.
• Samples of full returns have a higher variance, so the sampled data is noisier.

In comparison, TD learning starts with biased samples. This bias reduces over time as the estimates improve, but it is the reason why a target network is used (otherwise the bias would cause runaway feedback).

So you have a bias/variance trade-off, with TD representing high bias and MC representing high variance.

It is not clear theoretically which is better in general, because it depends on the nature of the MDPs you are solving with each method. In practice, on the types of problems deep RL has been tried on, single-step TD learning appears to do better than MC sampling of returns in terms of goals such as sample efficiency and learning time.

You can compromise between TD and MC using eligibility traces, resulting in TD($$\lambda$$). However, this is awkward to implement in deep RL due to the experience replay table. A simpler compromise is to use $$n$$-step returns, e.g. $$r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \gamma^3 \max_a Q(s_{t+4},a)$$, which was one of the refinements used in the "Rainbow" DQN paper. Note that, strictly speaking, their version handled off-policy data incorrectly (it should use importance sampling, but they didn't bother), yet it still worked well enough for low $$n$$ on the Atari problems.
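
As an illustration of the $$n$$-step target, here is a small sketch (my own, with an assumed 3-step horizon and made-up numbers; the bootstrap value would come from the target network):

```python
def n_step_target(rewards, bootstrap_q, gamma=0.99):
    """Compute r_{t+1} + gamma*r_{t+2} + ... + gamma^(n-1)*r_{t+n}
    + gamma^n * bootstrap_q, where bootstrap_q stands in for
    max_a Q(s_{t+n+1}, a) taken from the target network."""
    target = bootstrap_q
    for r in reversed(rewards):
        target = r + gamma * target
    return target

# 3-step example matching the formula above (illustrative numbers):
# r_{t+1} = 0.0, r_{t+2} = 1.0, r_{t+3} = 0.0, max_a Q(s_{t+4}, a) = 2.0
print(n_step_target([0.0, 1.0, 0.0], bootstrap_q=2.0))
# 0.0 + 0.99*1.0 + 0.99**2 * 0.0 + 0.99**3 * 2.0 ≈ 2.93
```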

Answered by Neil Slater on September 10, 2020
