Why are my Soft Actor-Critic's policy and value-function losses not converging?

Artificial Intelligence · Asked by Zahra on December 7, 2020

I’m trying to implement the Soft Actor-Critic (SAC) algorithm on financial data (stock prices), but I am having trouble with the losses: no matter which combination of hyperparameters I use, they do not converge, and the episode rewards stay poor as well. It seems the agent is not learning at all.

I have already tried tuning some hyperparameters (the learning rate of each network and the number of hidden layers), but I always get similar results.
The two plots below show the loss of the policy and of one of the value functions during the last episode of training.

[Plot: policy loss during the last training episode]

[Plot: value-function loss during the last training episode]
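
For reference, in the standard SAC formulation (Haarnoja et al., 2018) with a separate state-value network, the quantities plotted above would roughly correspond to the objectives below; this is an assumption, since the actual implementation is not shown. Here $\mathcal{D}$ is the replay buffer and $\alpha$ the entropy temperature:

```latex
% Soft value-function loss (target: expected soft Q minus scaled log-policy)
J_V(\psi) = \mathbb{E}_{s_t \sim \mathcal{D}}\!\left[\tfrac{1}{2}\Big(V_\psi(s_t)
  - \mathbb{E}_{a_t \sim \pi_\phi}\big[Q_\theta(s_t, a_t) - \alpha \log \pi_\phi(a_t \mid s_t)\big]\Big)^{2}\right]

% Soft Q-function loss (target uses the slowly updated value network \bar{\psi})
J_Q(\theta) = \mathbb{E}_{(s_t, a_t) \sim \mathcal{D}}\!\left[\tfrac{1}{2}\Big(Q_\theta(s_t, a_t)
  - \big(r(s_t, a_t) + \gamma\, \mathbb{E}_{s_{t+1}}\big[V_{\bar{\psi}}(s_{t+1})\big]\big)\Big)^{2}\right]

% Policy loss (minimised; equivalent to maximising soft Q plus entropy)
J_\pi(\phi) = \mathbb{E}_{s_t \sim \mathcal{D},\; a_t \sim \pi_\phi}\big[\alpha \log \pi_\phi(a_t \mid s_t) - Q_\theta(s_t, a_t)\big]
```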

My question is: could this be due to the nature of the data itself, or is it something related to the logic of my code?

One Answer

I would say it is the nature of the data. Generally speaking, you are trying to predict what is essentially a random sequence, especially if you use historical prices as the input and try to predict a future value as the output.
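
One quick way to see this (not part of the original answer, just an illustrative sketch): if the return series behaves like white noise, past prices carry almost no information about future prices, so neither the critic nor the policy has a stable signal to learn from. A minimal check, assuming the prices are available as a NumPy array:

```python
import numpy as np

def return_autocorrelation(prices: np.ndarray, lag: int = 1) -> float:
    """Lag-`lag` autocorrelation of the log returns of a price series."""
    returns = np.diff(np.log(prices))
    return float(np.corrcoef(returns[:-lag], returns[lag:])[0, 1])

# Hypothetical example: a simulated geometric random walk.
rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, size=5_000)))

print(return_autocorrelation(prices))  # close to 0: returns are essentially unpredictable
```

If the autocorrelation of your real price series is also close to zero, an agent conditioned only on price history is effectively being asked to predict noise, and flat or oscillating losses are then the expected outcome rather than a sign of a bug in the code.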

Correct answer by oleg.mosalov on December 7, 2020
