
Training a Variational Autoencoder (VAE) for Random Number Generation

Data Science Asked on January 18, 2021

I have a complicated 20-dimensional multi-modal distribution and am considering training a VAE on 2000 samples to learn an approximation of it, with the aim of subsequently generating pseudo-random numbers that follow the structure of the distribution. My questions are the following:

  1. Is my approach fundamentally or logically flawed? Specifically, unlike image data, my samples are geometric in nature, so they take negative values and could also be considered noisy.
  2. How do I find the right architecture, other than by simple trial and error? Obviously, I do not necessarily need 2D convolutions; 1D convolutions could instead be a good choice for capturing the correlations (i.e. the modes of the distribution). I am also unsure how to properly decide on the number of hidden layers and nodes.
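For real-valued data of this kind, a plain fully connected encoder/decoder is a common starting point; a minimal sketch is below, assuming PyTorch. All layer sizes, the latent dimension, and the bimodal toy data are illustrative assumptions, not tuned choices. Because the samples can be negative, the decoder has a linear output and the reconstruction term is Gaussian (squared error) rather than the Bernoulli loss used for images.

```python
# Hypothetical VAE sketch for 20-D tabular samples; sizes are illustrative.
import torch
import torch.nn as nn

DIM, LATENT = 20, 4  # latent size is an assumption; tune it on a held-out ELBO

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder outputs mean and log-variance of q(z|x) in one tensor.
        self.enc = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * LATENT))
        # Decoder has a linear head so outputs may be negative.
        self.dec = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                                 nn.Linear(64, DIM))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        return self.dec(z), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    rec = ((recon - x) ** 2).sum(dim=-1).mean()                 # Gaussian recon term
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=-1).mean()
    return rec + kl

# Toy training loop on synthetic bimodal data standing in for the 2000 samples.
torch.manual_seed(0)
data = torch.cat([torch.randn(1000, DIM) - 3.0, torch.randn(1000, DIM) + 3.0])
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(20):
    for batch in data.split(128):
        recon, mu, logvar = model(batch)
        loss = elbo_loss(recon, batch, mu, logvar)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Generation: decode draws from the standard-normal prior p(z).
with torch.no_grad():
    samples = model.dec(torch.randn(500, LATENT))
print(samples.shape)  # torch.Size([500, 20])
```

Once trained, pseudo-random numbers come from decoding prior draws, as in the last lines; how well their distribution matches the target is best checked with held-out statistics (e.g. per-dimension histograms) rather than reconstruction loss alone.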

One Answer

You are describing surrogate modelling.

Because your situation is well studied, I recommend looking at what others have published; see this paper, for example.

Answered by Benji Albert on January 18, 2021

