AnswerBun.com

Strategy for improving performance of 3D convolutional GAN

Data Science Asked by BBirdsell on December 22, 2020


Others working with neural nets and GANs might find this question interesting.

Background:

I’ve been working with data from Berkeley’s PEER Ground Motion Database to generate novel seismic traces. (Real traces are rendered above.) Coming from an engineering background, I first tried decomposing the traces into their {X,Y,Z} components, but the results were less than satisfying: the output repeatedly collapsed to a single mode, as seen below. There might be ways to fix this with more time and resources, but I thought I would try another approach.


Forward:

I still have a bit of time to work on the data, and I’m looking to solicit methods for turning this {X,Y,Z} point data into something more digestible by a 3D convolutional network. Below is a small sample from one of the traces, with 3 features at each time step. Each trace is scaled to [-1, 1] across all axes and interpolated to 4000 steps. (Nearly all of the 788 traces in the training data are over 4000 steps long and were downsampled.)

{{0.095746,-0.301555,-0.407207},{0.0955857,-0.301693,-0.407324},{0.0953887,-0.301861,-0.407461},{0.095148,-0.302067,-0.407609},{0.0948748,-0.302296,-0.407768},{0.0945636,-0.302547,-0.407985},{0.0942212,-0.3028,-0.40827},{0.0938457,-0.30301,-0.408664},{0.0934454,-0.303154,-0.409237},{0.0929839,-0.303259,-0.410011},{0.0924333,-0.303451,-0.410692},{0.0917118,-0.3039,-0.41102},{0.0907978,-0.304671,-0.411107},{0.0897068,-0.306049,-0.410927},{0.0886644,-0.308036,-0.410422},{0.0878034,-0.310481,-0.409685},{0.087091,-0.313055,-0.409298},{0.0863677,-0.315387,-0.409476},{0.0854884,-0.317162,-0.409922},{0.0845426,-0.31811,-0.410777}}
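For concreteness, my preprocessing could be sketched roughly like this (the function name and the use of `np.interp` are illustrative, not my exact code; the key points are the resampling to 4000 steps and the single global scale factor across all three axes):

```python
import numpy as np

def preprocess_trace(trace, n_steps=4000):
    """Resample a (T, 3) seismic trace to n_steps and scale to [-1, 1].

    `trace` holds {X, Y, Z} samples over time. The scaling uses one
    global peak across all three axes, so relative amplitudes between
    components are preserved.
    """
    trace = np.asarray(trace, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(trace))
    t_new = np.linspace(0.0, 1.0, n_steps)
    # Interpolate each component independently onto the new time grid.
    resampled = np.column_stack(
        [np.interp(t_new, t_old, trace[:, i]) for i in range(3)]
    )
    # One global scale factor so the trace fits in [-1, 1].
    peak = np.abs(resampled).max()
    return resampled / peak if peak > 0 else resampled
```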

My understanding of the subject suggests I need to transform this data into some type of array that places a 1 wherever a trace point lies and a 0 in empty space. Is that correct?

Before jumping in and creating another big branch of code, I wanted to confirm some of my assumptions. Dividing this real-valued 2×2×2 volume into voxels and essentially counting how many points fall in each to recreate the geometry seems rather computationally expensive. Is there any precedent for this that I can follow? Is there a way to do this directly with a 3D convolution?
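For concreteness, the binning I have in mind could be sketched with `np.histogramdd`, which counts points per voxel without an explicit loop (the grid size of 32 is an arbitrary illustrative choice, and `voxelize` is a hypothetical helper, not existing code):

```python
import numpy as np

def voxelize(trace, grid=32):
    """Bin a (T, 3) trace scaled to [-1, 1] into a grid^3 count volume.

    counts[i, j, k] is the number of trace points falling in that voxel;
    (counts > 0) gives the binary 1/0 occupancy grid described above.
    """
    edges = [np.linspace(-1.0, 1.0, grid + 1)] * 3
    counts, _ = np.histogramdd(np.asarray(trace, dtype=float), bins=edges)
    return counts  # shape (grid, grid, grid)
```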
