
Using a part of a trained model in a custom loss function - TensorFlow

Data Science: asked by theCNN on July 18, 2021

I want to write a custom loss function that uses an intermediate result from a trained discriminator. The loss function compares images; it is for recovering the latent vector of an image from a GAN. I'm relatively new to this.

[visual representation]

I'm using this reference code to test it out: https://github.com/utkd/gans/blob/master/cifar10dcgan.ipynb

Below is an example: https://m.youtube.com/watch?v=dCKbRCUyop8 (watch at 17:30).

Below is the discriminator code:

from tensorflow.keras.layers import (Conv2D, BatchNormalization, LeakyReLU,
                                     Flatten, Dropout, Dense)
from tensorflow.keras.models import Model

def get_discriminator(input_layer):
  '''
  Takes the input layer as input; returns the model and the final layer.
  '''

  hid = Conv2D(128, kernel_size=3, strides=1, padding='same')(input_layer)
  hid = BatchNormalization(momentum=0.9)(hid)
  hid = LeakyReLU(alpha=0.1)(hid)

  hid = Conv2D(128, kernel_size=4, strides=2, padding='same')(hid)
  hid = BatchNormalization(momentum=0.9)(hid)
  hid = LeakyReLU(alpha=0.1)(hid)

  hid = Conv2D(128, kernel_size=4, strides=2, padding='same')(hid)
  hid = BatchNormalization(momentum=0.9)(hid)
  hid = LeakyReLU(alpha=0.1)(hid)

  # for my loss function I want to use the intermediate result from the layer above

  hid = Conv2D(128, kernel_size=4, strides=2, padding='same')(hid)
  hid = BatchNormalization(momentum=0.9)(hid)
  hid = LeakyReLU(alpha=0.1)(hid)

  hid = Flatten()(hid)
  hid = Dropout(0.4)(hid)
  out = Dense(1, activation='sigmoid')(hid)

  model = Model(input_layer, out)

  model.summary()

  return model, out

Below is the code I'm planning to use:

import numpy as np
import tensorflow as tf
from PIL import Image

# l_size = latent vector length, defined elsewhere
zp = tf.Variable(np.random.normal(size=(1, l_size)), dtype=tf.float32)

start_img = Image.open(folder + "foo_00.png")
start_img = start_img.resize((img_x, img_y), Image.ANTIALIAS)  # resize returns a new image
start_img_np = np.array(start_img) / 255

fz = tf.Variable(start_img_np, dtype=tf.float32)
fz = tf.expand_dims(fz, 0)
fz = tf.cast(fz, tf.float32)
# variable 'generator' = trained model that is loaded

# Define the optimization problem
fzp = generator(zp)
loss = tf.losses.mean_squared_error(labels=fz, predictions=fzp)
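
A minimal sketch of the optimization loop this sets up, written TF2/eager style (the snippet above uses the TF1 tf.losses API, so this adaptation is an assumption):

optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

for step in range(1000):
    with tf.GradientTape() as tape:
        fzp = generator(zp)
        loss = tf.reduce_mean(tf.square(fz - fzp))  # pixel-space MSE
    grads = tape.gradient(loss, [zp])
    optimizer.apply_gradients(zip(grads, [zp]))  # update only the latent vector zp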

Here is where I want it to go, something like:

fzpD = discriminator_intermediate(fzp)
fzD = discriminator_intermediate(fz)
loss = tf.losses.mean_squared_error(labels=fzD, predictions=fzpD)
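
A minimal sketch of how discriminator_intermediate could be constructed, assuming discriminator is the trained Keras model (the layer index below is hypothetical and depends on how the model was actually built):

from tensorflow.keras.models import Model

# hypothetical: take the output of the third LeakyReLU block; the exact
# layer index depends on the real model architecture
intermediate_output = discriminator.layers[9].output
discriminator_intermediate = Model(inputs=discriminator.input,
                                   outputs=intermediate_output)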

2 Answers

The Model class allows you to define multiple outputs; see the official TensorFlow (Keras) documentation.

...
hid = LeakyReLU(alpha=0.1)(hid)  # the layer you want to use
intermediate = hid
...
model = Model(input_layer, outputs=[out, intermediate])

Then, if you train using model.fit or model.fit_generator, you simply need to provide the labels as a tuple of (expected_output, expected_intermediate_layer).
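
A minimal sketch of that setup (the loss choices, weights, and target arrays here are hypothetical):

model = Model(inputs=input_layer, outputs=[out, intermediate])
model.compile(optimizer='adam',
              loss=['binary_crossentropy', 'mse'],  # one loss per output; choices are hypothetical
              loss_weights=[1.0, 0.1])              # hypothetical weighting
# x_train, y_real, and y_intermediate_targets are hypothetical arrays,
# ordered to match the outputs list
model.fit(x_train, [y_real, y_intermediate_targets], epochs=10)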

Answered by Mark Loyman on July 18, 2021

The solution is simple: just pass X and Y into the individual layers and operate as normal.

Here is an example:

import tensorflow as tf
from tensorflow.keras.layers import (Dense, BatchNormalization, LeakyReLU, Reshape,
                                     Conv2D, Conv2DTranspose, Activation)
from tensorflow.keras.models import Model

class mymodel(Model):
    def __init__(self, chandim=-1):
        # just an example
        super(mymodel, self).__init__()
        self.gdn1 = Dense(128 * 16 * 16, activation='relu')
        self.gbn1 = BatchNormalization(momentum=0.9)
        self.glr1 = LeakyReLU(alpha=0.1)
        self.grs1 = Reshape((16, 16, 128))

        self.gcn2 = Conv2D(128, kernel_size=5, strides=1,padding='same')
        self.gbn2 = BatchNormalization(momentum=0.9)   
        #self.gdp2 = Dropout(0.5)
        self.glr2 = LeakyReLU(alpha=0.1)
        
        self.gcn3 = Conv2DTranspose(128, 4, strides=2, padding='same')
        self.gbn3 = BatchNormalization(momentum=0.9)
        self.glr3 = LeakyReLU(alpha=0.1)
    
    def get_model1(self,input_layer):
        hid = self.gdn1(input_layer)    
        hid = self.gbn1(hid)
        hid = self.glr1(hid)
        hid = self.grs1(hid)
        
        
        hid = self.gcn2(hid)    
        hid = self.gbn2(hid)
        hid = self.glr2(hid)
        out = Activation("tanh")(hid)
        

        model = Model(input_layer, out)
        model.summary()
        return model, out
    
    def get_model2(self,input_layer):
        hid = self.gcn3(input_layer)    
        hid = self.gbn3(hid)
        hid = self.glr3(hid)
        
        out = Activation("tanh")(hid)
        

        model = Model(input_layer, out)
        model.summary()
        return model
    # Loss Function ----------------------------

    def lossFn_model2(self, X, Y):
        bx0 = self.gdn1(X)
        bx1 = self.gbn1(bx0)
        bx2 = self.glr1(bx1)
        bx3 = self.grs1(bx2)
        # Note: shared layers
        by0 = self.gdn1(Y)
        by1 = self.gbn1(by0)
        by2 = self.glr1(by1)
        by3 = self.grs1(by2)

        return tf.math.square(bx3 - by3)
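
Hypothetical usage of the loss above (the batch size and latent length are assumptions):

m = mymodel()
X = tf.random.normal((8, 100))  # hypothetical batch of latent vectors
Y = tf.random.normal((8, 100))
per_element = m.lossFn_model2(X, Y)  # element-wise squared difference
loss = tf.reduce_mean(per_element)   # reduce to a scalar for optimization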

Answered by theCNN on July 18, 2021
