Forcing a neural network to be close to a previous model - Regularization through given model

Artificial Intelligence Asked by BLBA on December 9, 2021

I’m wondering: has anyone seen a paper where one trains a network but biases it to produce outputs similar to those of a given model (for example, a model encoding expert opinion, or a previously trained network)?

Formally, I’m looking for a paper doing the following:

Let $g:\mathbb{R}^d\rightarrow \mathbb{R}^D$ be a model (not necessarily, but possibly, a neural network) trained on some input/output data pairs $\{(x_n,y_n)\}_{n=1}^N$, and train a neural network $f_{\theta}(\cdot)$ on
$$
\underset{\theta}{\operatorname{argmin}}\sum_{n=1}^N \left\|
f_{\theta}(x_n) - y_n
\right\| + \lambda \left\|
f_{\theta}(x_n) - g(x_n)
\right\|,
$$

where $\theta$ represents all the trainable weight and bias parameters of the network $f_{\theta}(\cdot)$.

Put another way: $f_{\theta}(\cdot)$ is being regularized toward the outputs of another model $g$.
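For concreteness, here is a minimal sketch of optimizing this objective. It assumes a linear $f_{\theta}$, a fixed linear reference model $g$, and squared norms in place of plain norms (so the gradient is smooth); all names and hyperparameters below are hypothetical, not from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: d-dimensional inputs, D-dimensional outputs, N samples.
d, D, N = 3, 2, 50
X = rng.normal(size=(N, d))
W_g = rng.normal(size=(d, D))                 # fixed reference model g(x) = x @ W_g
Y = X @ W_g + 0.1 * rng.normal(size=(N, D))   # noisy targets y_n

lam = 0.5                                     # regularization weight lambda
W = np.zeros((d, D))                          # trainable parameters theta

def loss(W):
    """Mean of ||f(x_n) - y_n||^2 + lam * ||f(x_n) - g(x_n)||^2."""
    F = X @ W          # f_theta(x_n) for all n
    G = X @ W_g        # g(x_n) for all n
    return np.mean(np.sum((F - Y) ** 2, axis=1)
                   + lam * np.sum((F - G) ** 2, axis=1))

# Plain gradient descent on the combined objective.
lr = 0.05
for _ in range(200):
    F = X @ W
    G = X @ W_g
    grad = 2 * X.T @ ((F - Y) + lam * (F - G)) / N
    W -= lr * grad

final_loss = loss(W)
```

The data-fit term pulls $f_{\theta}$ toward the labels while the second term pulls it toward $g$'s predictions, with $\lambda$ trading off the two; at $\lambda = 0$ this reduces to ordinary empirical risk minimization.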
