Artificial Intelligence Asked on November 20, 2021
XOR data, without labels:
[[0,0],[0,1],[1,0],[1,1]]
I’m using this network for auto-classifying XOR data:
H1 <-- Dense(units=2, activation=relu) #any activation here
Z <-- Dense(units=2, activation=softmax) #softmax for 2 classes of XOR result
Out <-- Dense(units=2, activation=sigmoid) #sigmoid to return 2 values in (0,1)
There's a logical problem in this network: Z represents 2 classes, but those 2 classes can't be decoded back into the 4 samples of XOR data.
How can the network above be fixed so that it auto-classifies the XOR data in an unsupervised manner?
This cannot be done, except accidentally.
Unsupervised learning cannot replace or emulate supervised learning.
As a thought experiment, consider why you would expect the network to discover XOR: looking only at outputs rounded to binary, it could equally well find AND, OR, NAND, NOR or any of the 16 possible mapping functions from two binary inputs to one binary output. All of these maps are equally valid functions, and there is no reason a discovered mapping should settle on any one of them by preference.
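To make the count concrete: each of the 4 possible input pairs can independently map to 0 or 1, giving $2^4 = 16$ truth tables, of which XOR is just one. A quick enumeration:

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))  # the 4 input pairs: (0,0), (0,1), (1,0), (1,1)

# A binary function over these inputs is one output bit per input pair,
# so there are 2**4 = 16 possible functions in total.
functions = list(product([0, 1], repeat=len(inputs)))
print(len(functions))  # 16

# XOR is just one entry in that list:
xor_table = tuple(a ^ b for a, b in inputs)
print(xor_table)              # (0, 1, 1, 0)
print(xor_table in functions) # True
```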
Unsupervised learning approaches typically find patterns that optimise some measure across the dataset without using labelled data. Clustering is a classic example, and auto-encoding is sometimes considered unsupervised because there is no separate label (although the term self-supervised is also used because, while there is still technically a label used in training, it happens to equal the input).
You cannot use auto-encoding approaches here anyway, because XOR needs to map $\{0,1\} \times \{0,1\} \rightarrow \{0,1\}$, whereas an auto-encoder only learns to reconstruct its input.
You could potentially use a loss function based on how close to a 0 or 1 any output is. That should cause the network to converge to one of the 16 possible binary functions, based on random initialisation. For example, you could use $y(1-y)$ as the loss.
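As a rough sketch of that idea (not code from the answer): the snippet below trains a tiny 2-2-1 NumPy network with manual backpropagation, using the per-sample loss $y(1-y)$, whose gradient $1-2y$ pushes each output away from 0.5 towards 0 or 1. The architecture, seed, learning rate and iteration count are all arbitrary illustrative choices; which of the 16 binary functions it lands on depends on the random initialisation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny 2-2-1 MLP with randomly initialised weights
W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)   # hidden layer
    y = sigmoid(h @ W2 + b2)   # outputs in (0, 1)

    # per-sample loss y * (1 - y), minimised when y is 0 or 1;
    # dL/dy = 1 - 2y, chained through the sigmoid dy/dz = y(1 - y)
    dz2 = (1 - 2 * y) * y * (1 - y)
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * h * (1 - h)
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

outputs = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
print(np.round(outputs, 3))  # each value should end up near 0 or 1
```

Re-running with different seeds typically produces different binary functions, which illustrates the point above: nothing in this setup prefers XOR over the other 15 mappings.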
Answered by Neil Slater on November 20, 2021