TransWikia.com

Can CNNs detect features of different images?

Data Science Asked by Z Oscar on July 16, 2021

In lecture, we talked about “parameter sharing” as a benefit of using convolutional networks. Which of the following statements about parameter sharing in ConvNets are true? (Check all that apply.)

  1. It allows parameters learned for one task to be shared even for a different task (transfer learning).
  2. It reduces the total number of parameters, thus reducing overfitting.
  3. It allows gradient descent to set many of the parameters to zero, thus making the connections sparse.
  4. It allows a feature detector to be used in multiple locations throughout the whole input image/input volume.

Here are the correct answers:

  1. It reduces the total number of parameters, thus reducing overfitting.
  2. It allows a feature detector to be used in multiple locations throughout the whole input image/input volume.

Why isn’t the following answer also correct:

  1. It allows parameters learned for one task to be shared even for a different task (transfer learning).
    Don’t ConvNets allow parameters to be shared, detecting similar features across different images?

2 Answers

The point is that parameter sharing is not the same as reusing weights learned in one task for a different task (a.k.a. transfer learning). Transfer learning can be used with convolutional neural networks, but parameter sharing does not mean transferring knowledge from one task to another. For a detailed definition of parameter sharing, look at this documentation from Stanford University, section Parameter Sharing.

As explained on that page, you can think of parameter sharing as a tool for reducing the number of weights learned for a specific task (so forget about transferring these weights to another task; that can happen, but it is not what this concept means).
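To make the weight reduction concrete, here is a back-of-the-envelope comparison (the layer sizes are hypothetical, chosen only for illustration): a fully connected layer mapping a 32x32x3 input to a 30x30x64 output versus a 3x3 convolutional layer producing the same output, where each of the 64 filters is shared across all spatial positions.

```python
# Hypothetical layer sizes, for illustration only.
in_h, in_w, in_c = 32, 32, 3      # input volume
out_h, out_w, out_c = 30, 30, 64  # output volume (3x3 conv, stride 1, no padding)
k = 3                             # filter size

# Fully connected: every output unit gets its own weight per input unit, plus a bias.
dense_params = (in_h * in_w * in_c) * (out_h * out_w * out_c) + out_h * out_w * out_c

# Convolutional with parameter sharing: one 3x3x3 filter per output channel, plus a bias.
conv_params = (k * k * in_c) * out_c + out_c

print(dense_params)  # 177004800
print(conv_params)   # 1792
```

Roughly a factor of 100,000 fewer parameters for the same output volume, which is why sharing helps against overfitting.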

An intuitive way to reason about the concept: the parameter sharing assumption is relatively reasonable. If detecting a horizontal edge is important at some location in the image, it should intuitively be useful at other locations as well, due to the translation-invariant structure of images.
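The intuition above can be sketched in a few lines: one small horizontal-edge kernel (a toy kernel I am assuming here, not taken from the question) is slid over the whole image, so the same nine shared weights respond to the edge wherever it appears.

```python
import numpy as np

# Toy horizontal-edge kernel (an assumption for illustration): positive weights
# on top, negative on the bottom, so it fires on dark-to-bright transitions.
kernel = np.array([[ 1,  1,  1],
                   [ 0,  0,  0],
                   [-1, -1, -1]], dtype=float)

# Toy image: dark top half, bright bottom half -> one horizontal edge in the middle.
image = np.zeros((6, 6))
image[3:, :] = 1.0

def conv2d_valid(img, k):
    """Naive 'valid' cross-correlation: the SAME kernel is applied at every position."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

response = conv2d_valid(image, kernel)
print(response)  # strong (-3) responses along the edge rows, 0 elsewhere
```

Every column shows the same response along the edge, which is exactly the "one detector, many locations" point.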

Answered by German C M on July 16, 2021

Don't ConvNets allow parameters to be shared, detecting similar features across different images?

Your statement holds for the early convolutional layers, which detect low-level features, but the final layers detect more abstract features, e.g. a full object. Consequently, the last layers of a CNN trained on images of flowers will not be helpful for a dataset full of cars: their high-level features differ. In transfer learning, there are different scenarios depending on the size and the similarity of the datasets.

By the way, units in the last layers of a conv net see a wide region of the input (a large receptive field), while units in the first layers only have access to limited local regions.
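The growth of that receptive field is easy to compute. A minimal sketch, assuming a stack of stride-1 convolutions with 3x3 kernels (the kernel size and strides are assumptions for illustration): each extra layer lets a unit see one more input pixel in each direction per kernel tap.

```python
def receptive_field(num_layers, kernel_size=3, stride=1):
    """Receptive field (in input pixels) of one unit after `num_layers` convolutions."""
    rf = 1
    jump = 1  # spacing between adjacent units, measured in input pixels
    for _ in range(num_layers):
        rf += (kernel_size - 1) * jump
        jump *= stride
    return rf

print(receptive_field(1))  # 3  -- first layer: a local 3x3 patch
print(receptive_field(5))  # 11 -- deeper layers: a much wider region
```

So with stride 1 the receptive field grows by 2 pixels per 3x3 layer, which is why late layers can respond to whole objects while early layers only see edges and textures.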

Answered by Media on July 16, 2021
