This article mainly introduces two practical concepts in transfer learning: pretraining and finetuning.

This assumes you have some experience developing a CNN with PyTorch.

For example, in the forward function of your network, you might write:

```python
def forward(self, x):
    # First block: convolution followed by pooling
    h = self.conv1(x)
    r1 = self.pool1(h)
    # Second block: convolution followed by pooling
    h = self.conv2(r1)
    r2 = self.pool2(h)
    return r2
```

Pretraining means that the early layers, such as conv1 and pool1, come from a pretrained model, and their parameters are frozen. What you need to do is finetuning: you still feed x into the model, but because the frozen layers' output r1 is determined by the fixed parameters, you effectively take r1 as the input and train only the later layers' parameters.
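As a minimal sketch of how this looks in PyTorch (the layer names follow the snippet above; the channel sizes and learning rate are made-up placeholders), you can freeze the pretrained parameters with requires_grad = False and hand the optimizer only the remaining trainable ones:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Early layers, assumed to be copied from a pretrained model.
        # The channel sizes here are placeholders for illustration.
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.pool1 = nn.MaxPool2d(2)
        # Later layers, to be trained during finetuning.
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool2 = nn.MaxPool2d(2)

    def forward(self, x):
        h = self.conv1(x)
        r1 = self.pool1(h)   # r1 is fixed once conv1 is frozen
        h = self.conv2(r1)
        r2 = self.pool2(h)
        return r2

net = Net()

# Freeze the pretrained layer: no gradients are computed for it,
# so its parameters never change. (pool1 has no parameters.)
for p in net.conv1.parameters():
    p.requires_grad = False

# Give the optimizer only the parameters that still require gradients,
# i.e. the later layers being finetuned.
optimizer = torch.optim.SGD(
    (p for p in net.parameters() if p.requires_grad), lr=0.01
)
```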

Usually, the pretrained model is large. To reach the internal layers of such a network, frameworks provide helper functions; in MXNet, for instance, you can take an internal layer's output and attach a new head built with mx.symbol.FullyConnected, then finetune that head.
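In PyTorch the same idea looks like the sketch below: load a pretrained network from torchvision, freeze its parameters, and swap in a new final layer sized for your task. The choice of resnet18 and the 10-class output are assumptions for illustration only:

```python
import torch.nn as nn
from torchvision import models

# Load a large pretrained model (weights are downloaded on first use).
model = models.resnet18(pretrained=True)

# Freeze every pretrained parameter.
for p in model.parameters():
    p.requires_grad = False

# Replace the final fully connected layer with a fresh, trainable one.
# 10 is a placeholder for the number of classes in your task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Parameters of the new model.fc require gradients by default,
# so only this new head is trained during finetuning.
```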

