

The motivating idea behind InceptionNet is that we create multiple convolutional branches, each with a different kernel (also referred to as filter) size. The standard, go-to kernel size is three-by-three, but we never know if a five-by-five might be better or worse. Instead of engaging in time-consuming hyperparameter tuning, we let the model decide what the optimal kernel size is. Specifically, we give the model three options: one-by-one, three-by-three, and five-by-five kernels, and we let the model figure out how to weigh and process information from these kernels.

We start with a small `ConvBlock` helper that bundles a convolution, batch normalization, and a ReLU activation:

```python
class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, **kwargs):
        super(ConvBlock, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, **kwargs)
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)))
```

In the `InceptionBlock` below, you will see that there are indeed various branches, and that the outputs from these branches are concatenated to produce a final output in the `forward()` function.

```python
class InceptionBlock(nn.Module):
    def __init__(
        self,
        in_channels,
        out_1x1,
        red_3x3,
        out_3x3,
        red_5x5,
        out_5x5,
        out_pool,
    ):
        super(InceptionBlock, self).__init__()
        self.branch1 = ConvBlock(in_channels, out_1x1, kernel_size=1)
        self.branch2 = nn.Sequential(
            ConvBlock(in_channels, red_3x3, kernel_size=1, padding=0),
            ConvBlock(red_3x3, out_3x3, kernel_size=3, padding=1),
        )
        self.branch3 = nn.Sequential(
            ConvBlock(in_channels, red_5x5, kernel_size=1),
            ConvBlock(red_5x5, out_5x5, kernel_size=5, padding=2),
        )
        self.branch4 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, padding=1, stride=1),
            ConvBlock(in_channels, out_pool, kernel_size=1),
        )

    def forward(self, x):
        branches = (self.branch1(x), self.branch2(x), self.branch3(x), self.branch4(x))
        return torch.cat(branches, dim=1)
```

The researchers who conceived the InceptionNet architecture also decided to add auxiliary classifiers to intermediary layers of the model, to ensure that the model actually learns something useful in its earlier layers. As far as I'm aware, this was only included in InceptionV1; future versions of InceptionNet do not include auxiliary classifiers.
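For reference, here is a minimal sketch of what such an auxiliary classifier might look like. The class name `AuxClassifier` is my own; the layer sizes (average pool, 1x1 conv to 128 channels, a 1024-unit hidden layer, dropout of 0.7) follow the original GoogLeNet paper, and the input shape assumed below corresponds to the 14x14 feature maps where the aux heads are attached.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxClassifier(nn.Module):
    """Auxiliary head attached to an intermediate feature map.

    Sketch only: layer sizes follow the original GoogLeNet paper;
    the class name is illustrative, not from this post's code.
    """
    def __init__(self, in_channels, num_classes):
        super(AuxClassifier, self).__init__()
        self.pool = nn.AvgPool2d(kernel_size=5, stride=3)
        self.conv = nn.Conv2d(in_channels, 128, kernel_size=1)
        self.fc1 = nn.Linear(2048, 1024)   # 128 channels * 4 * 4 spatial
        self.dropout = nn.Dropout(p=0.7)
        self.fc2 = nn.Linear(1024, num_classes)

    def forward(self, x):
        x = self.pool(x)            # e.g. 14x14 -> 4x4
        x = F.relu(self.conv(x))
        x = torch.flatten(x, 1)     # -> (batch, 2048)
        x = F.relu(self.fc1(x))
        x = self.dropout(x)
        return self.fc2(x)

# Example: an aux head on a 512-channel, 14x14 intermediate feature map
aux = AuxClassifier(512, num_classes=10)
logits = aux(torch.randn(2, 512, 14, 14))
print(logits.shape)  # torch.Size([2, 10])
```

During training, the auxiliary losses are typically added to the main loss with a small weight, and the aux heads are discarded entirely at inference time.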

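As a quick standalone sanity check of the multi-branch idea: concatenating branch outputs along the channel dimension simply sums their channel counts, while the padding choices keep the spatial size unchanged so the concatenation is valid. The branch widths below are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 32, 28, 28)  # a single 32-channel feature map

# Four parallel branches (channel widths here are illustrative)
b1 = nn.Conv2d(32, 64, kernel_size=1)(x)                      # -> (1, 64, 28, 28)
b2 = nn.Conv2d(32, 96, kernel_size=3, padding=1)(x)           # -> (1, 96, 28, 28)
b3 = nn.Conv2d(32, 16, kernel_size=5, padding=2)(x)           # -> (1, 16, 28, 28)
pooled = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)(x)  # -> (1, 32, 28, 28)
b4 = nn.Conv2d(32, 32, kernel_size=1)(pooled)                 # -> (1, 32, 28, 28)

# Channel counts add up: 64 + 96 + 16 + 32 = 208
out = torch.cat([b1, b2, b3, b4], dim=1)
print(out.shape)  # torch.Size([1, 208, 28, 28])
```

This is exactly why every branch must preserve the height and width: `torch.cat` along `dim=1` requires all other dimensions to match.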