 

hybrid of max pooling and average pooling

While tweaking a deep convolutional net using Keras (with the TensorFlow backend) I would like to try out a hybrid between MaxPooling2D and AveragePooling2D, because both strategies seem to improve two different aspects of my objective.

I'm thinking about something like this:

    -------
    |8 | 1|
x = ---+---
    |1 | 6|
    -------

average_pooling(x)                ->   4
max_pooling(x)                    ->   8
hybrid_pooling(x, alpha_max=0.0)  ->   4
hybrid_pooling(x, alpha_max=0.25) ->   5
hybrid_pooling(x, alpha_max=0.5)  ->   6
hybrid_pooling(x, alpha_max=0.75) ->   7
hybrid_pooling(x, alpha_max=1.0)  ->   8

Or as an equation:

hybrid_pooling(x, alpha_max) =
    alpha_max * max_pooling(x) + (1 - alpha_max) * average_pooling(x)

Since it looks like such a thing is not provided off the shelf, how can it be implemented in an efficient way?
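As a quick numeric check of the equation above, here is a plain NumPy sketch for a single pooling window (`hybrid_pooling` is a hypothetical helper, not a Keras layer), reproducing the table from the example:

```python
import numpy as np

def hybrid_pooling(x, alpha_max):
    # blend max pooling and average pooling over the whole window
    return alpha_max * x.max() + (1 - alpha_max) * x.mean()

x = np.array([[8, 1],
              [1, 6]], dtype=float)

for a in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(hybrid_pooling(x, a))  # 4.0, 5.0, 6.0, 7.0, 8.0
```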

asked Aug 08 '17 by Tobias Hermann


2 Answers

I now use a different solution for combining both pooling variations.

  • give the tensor to both pooling functions
  • concatenate the results
  • use a small conv layer to learn how to combine

This approach, of course, has a higher computational cost but is also more flexible. The conv layer after the concatenation can learn to simply blend the two pooling results with an alpha, but it can also end up using different alphas for different features and, as conv layers do, combine the pooled features in completely new ways.

The code (Keras functional API) looks as follows:

import numpy as np
from tensorflow.keras.layers import Input, MaxPooling2D, Conv2D
from tensorflow.keras.layers import Concatenate, AveragePooling2D
from tensorflow.keras.models import Model

# implementation of the described custom pooling layer
def hybrid_pool_layer(pool_size=(2,2)):
    def apply(x):
        # the 1x1 conv maps the doubled channel count of the
        # concatenation back to the input's channel count
        return Conv2D(int(x.shape[-1]), (1, 1))(
            Concatenate()([
                MaxPooling2D(pool_size)(x),
                AveragePooling2D(pool_size)(x)]))
    return apply

# usage example
inputs = Input(shape=(256, 256, 3))
x = inputs
x = Conv2D(8, (3, 3))(x)
x = hybrid_pool_layer((2,2))(x)
model = Model(inputs=inputs, outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='nadam')

Surely one could also leave out the Conv2D and just return the concatenation of the two poolings, letting the next layer do the merging work. But the implementation above makes sure that the tensor resulting from this hybrid pooling has the shape one would also expect from a normal single pooling operation.
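To see why the 1×1 conv matters for shapes, here is a framework-free NumPy sketch (`pool2x2` is a hypothetical helper) showing that the raw concatenation of the two poolings doubles the channel count compared to a single pooling:

```python
import numpy as np

def pool2x2(x, fn):
    # x: (H, W, C); reduce non-overlapping 2x2 windows with fn
    h, w, c = x.shape
    r = x.reshape(h // 2, 2, w // 2, 2, c)
    return fn(fn(r, axis=1), axis=2)

x = np.random.rand(8, 8, 5)
single = pool2x2(x, np.max)
cat = np.concatenate([pool2x2(x, np.max), pool2x2(x, np.mean)], axis=-1)
print(single.shape)  # (4, 4, 5)
print(cat.shape)     # (4, 4, 10): twice the channels, until a 1x1 conv maps them back
```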

answered Oct 08 '22 by Tobias Hermann


Here is an easy implementation of alpha * average_pooling(x) + (1 - alpha) * max_pooling(x) to put inside the network:

from tensorflow.keras.layers import Lambda, Add

alpha = 0.5  # fixed blending weight

x = Conv2D(32, 3, activation='relu')(...)
a = AveragePooling2D()(x)
a = Lambda(lambda xx: xx * alpha)(a)
m = MaxPooling2D()(x)
m = Lambda(lambda xx: xx * (1 - alpha))(m)
x = Add()([a, m])

with alpha set to a fixed value in the range [0, 1]

answered Oct 08 '22 by Marco Cerliani