5.11. Supported Ops
5.11.1. Ops with Limitations
5.11.1.1. Base Limitations
Output: Requires each output channel size <= 1024.
5.11.1.2. Blueoil Customized Ops
- BinaryChannelWiseMeanScalingQuantizer: Quantization operator using binary channel-wise scaling.
- BinaryMeanScalingQuantizer: Quantization operator using binary scaling. Input tensor must have float values.
- LinearMidTreadHalfQuantizer: Quantization operator using 'linear mid tread half' quantization.
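To make these quantizers concrete, below is a minimal NumPy sketch of the forward-pass math that quantizers of this kind typically compute. The function names, default parameters, and the per-channel axis convention are assumptions made for illustration only; this is not Blueoil's actual API or implementation.

```python
import numpy as np

def linear_mid_tread_half_quantize(x, bit=2, max_value=2.0):
    # Clip activations to [0, max_value], then round to 2^bit - 1 uniform levels.
    n = 2 ** bit - 1
    x = np.clip(x, 0.0, max_value) / max_value
    return np.round(x * n) / n * max_value

def binary_mean_scaling_quantize(w):
    # Binarize weights to +/-1 and scale by the mean absolute value of the whole tensor.
    scale = np.mean(np.abs(w))
    return scale * np.where(w >= 0, 1.0, -1.0)

def binary_channel_wise_mean_scaling_quantize(w):
    # Same idea, but with one scale per output channel (assumes an HWIO kernel layout).
    scale = np.mean(np.abs(w), axis=(0, 1, 2), keepdims=True)
    return scale * np.where(w >= 0, 1.0, -1.0)

# Example: quantize a random 3x3 kernel with 32 input and 32 output channels.
w = np.random.randn(3, 3, 32, 32).astype(np.float32)
print(binary_channel_wise_mean_scaling_quantize(w).shape)  # (3, 3, 32, 32)
```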
5.11.1.3. TensorFlow Ops with Limitations
- Currently, supports only 2D. Does not support kernel depth != 1.
- Does not support concat of mixed data types (e.g., quantized values and float values). If inputs are quantized, requires each input channel size = multiple of 32.
- Supports only 2D convolution. Does not support transpose. Requires kernel size = 1x1, 3x3, or 5x5. Kernel size 5x5 is not supported by the accelerator (CPU only). Requires input channel size = multiple of 32; otherwise, zero padding is used. If the output is quantized by later operations, requires output channel size = multiple of 32; otherwise, the output channel size is unrestricted (but performance will be worse). A sketch that checks these constraints follows this list.
- Requires depth of input = multiple of block_size^2 * 32.
- scale, offset, mean, variance, and epsilon must be constants or computable from constants.
- Does not support scalar values.
- Currently, supports only 2D. Does not support kernel depth != 1.
- Supports only channel-wise paddings.
- For quantized tensors, requires output depth = (multiple of block_size^2 * 32) or (block_size^2 * {8, 16}).
- For quantized tensors, requires the number of channels of each output tensor = multiple of 32.
5.11.1.4. TensorFlow Ops without Limitations
5.11.2. Data Types
Floating point
- tf.float32: 32-bit single-precision floating-point.