:mod:`blueoil.networks.segmentation.lm_bisenet`
===============================================

.. py:module:: blueoil.networks.segmentation.lm_bisenet


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   blueoil.networks.segmentation.lm_bisenet.LMBiSeNet
   blueoil.networks.segmentation.lm_bisenet.LMBiSeNetQuantize


.. py:class:: LMBiSeNet(weight_decay_rate=0.0, auxiliary_loss_weight=0.5, use_feature_fusion=True, use_attention_refinement=True, use_tail_gap=True, *args, **kwargs)

   Bases: :class:`blueoil.networks.segmentation.base.Base`

   LM original semantic segmentation network, based on `BiSeNet <https://arxiv.org/abs/1808.00897>`_.

   Major differences from BiSeNet:

   * Apply the first convolution, then branch into the context and spatial paths.
   * Use space-to-depth (s2d) and depth-to-space (d2s) for downsampling and upsampling.
   * Use only stride-1 1x1 and 3x3 convolutions (no multi-stride or dilated convolutions), to stay within the limits of our convolution IP.
   * Use a DenseNet block in the context path.
   * All convolution output channel counts are smaller than in BiSeNet, for faster inference.
   * Apply the attention refinement module (taken from BiSeNet) only after the last 1/32 layer of the context path (BiSeNet applies it at both 1/16 and 1/32).
   * Use relu activation instead of sigmoid in the attention refinement and feature fusion modules.
   * In the upsampling that follows the context path, alternate d2s and 1x1 convolutions to reduce the channel size.

   .. method:: _space_to_depth(self, name, inputs=None, block_size=2)


   .. method:: _depth_to_space(self, name, inputs=None, block_size=2)


   .. method:: _batch_norm(self, inputs, training)


   .. method:: _conv_bias(self, name, inputs, filters, kernel_size)


   .. method:: _spatial(self, x)


   .. method:: _context(self, x)


   .. method:: _attention(self, name, x)


   .. method:: _fusion(self, sp, cx)

      Feature fusion module.


   .. method:: base(self, images, is_training, *args, **kwargs)

      Base function containing inference.

      :param images: Input images.
      :param is_training: A flag indicating whether the model is training.
      :returns: Inference result.
      :rtype: tf.Tensor


   .. method:: _cross_entropy(self, x, labels)


   .. method:: _weight_decay_loss(self)

      L2 weight decay (regularization) loss.


   .. method:: loss(self, output, labels)

      Loss.

      :param output: Tensor from inference.
      :param labels: Labels tensor.


   .. method:: summary(self, output, labels=None)

      Summary.

      :param output: Tensor from inference.
      :param labels: Labels tensor.


   .. method:: metrics(self, output, labels)

      Metrics.

      :param output: Tensor from inference.
      :param labels: Labels tensor.


   .. method:: post_process(self, output)



.. py:class:: LMBiSeNetQuantize(activation_quantizer=None, activation_quantizer_kwargs={}, weight_quantizer=None, weight_quantizer_kwargs={}, *args, **kwargs)

   Bases: :class:`blueoil.networks.segmentation.lm_bisenet.LMBiSeNet`

   The following `args` are used for inference: ``activation_quantizer``, ``activation_quantizer_kwargs``, ``weight_quantizer``, ``weight_quantizer_kwargs``.

   :param activation_quantizer: Activation quantizer. See more at `blueoil.quantizations`.
   :type activation_quantizer: callable
   :param activation_quantizer_kwargs: Kwargs for `activation_quantizer`.
   :type activation_quantizer_kwargs: dict
   :param weight_quantizer: Weight quantizer. See more at `blueoil.quantizations`.
   :type weight_quantizer: callable
   :param weight_quantizer_kwargs: Kwargs for `weight_quantizer`.
   :type weight_quantizer_kwargs: dict

   .. method:: _quantized_variable_getter(getter, name, weight_quantization=None, quantize_first_convolution=False, *args, **kwargs)
      :staticmethod:

      Get the quantized variables.

      Use this to choose whether a target variable should be quantized or skipped.

      :param getter: Default from tensorflow.
      :param name: Default from tensorflow.
      :param weight_quantization: Callable object which quantizes a variable.
      :param args: Args.
      :param kwargs: Kwargs.


   .. method:: base(self, images, is_training)

      Base function containing inference.

      :param images: Input images.
      :param is_training: A flag indicating whether the model is training.
      :returns: Inference result.
      :rtype: tf.Tensor
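
The ``_space_to_depth`` and ``_depth_to_space`` helpers above replace strided downsampling and upsampling in this network. The rearrangement they perform can be sketched in NumPy for NHWC tensors (an illustrative sketch only; the actual methods wrap TensorFlow ops and take ``name``/``inputs`` arguments):

```python
import numpy as np

def space_to_depth(x, block_size=2):
    """Move each block_size x block_size spatial block into channels:
    (N, H, W, C) -> (N, H/b, W/b, C*b*b)."""
    n, h, w, c = x.shape
    b = block_size
    x = x.reshape(n, h // b, b, w // b, b, c)
    x = x.transpose(0, 1, 3, 2, 4, 5)  # gather each block's rows/cols last
    return x.reshape(n, h // b, w // b, c * b * b)

def depth_to_space(x, block_size=2):
    """Inverse rearrangement: (N, H, W, C) -> (N, H*b, W*b, C/(b*b))."""
    n, h, w, c = x.shape
    b = block_size
    x = x.reshape(n, h, w, b, b, c // (b * b))
    x = x.transpose(0, 1, 3, 2, 4, 5)  # interleave block rows/cols back
    return x.reshape(n, h * b, w * b, c // (b * b))

x = np.arange(2 * 8 * 8 * 3, dtype=np.float32).reshape(2, 8, 8, 3)
y = space_to_depth(x)  # shape (2, 4, 4, 12): spatial size halved, channels x4
assert np.array_equal(depth_to_space(y), x)  # d2s exactly inverts s2d
```

Because both operations are pure data movement, the network can trade spatial resolution for channel depth (and back) without any multi-stride or dilated convolutions.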