# 8.1.1.2.1.1.1.7. blueoil.converter.core.optimizer

Module of optimization passes.

## 8.1.1.2.1.1.1.7.1. Module Contents

### 8.1.1.2.1.1.1.7.1.1. Functions

| Function | Summary |
| --- | --- |
| `pass_remove_identities(graph: Graph) → None` | Removes the nodes of a Graph that satisfy the condition `node.op_type() == Identity`. |
| `pass_transpose(graph: Graph) → None` | Changes the data order of every node to be NHWC. |
| `pass_constant_folding(graph: Graph) → None` | Given a node N, if the value of each input of N is known at compilation time, then N will be executed. |
| `pass_propagate_quantization_details_into_conv(graph: Graph) → None` | Given a node N, propagates information about quantization into the convolution nodes. |
| `pass_compute_thresholds(graph: Graph) → None` | Given a Quantizer node Q, compresses quantized convolution paths into per-channel lookup tables. |
| `pass_pack_weights(graph: Graph) → None` | Given a quantized convolution node C, packs the weights of C into 32-bit words. |
| `pass_quantize_convolutions(graph: Graph) → None` | Given a convolution node C with proper quantization details, marks C as quantized and assigns the correct output data types. |
| `pass_propagate_datatypes(graph) → None` | Further propagates output data types. |
| `pass_propagate_format(graph) → None` | Further propagates output data formats. |
| `pass_insert_cast(graph: Graph) → None` | Inserts a Cast operator if needed. |
| `pass_lookup(graph: Graph, config: Config) → None` | Lookup. |
| `pass_simplify_batchnorm(graph: Graph) → None` | Simplifies the BatchNorm operator. |
blueoil.converter.core.optimizer.pass_remove_identities(graph: Graph) → None

Removes the nodes of a Graph that satisfy the condition `node.op_type() == Identity`.

Parameters

graph (Graph) – The input graph. It will be modified in-place.
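The idea behind this pass can be sketched as follows. This is not the actual Blueoil implementation; the minimal node/edge representation below is a hypothetical stand-in for the real `Graph` class:

```python
# Hypothetical minimal graph: `nodes` maps a node name to its op_type,
# `edges` maps each node to the list of its input (producer) nodes.
# Removing an Identity node means rewiring every consumer of the
# Identity directly to the Identity's single input, then deleting it.

def remove_identities(nodes, edges):
    identities = [n for n, op in nodes.items() if op == "Identity"]
    for ident in identities:
        source = edges[ident][0]  # an Identity node has exactly one input
        for consumer in edges:
            edges[consumer] = [source if p == ident else p for p in edges[consumer]]
        del nodes[ident]
        del edges[ident]
    return nodes, edges

nodes = {"x": "Input", "id1": "Identity", "conv": "Conv"}
edges = {"x": [], "id1": ["x"], "conv": ["id1"]}
remove_identities(nodes, edges)
# "conv" now reads directly from "x"; "id1" is gone.
```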

blueoil.converter.core.optimizer.pass_transpose(graph: Graph) → None
Changes the data order of every node to be NHWC.

The fastest-changing dimension is C. N stands for the batch size (assumed to be 1 at inference), H and W are the height and width respectively, and C stands for channels.

Parameters

graph (Graph) – The input graph. It will be modified in-place.
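As a small illustration of what an NHWC reorder does to a tensor (not this pass's code, just the underlying axis permutation, shown with NumPy):

```python
import numpy as np

# A tensor stored as NCHW (batch, channels, height, width).
nchw = np.arange(1 * 2 * 3 * 4).reshape(1, 2, 3, 4)  # N=1, C=2, H=3, W=4

# Moving to NHWC means permuting the axes to (N, H, W, C),
# which makes C the fastest-changing (innermost) dimension.
nhwc = nchw.transpose(0, 2, 3, 1)

assert nhwc.shape == (1, 3, 4, 2)
# Adjacent elements along the last axis of NHWC are adjacent channels:
assert nhwc[0, 0, 0, 0] == nchw[0, 0, 0, 0]
assert nhwc[0, 0, 0, 1] == nchw[0, 1, 0, 0]
```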

blueoil.converter.core.optimizer.pass_constant_folding(graph: Graph) → None
Given a node N, if the value of each input of N is known at compilation time then N will be executed.

The node N and its inputs will be replaced with a Constant node which holds the computed output of N.

Parameters
• graph (Graph) – The input graph. It will be modified in-place.

• processed_nodes (list) – The list of the processed nodes so far.
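A minimal sketch of constant folding, assuming a hypothetical graph representation (`name → (op_type, inputs, value)`) rather than Blueoil's real one. Nodes whose inputs are all Constants are evaluated and replaced by a Constant holding the result, iterating until a fixed point so that folding one node can enable folding its consumers:

```python
# Hypothetical evaluators for the ops we know how to fold at compile time.
OPS = {"Add": lambda a, b: a + b, "Mul": lambda a, b: a * b}

def fold_constants(graph):
    """graph: dict name -> (op_type, inputs, value); value is set for Constants."""
    changed = True
    while changed:
        changed = False
        for name, (op, inputs, _) in list(graph.items()):
            if op in OPS and all(graph[i][0] == "Constant" for i in inputs):
                result = OPS[op](*(graph[i][2] for i in inputs))
                graph[name] = ("Constant", [], result)  # replace N with a Constant
                changed = True
    return graph

g = {
    "a": ("Constant", [], 2),
    "b": ("Constant", [], 3),
    "s": ("Add", ["a", "b"], None),
    "p": ("Mul", ["s", "a"], None),  # becomes foldable once "s" is folded
}
fold_constants(g)
```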

blueoil.converter.core.optimizer.pass_propagate_quantization_details_into_conv(graph: Graph) → None

Given a node N, it will propagate information about quantization into the convolution nodes.

There are two types of nodes: those which preserve quantization (for example, Space2Depth, because it does not affect the actual values of the input data, only changes their positions) and those which destroy quantization (for example, BatchNormalization, because it involves float operations).

If there is a path in the Graph which connects a Quantizer node Q to a Conv node C and every node between Q and C preserves quantization (for example, Q -> Space2Depth -> Concat -> Conv), then the details about the quantizer Q should be propagated into the convolution node C.

This pass allows us to further process the convolution nodes later and maybe quantize these convolutions based on these quantization details. Note that a convolution node has two inputs, input data and weights. We propagate quantization details through both the input node branch and the weight node branch.

Parameters

graph (Graph) – The input graph. It will be modified in-place.
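The propagation rule can be sketched as a forward walk from the quantizer. The graph representation and op names here are illustrative stand-ins, not Blueoil's actual classes; only the preserve/destroy distinction comes from the description above:

```python
# Ops assumed (for this sketch) to preserve quantization of their input.
PRESERVES_QUANTIZATION = {"Space2Depth", "Concat", "Reshape"}

def propagate(graph, consumers, start, quant_details):
    """graph: name -> op_type; consumers: name -> downstream node names.

    Walks forward from the quantizer `start`, attaching its details to any
    Conv reached through quantization-preserving ops only.
    """
    found = {}

    def walk(node):
        for nxt in consumers.get(node, []):
            op = graph[nxt]
            if op == "Conv":
                found[nxt] = quant_details      # propagate into the Conv
            elif op in PRESERVES_QUANTIZATION:
                walk(nxt)                       # quantization survives; keep going
            # ops like BatchNormalization destroy quantization: stop here

    walk(start)
    return found

graph = {"q": "Quantizer", "s2d": "Space2Depth",
         "bn": "BatchNormalization", "conv1": "Conv", "conv2": "Conv"}
consumers = {"q": ["s2d", "bn"], "s2d": ["conv1"], "bn": ["conv2"]}

# conv1 is reached through Space2Depth (preserving); conv2 sits behind a
# BatchNormalization, so the quantization details never reach it.
result = propagate(graph, consumers, "q", {"bits": 2})
```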

blueoil.converter.core.optimizer.pass_compute_thresholds(graph: Graph) → None
Given a Quantizer node Q:
• if there is a backward path between Q and a convolution node and,

• every node N of that path satisfies the condition N.is_monotonic and,

• the convolution node C (the end of this path) is a quantized convolution

then this pass constructs a per-channel LUT which maps each possible output value of the quantized convolution node C to the corresponding output of the quantization node Q. This effectively compresses the path C -> … -> Q into a list of LUTs that can be used during inference.

Parameters

graph (Graph) – The input graph. It will be modified in-place.
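Why a monotonic path collapses into a threshold list can be illustrated with a toy example. The scale factor, 2-bit quantizer, and accumulator range below are all invented for the sketch; they are not Blueoil's actual quantizer:

```python
# If every node between the convolution output (an integer accumulator)
# and the quantizer Q is monotonic, then Q's output is a non-decreasing
# step function of the accumulator. It changes value only at a handful of
# thresholds, so the whole path reduces to a small per-channel list.

def quantize_2bit(x, max_value=2.0):
    # A toy linear 2-bit quantizer: maps [0, max_value] onto levels {0,1,2,3}.
    clipped = min(max(x, 0.0), max_value)
    return min(int(clipped / max_value * 3), 3)

def compute_thresholds(scale, acc_range):
    """Find accumulator values at which the quantized output steps up."""
    thresholds = []
    prev = quantize_2bit(acc_range[0] * scale)
    for acc in range(acc_range[0] + 1, acc_range[1] + 1):
        cur = quantize_2bit(acc * scale)
        if cur != prev:
            thresholds.append(acc)  # output level increases here
            prev = cur
    return thresholds

ths = compute_thresholds(0.1, (0, 30))
# At inference, the quantized output for an accumulator value `acc` is just
# the number of thresholds it has crossed: sum(acc >= t for t in ths).
```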

blueoil.converter.core.optimizer.pass_pack_weights(graph: Graph) → None
Given a quantized convolution node C, it will pack the weights of C into 32-bit words.

If the node Q that applies quantization to the weights of C quantizes into, for example, 1-bit values, then one 32-bit word will contain 32 weights.

Parameters

graph (Graph) – The input graph. It will be modified in-place.
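A minimal sketch of the 1-bit packing described above. The bit order (weight i goes into bit i of the word) is an assumption of this sketch, not necessarily the layout Blueoil produces:

```python
def pack_weights_1bit(weights):
    """Pack a flat list of 0/1 weights into 32-bit words, 32 weights per word.

    Bit i of each word holds the i-th weight of that word's group
    (an assumed layout, for illustration only).
    """
    words = []
    for i in range(0, len(weights), 32):
        word = 0
        for bit, w in enumerate(weights[i:i + 32]):
            word |= (w & 1) << bit
        words.append(word)
    return words

# 32 one-bit weights fit in a single 32-bit word.
bits = [1, 0, 1, 1] + [0] * 28
packed = pack_weights_1bit(bits)
```

With 2-bit weights the same idea would fit 16 weights per word instead of 32.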

blueoil.converter.core.optimizer.pass_quantize_convolutions(graph: Graph) → None
Given a convolution node C, if C has proper quantization details, this pass marks C as quantized and assigns the correct output data types to the node C and its quantizers. Note that the expected output data type at runtime is defined as QUANTIZED_NOT_PACKED.

Parameters

graph (Graph) – The input graph. It will be modified in-place.

blueoil.converter.core.optimizer.pass_propagate_datatypes(graph) → None

Further propagates output data types.

Parameters

graph (Graph) – The input graph. It will be modified in-place.

blueoil.converter.core.optimizer.pass_propagate_format(graph) → None

Further propagates output data formats.

Parameters

graph (Graph) – The input graph. It will be modified in-place.

blueoil.converter.core.optimizer.pass_insert_cast(graph: Graph) → None

Inserts a Cast operator if needed.

Parameters

graph (Graph) – The input graph. It will be modified in-place.

blueoil.converter.core.optimizer.pass_lookup(graph: Graph, config: Config) → None

Lookup.

Parameters

graph (Graph) – The input graph. It will be modified in-place.

blueoil.converter.core.optimizer.pass_simplify_batchnorm(graph: Graph) → None

Simplifies the BatchNorm operator.
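The standard simplification behind a pass like this: at inference time, BatchNorm's statistics are constants, so `y = gamma * (x - mean) / sqrt(var + eps) + beta` reduces to a per-channel `y = scale * x + shift`. A sketch of the algebra (not Blueoil's actual code):

```python
import math

def simplify_batchnorm(gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm's constants into a single scale and shift per channel."""
    scale = gamma / math.sqrt(var + eps)
    shift = beta - mean * scale
    return scale, shift

# Check the folded form against the full BatchNorm formula on one value.
gamma, beta, mean, var, eps = 2.0, 1.0, 0.5, 4.0, 1e-5
scale, shift = simplify_batchnorm(gamma, beta, mean, var, eps)

x = 3.0
full = gamma * (x - mean) / math.sqrt(var + eps) + beta
simplified = scale * x + shift
assert abs(full - simplified) < 1e-12
```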