tvm.relay.backend

Backend codegen modules for Relay.

The Python interface to the Relay reference interpreter.

class tvm.relay.backend.interpreter.Closure

A closure produced by the interpreter.

class tvm.relay.backend.interpreter.ConstructorValue(constructor, fields, types)
class tvm.relay.backend.interpreter.Executor

An abstract interface for executing Relay programs.

evaluate(expr, binds=None)

Evaluate a Relay expression on the executor.

Parameters:
  • expr (tvm.relay.Expr) – The expression to evaluate.
  • binds (Map[tvm.relay.Var, tvm.relay.Expr]) – Additional bindings of free variables.
Returns:

val – The evaluation result.

Return type:

Union[function, Value]

class tvm.relay.backend.interpreter.Interpreter(mod, ctx, target)

Simple interpreter interface.

Parameters:
  • mod (tvm.relay.Module) – The module to support the execution.
  • ctx (tvm.TVMContext) – The runtime context to run the code on.
  • target (tvm.Target) – The target option to build the function.
optimize(expr)

Optimize an expr.

Parameters: expr (Expr) – The expression to be optimized.
Returns: opt_expr – The optimized expression.
Return type: Expr
class tvm.relay.backend.interpreter.RefValue(value)
class tvm.relay.backend.interpreter.TensorValue(data)

A Tensor value produced by the interpreter.

asnumpy()

Convert a Relay TensorValue into a numpy.ndarray.

class tvm.relay.backend.interpreter.TupleValue(*fields)

A tuple value produced by the interpreter.

class tvm.relay.backend.interpreter.Value

Base class of all values.

Backend code generation engine.

class tvm.relay.backend.compile_engine.CCacheKey(source_func, target)

Key in the CompileEngine.

Parameters:
  • source_func (tvm.relay.Function) – The source function.
  • target (tvm.Target) – The target we want to run the function on.
class tvm.relay.backend.compile_engine.CCacheValue

Value in the CompileEngine, including usage statistics.

class tvm.relay.backend.compile_engine.CachedFunc

Low-level tensor function to back a relay primitive function.

class tvm.relay.backend.compile_engine.CompileEngine

CompileEngine to get lowered code.

clear()

Clear the existing cached functions.

dump()

Return a string representation of engine dump.

Returns: dump – The dumped string representation.
Return type: str
items()

List items in the cache.

Returns: item_list – The list of items.
Return type: List[Tuple[CCacheKey, CCacheValue]]
jit(source_func, target=None)

JIT a source_func to a tvm.Function.

Parameters:
  • source_func (Union[tvm.relay.Function, CCacheKey]) – The source relay function.
  • target (tvm.Target) – The target platform.
Returns:

cached_func – The result of lowering.

Return type:

CachedFunc

lower(source_func, target=None)

Lower a source_func to a CachedFunc.

Parameters:
  • source_func (Union[tvm.relay.Function, CCacheKey]) – The source relay function.
  • target (tvm.Target) – The target platform.
Returns:

cached_func – The result of lowering.

Return type:

CachedFunc

tvm.relay.backend.compile_engine.get()

Get the global compile engine.

Returns: engine – The compile engine.
Return type: tvm.relay.backend.CompileEngine

A compiler from a Relay expression to TVM’s graph runtime.

The compiler is built from a few pieces.

First we define a compiler from a single Relay expression to the graph language. We require the expression to be a function. The function's parameters correspond to the placeholders/inputs and model parameters found in the computation graph representation. The body of the function represents the computation graph.

The compiler’s output is a program in the graph language, which is composed of Node, NodeRef, InputNode, and OpNode. This “little language” represents programs in TVM’s graph format.

To connect to the graph runtime, we use a printer that converts our graph format into TVM’s JSON format. The resulting string can be loaded by contrib.graph_runtime or any other TVM-runtime-compatible system.

class tvm.relay.backend.graph_runtime_codegen.GraphRuntimeCodegen(mod, target)

The compiler from Relay to the TVM runtime system.

add_node(node, expr)

Add a node to the graph.

Parameters:
  • node (Node) – The node to add to the graph.
  • expr (tvm.relay.Expr) – The corresponding expression.
Returns:

node_ref – A reference to the node.

Return type:

Union[NodeRef, List[NodeRef]]

codegen(func)

Compile a single function into a graph.

Parameters: func (tvm.relay.Expr) – The function to compile.
Returns:
  • graph_json (str) – The graph json that can be consumed by runtime.
  • lowered_funcs (List[tvm.LoweredFunc] or Dict[str, List[tvm.LoweredFunc]]) – The lowered functions.
  • params (Dict[str, tvm.nd.NDArray]) – Additional constant parameters.
debug_dump_device_annotation(func)

Debug function to dump device annotation result.

debug_dump_memory_plan(func)

Debug function to dump memory plan.

visit_call(call)

Transform a tvm.relay.Call into an operator in the TVM graph.

visit_let(let)

Visit the let binding by first traversing its value, then setting the metadata on the returned NodeRef.

Finally visit the body, and return the NodeRef corresponding to it.

Parameters: let (tvm.relay.Expr) – The let binding to transform.
Returns: ref – The node reference to the body.
Return type: NodeRef
class tvm.relay.backend.graph_runtime_codegen.InputNode(name, attrs)

An input node in the TVM runtime system’s graph input.

class tvm.relay.backend.graph_runtime_codegen.Node(name, attrs)

The base class for nodes in the TVM runtime system’s graph input.

class tvm.relay.backend.graph_runtime_codegen.NodeRef(ident, index=0, version=0)

A reference to a node, used for constructing the graph.

class tvm.relay.backend.graph_runtime_codegen.OpNode(name, attrs, op_name, inputs, op_attrs, num_outputs=1)

An operator node in the TVM runtime system’s graph input.

tvm.relay.backend.graph_runtime_codegen.shape_to_json(shape)

Convert a symbolic shape to a JSON-compatible format.
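The conversion itself is straightforward; a pure-Python sketch of what such a helper does (the real implementation operates on TVM's symbolic integer dimensions, e.g. tvm.expr.IntImm, which likewise coerce via int()):

```python
def shape_to_json(shape):
    """Convert a shape to a JSON-serializable list of ints.

    Symbolic integer dimensions expose their value via int();
    plain Python ints pass through unchanged.
    """
    return [int(dim) for dim in shape]

print(shape_to_json((1, 3, 224, 224)))  # [1, 3, 224, 224]
```

JSON has no tuple or symbolic-integer type, so a plain list of ints is the natural graph-JSON encoding for tensor shapes.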