TOPI

TVM Operator Inventory.

TOPI is the operator collection library for TVM. It provides helpers for constructing compute declarations as well as optimized schedules.

Some of the schedule functions may be specially optimized for a specific workload.

List of operators

topi.identity(x)

Take identity of input x.

topi.negative(x)

Take negation of input x.

topi.floor(x)

Take floor of input x.

topi.ceil(x)

Take ceil of input x.

topi.sign(x)

Returns -1, 0, 1 based on sign of x.

topi.trunc(x)

Take the truncated value of x, element-wise.

topi.round(x)

Round elements of x to nearest integer.

topi.abs(x)

Take the absolute value of x, element-wise.

topi.exp(x)

Take exponential of input x.

topi.tanh(x)

Take hyperbolic tangent of input x.

topi.log(x)

Take logarithm of input x.

topi.sqrt(x)

Take square root of input x.

topi.rsqrt(x)

Take inverse square root of input x.

topi.sigmoid(x)

Take sigmoid of input x.

topi.clip(x, a_min, a_max)

Clip (limit) the values in an array.

topi.cast(x, dtype)

Cast input to specified data type.

topi.reinterpret(x, dtype)

Reinterpret input to specified data type.

topi.transpose(a[, axes])

Permute the dimensions of an array.

topi.flip(a[, axis])

Flip/reverse elements of an array in a particular axis.

topi.strided_slice(a, begin, end[, strides])

Slice of an array.

topi.expand_dims(a, axis[, num_newaxis])

Expand the shape of an array.

topi.reshape(a, newshape)

Reshape the array

topi.squeeze(a[, axis])

Remove single-dimensional entries from the shape of an array.

topi.concatenate(a_tuple[, axis])

Join a sequence of arrays along an existing axis.

topi.split(ary, indices_or_sections[, axis])

Split an array into multiple sub-arrays.

topi.take(a, indices[, axis, mode])

Take elements from an array along an axis.

topi.gather_nd(a, indices)

Gather elements from an n-dimensional array.

topi.full(shape, dtype, fill_value)

Fill tensor with fill_value

topi.full_like(x, fill_value)

Construct a tensor with the same shape as the input tensor, then fill it with fill_value.

topi.nn.relu(x)

Take relu of input x.

topi.nn.leaky_relu(x, alpha)

Take leaky relu of input x.

topi.nn.dilate(data, strides[, name])

Dilate data with zeros.

topi.nn.pool(data, kernel, stride, padding, …)

Perform pooling on height and width dimension of data.

topi.nn.global_pool(data, pool_type[, layout])

Perform global pooling on height and width dimension of data.

topi.nn.adaptive_pool(data, output_size, …)

Perform pooling on height and width dimension of data.

topi.nn.upsampling(data, scale[, layout, …])

Perform upsampling on the data.

topi.nn.softmax(x[, axis])

Perform softmax activation on the data

topi.nn.dense(data, weight[, bias, out_dtype])

Applies a linear transformation: \(Y = XW^T + b\).

topi.nn.batch_matmul(x, y)

Computes batch matrix multiplication of x and y, where x and y are 3-D tensors in batch format.

topi.nn.log_softmax(x)

Perform log softmax activation on the data

topi.nn.conv2d_nchw(Input, Filter, stride, …)

Convolution operator in NCHW layout.

topi.nn.conv2d_hwcn(Input, Filter, stride, …)

Convolution operator in HWCN layout.

topi.nn.depthwise_conv2d_nchw(Input, Filter, …)

Depthwise convolution nchw forward operator.

topi.nn.depthwise_conv2d_nhwc(Input, Filter, …)

Depthwise convolution nhwc forward operator.

topi.max(data[, axis, keepdims])

Maximum of array elements over a given axis or a list of axes

topi.sum(data[, axis, keepdims])

Sum of array elements over a given axis or a list of axes

topi.min(data[, axis, keepdims])

Minimum of array elements over a given axis or a list of axes

topi.argmax(data[, axis, keepdims])

Returns the indices of the maximum values along an axis.

topi.argmin(data[, axis, keepdims])

Returns the indices of the minimum values along an axis.

topi.prod(data[, axis, keepdims])

Product of array elements over a given axis or a list of axes

topi.broadcast_to(data, shape)

Broadcast the src to the target shape

topi.add(lhs, rhs)

Addition with auto-broadcasting

topi.subtract(lhs, rhs)

Subtraction with auto-broadcasting

topi.multiply(lhs, rhs)

Multiplication with auto-broadcasting

topi.divide(lhs, rhs)

Division with auto-broadcasting

topi.mod(lhs, rhs)

Modulus with auto-broadcasting

topi.maximum(lhs, rhs)

Take element-wise maximum of two tensors with auto-broadcasting

topi.minimum(lhs, rhs)

Take element-wise minimum of two tensors with auto-broadcasting

topi.power(lhs, rhs)

Power with auto-broadcasting

topi.greater(lhs, rhs)

Compute (lhs>rhs) with auto-broadcasting

topi.less(lhs, rhs)

Compute (lhs<rhs) with auto-broadcasting

topi.equal(lhs, rhs)

Compute (lhs==rhs) with auto-broadcasting

topi.not_equal(lhs, rhs)

Compute (lhs!=rhs) with auto-broadcasting

topi.greater_equal(lhs, rhs)

Compute (lhs>=rhs) with auto-broadcasting

topi.less_equal(lhs, rhs)

Compute (lhs<=rhs) with auto-broadcasting

topi.all(data[, axis, keepdims])

Logical AND of array elements over a given axis or a list of axes

topi.logical_and

topi.logical_or

topi.logical_not

topi.arange(start[, stop, step, dtype])

Creates a tensor with evenly spaced values within a given interval.

topi.stack(a, axis)

Join a sequence of arrays along a new axis.

topi.repeat(a, repeats, axis)

Repeats elements of an array.

topi.tile(a, reps)

Repeats the whole array multiple times.

topi.shape(array[, dtype])

Get the shape of input array

topi.ndarray_size(array[, dtype])

Get the number of elements of input array

topi.layout_transform(array, src_layout, …)

Transform the layout according to src_layout and dst_layout

topi.image.resize(data, size[, layout, …])

Perform resize operation on the data.

topi.argsort(data[, valid_count, axis, …])

Performs sorting along the given axis and returns an array of indices having the same shape as an input array that index data in sorted order.

topi.topk(data[, k, axis, ret_type, …])

Get the top k elements in an input tensor along the given axis.

topi.sequence_mask(data, valid_length[, …])

Sets all elements outside the expected length of the sequence to a constant value.

topi.one_hot(indices, on_value, off_value, …)

Returns a one-hot tensor where the locations represented by indices take value on_value and all other locations take value off_value.

List of schedules

topi.generic.schedule_conv2d_nchw(outs)

Schedule for conv2d_nchw

topi.generic.schedule_depthwise_conv2d_nchw(outs)

Schedule for depthwise_conv2d_nchw

topi.generic.schedule_reduce(outs)

Schedule for reduction

topi.generic.schedule_broadcast(outs)

Schedule for injective op.

topi.generic.schedule_injective(outs)

Schedule for injective op.

topi

topi.negative(x)

Take negation of input x.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.identity(x)

Take identity of input x.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.floor(x)

Take floor of input x.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.ceil(x)

Take ceil of input x.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.sign(x)

Returns -1, 0, 1 based on sign of x.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.trunc(x)

Take the truncated value of x, element-wise.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.round(x)

Round elements of x to nearest integer.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.abs(x)

Take the absolute value of x, element-wise.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.exp(x)

Take exponential of input x.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.tanh(x)

Take hyperbolic tangent of input x.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.log(x)

Take logarithm of input x.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.sqrt(x)

Take square root of input x.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.rsqrt(x)

Take inverse square root of input x.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.sigmoid(x)

Take sigmoid of input x.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.clip(x, a_min, a_max)

Clip (limit) the values in an array. Given an interval, values outside the interval are clipped to the interval edges.

Parameters
  • x (tvm.Tensor) – Input argument.

  • a_min (int or float) – Minimum value.

  • a_max (int or float) – Maximum value.

Returns

y – The result.

Return type

tvm.Tensor
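
Example

A minimal sketch, assuming the tvm.placeholder / tvm.build workflow shown in the argsort example below; a plain tvm.create_schedule is used here for simplicity:

import numpy as np
import tvm
import topi

x = tvm.placeholder((4,), name="x")
y = topi.clip(x, 0.0, 1.0)          # values below 0.0 become 0.0, above 1.0 become 1.0
s = tvm.create_schedule(y.op)
f = tvm.build(s, [x, y], "llvm")
ctx = tvm.cpu(0)
a = tvm.nd.array(np.array([-1.5, 0.25, 0.75, 2.0], dtype="float32"), ctx)
b = tvm.nd.array(np.zeros(4, dtype="float32"), ctx)
f(a, b)                             # b is now [0.0, 0.25, 0.75, 1.0]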

topi.cast(x, dtype)

Cast input to specified data type.

Parameters
  • x (tvm.Tensor or Expr) – Input argument.

  • dtype (str) – Data type.

Returns

y – The result.

Return type

tvm.Tensor

topi.reinterpret(x, dtype)

Reinterpret input to specified data type.

Parameters
  • x (tvm.Tensor) – Input argument.

  • dtype (str) – Data type.

Returns

y – The result.

Return type

tvm.Tensor

topi.transpose(a, axes=None)

Permute the dimensions of an array.

Parameters
  • a (tvm.Tensor) – The tensor to be transposed.

  • axes (tuple of ints, optional) – The permutation of the dimensions. By default, the dimensions are reversed.

Returns

ret

Return type

tvm.Tensor

topi.flip(a, axis=0)

Flip/reverse elements of an array in a particular axis.

Parameters
  • a (tvm.Tensor) – The tensor to be flipped.

  • axis (int, optional) – The axis along which the tensor will be reversed.

Returns

ret

Return type

tvm.Tensor

topi.strided_slice(a, begin, end, strides=None)

Slice of an array.

Parameters
  • a (tvm.Tensor) – The tensor to be sliced.

  • begin (list of int) – The indices to begin with in the slicing.

  • end (list of int) – Indices indicating the end of the slice.

  • strides (list of int, optional) – Specifies the stride values; a negative stride reverses the input tensor along that particular axis.

Returns

ret

Return type

tvm.Tensor
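
Example

A short shape-only sketch, assuming the tvm.placeholder API used elsewhere in this reference:

import tvm
import topi

a = tvm.placeholder((3, 4, 5), name="a")
# rows 1..2 of axis 0, every other element of axis 1, all of axis 2
b = topi.strided_slice(a, begin=[1, 0, 0], end=[3, 4, 5], strides=[1, 2, 1])
print(b.shape)                      # [2, 2, 5]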

topi.expand_dims(a, axis, num_newaxis=1)

Expand the shape of an array.

Parameters
  • a (tvm.Tensor) – The tensor to be expanded.

  • axis (int) – The position where the new axes are inserted.

  • num_newaxis (int, optional) – Number of new axes to be inserted at axis.

Returns

ret

Return type

tvm.Tensor

topi.reshape(a, newshape)

Reshape the array

Parameters
  • a (tvm.Tensor) – The tensor to be reshaped

  • newshape (tuple of ints) – The new shape

Returns

ret

Return type

tvm.Tensor

topi.squeeze(a, axis=None)

Remove single-dimensional entries from the shape of an array.

Parameters
  • a (tvm.Tensor) –

  • axis (None or int or tuple of ints, optional) – Selects a subset of the single-dimensional entries in the shape. If an axis is selected with shape entry greater than one, an error is raised.

Returns

squeezed

Return type

tvm.Tensor

topi.concatenate(a_tuple, axis=0)

Join a sequence of arrays along an existing axis.

Parameters
  • a_tuple (tuple of tvm.Tensor) – The arrays to concatenate

  • axis (int, optional) – The axis along which the arrays will be joined. Default is 0.

Returns

ret

Return type

tvm.Tensor

topi.split(ary, indices_or_sections, axis=0)

Split an array into multiple sub-arrays.

Parameters
  • ary (tvm.Tensor) –

  • indices_or_sections (int or 1-D array) –

  • axis (int) –

Returns

ret

Return type

tuple of tvm.Tensor

topi.take(a, indices, axis=None, mode='clip')

Take elements from an array along an axis.

Parameters
  • a (tvm.Tensor) – The source array.

  • indices (tvm.Tensor) – The indices of the values to extract.

  • axis (int, optional) – The axis over which to select values. By default, the flattened input array is used.

  • mode (str, optional) – Specifies how out-of-bound indices behave: 'clip': clip to the valid range (default); 'wrap': wrap around the indices; 'fast': no clipping or wrapping (the user must make sure the indices are in bounds).

Returns

ret

Return type

tvm.Tensor
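
Example

A minimal sketch of take along an axis, assuming the tvm.placeholder / tvm.build workflow used in the argsort example below:

import numpy as np
import tvm
import topi

a = tvm.placeholder((3, 4), name="a")
indices = tvm.placeholder((2,), name="indices", dtype="int32")
b = topi.take(a, indices, axis=1)   # pick two columns from every row
s = tvm.create_schedule(b.op)
f = tvm.build(s, [a, indices, b], "llvm")
ctx = tvm.cpu(0)
a_np = np.arange(12, dtype="float32").reshape(3, 4)
idx_np = np.array([0, 3], dtype="int32")
out = tvm.nd.array(np.zeros((3, 2), dtype="float32"), ctx)
f(tvm.nd.array(a_np, ctx), tvm.nd.array(idx_np, ctx), out)
# out is [[0, 3], [4, 7], [8, 11]]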

topi.gather_nd(a, indices)

Gather elements from an n-dimensional array.

Parameters
  • a (tvm.Tensor) – The source array.

  • indices (tvm.Tensor) – The indices of the values to extract.

Returns

ret

Return type

tvm.Tensor

topi.full(shape, dtype, fill_value)

Fill tensor with fill_value

Parameters
  • shape (tuple) – Input tensor shape.

  • dtype (str) – Data type

  • fill_value (float) – Value to be filled

Returns

y – The result.

Return type

tvm.Tensor

topi.full_like(x, fill_value)

Construct a tensor with the same shape as the input tensor, then fill it with fill_value.

Parameters
  • x (tvm.Tensor) – Input argument.

  • fill_value (float) – Value to be filled

Returns

y – The result.

Return type

tvm.Tensor

topi.all(data, axis=None, keepdims=False)

Logical AND of array elements over a given axis or a list of axes

Parameters
  • data (tvm.Tensor) – The input tvm boolean tensor

  • axis (None or int or tuple of int) – Axis or axes along which a logical AND is performed. The default, axis=None, will perform logical AND over all elements of the input array. If axis is negative it counts from the last to the first axis.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns

ret

Return type

tvm.Tensor

topi.max(data, axis=None, keepdims=False)

Maximum of array elements over a given axis or a list of axes

Parameters
  • data (tvm.Tensor) – The input tvm tensor

  • axis (None or int or tuple of int) – Axis or axes along which the max operation is performed. The default, axis=None, will find the max element from all of the elements of the input array. If axis is negative it counts from the last to the first axis.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns

ret

Return type

tvm.Tensor

topi.sum(data, axis=None, keepdims=False)

Sum of array elements over a given axis or a list of axes

Parameters
  • data (tvm.Tensor) – The input tvm tensor

  • axis (None or int or tuple of int) – Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns

ret

Return type

tvm.Tensor
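
Example

A minimal sketch of a reduction over several axes with keepdims, assuming the tvm.create_schedule / tvm.build workflow used elsewhere in this reference:

import numpy as np
import tvm
import topi

data = tvm.placeholder((2, 3, 4), name="data")
out = topi.sum(data, axis=(1, 2), keepdims=True)   # output shape (2, 1, 1)
s = tvm.create_schedule(out.op)
f = tvm.build(s, [data, out], "llvm")
ctx = tvm.cpu(0)
d = tvm.nd.array(np.ones((2, 3, 4), dtype="float32"), ctx)
o = tvm.nd.array(np.zeros((2, 1, 1), dtype="float32"), ctx)
f(d, o)                                            # every output element is 12.0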

topi.min(data, axis=None, keepdims=False)

Minimum of array elements over a given axis or a list of axes

Parameters
  • data (tvm.Tensor) – The input tvm tensor

  • axis (None or int or tuple of int) – Axis or axes along which a minimum operation is performed. The default, axis=None, will find the minimum element from all of the elements of the input array. If axis is negative it counts from the last to the first axis.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns

ret

Return type

tvm.Tensor

topi.prod(data, axis=None, keepdims=False)

Product of array elements over a given axis or a list of axes

Parameters
  • data (tvm.Tensor) – The input tvm tensor

  • axis (None or int or tuple of int) – Axis or axes along which a prod operation is performed. The default, axis=None, will get the prod element over all of the elements of the input array. If axis is negative it counts from the last to the first axis.

  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

Returns

ret

Return type

tvm.Tensor

topi.broadcast_to(data, shape)

Broadcast the src to the target shape

We follow the numpy broadcasting rules. See also https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html

Parameters
  • data (tvm.Tensor) – The input data

  • shape (list or tuple) – The target shape to be broadcasted.

Returns

ret

Return type

tvm.Tensor

topi.add(lhs, rhs)

Addition with auto-broadcasting

Parameters
  • lhs (tvm.Tensor or Expr) – The left operand

  • rhs (tvm.Tensor or Expr) – The right operand

Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.Tensor or Expr
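
Example

A minimal sketch of auto-broadcasting between a matrix and a vector, assuming the same compile-and-run workflow as in the other examples in this reference:

import numpy as np
import tvm
import topi

lhs = tvm.placeholder((2, 3), name="lhs")
rhs = tvm.placeholder((3,), name="rhs")    # broadcast along the first axis of lhs
out = topi.add(lhs, rhs)
s = tvm.create_schedule(out.op)
f = tvm.build(s, [lhs, rhs, out], "llvm")
ctx = tvm.cpu(0)
a = tvm.nd.array(np.ones((2, 3), dtype="float32"), ctx)
b = tvm.nd.array(np.arange(3, dtype="float32"), ctx)
c = tvm.nd.array(np.zeros((2, 3), dtype="float32"), ctx)
f(a, b, c)                                 # c == [[1, 2, 3], [1, 2, 3]]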

topi.subtract(lhs, rhs)

Subtraction with auto-broadcasting

Parameters
  • lhs (tvm.Tensor or Expr) – The left operand

  • rhs (tvm.Tensor or Expr) – The right operand

Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.Tensor or Expr

topi.multiply(lhs, rhs)

Multiplication with auto-broadcasting

Parameters
  • lhs (tvm.Tensor or Expr) – The left operand

  • rhs (tvm.Tensor or Expr) – The right operand

Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.Tensor or Expr

topi.divide(lhs, rhs)

Division with auto-broadcasting

Parameters
  • lhs (tvm.Tensor or Expr) – The left operand

  • rhs (tvm.Tensor or Expr) – The right operand

Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.Tensor or Expr

topi.mod(lhs, rhs)

Modulus with auto-broadcasting

Parameters
  • lhs (tvm.Tensor or Expr) – The left operand

  • rhs (tvm.Tensor or Expr) – The right operand

Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.Tensor or Expr

topi.maximum(lhs, rhs)

Take element-wise maximum of two tensors with auto-broadcasting

Parameters
  • lhs (tvm.Tensor or Expr) – The left operand

  • rhs (tvm.Tensor or Expr) – The right operand

Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.Tensor or Expr

topi.minimum(lhs, rhs)

Take element-wise minimum of two tensors with auto-broadcasting

Parameters
  • lhs (tvm.Tensor or Expr) – The left operand

  • rhs (tvm.Tensor or Expr) – The right operand

Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.Tensor or Expr

topi.power(lhs, rhs)

Power with auto-broadcasting

Parameters
  • lhs (tvm.Tensor or Expr) – The left operand

  • rhs (tvm.Tensor or Expr) – The right operand

Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.Tensor or Expr

topi.greater(lhs, rhs)

Compute (lhs>rhs) with auto-broadcasting

Parameters
  • lhs (tvm.Tensor or Expr) – The left operand

  • rhs (tvm.Tensor or Expr) – The right operand

Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.Tensor or Expr

topi.less(lhs, rhs)

Compute (lhs<rhs) with auto-broadcasting

Parameters
  • lhs (tvm.Tensor or Expr) – The left operand

  • rhs (tvm.Tensor or Expr) – The right operand

Returns

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type

tvm.Tensor or Expr

topi.arange(start, stop=None, step=1, dtype='float32')

Creates a tensor with evenly spaced values within a given interval.

Parameters
  • start (tvm.Expr, optional) – Start of interval. The interval includes this value. The default start value is 0.

  • stop (tvm.Expr) – Stop of interval. The interval does not include this value.

  • step (tvm.Expr, optional) – Spacing between values. The default step size is 1.

  • dtype (str, optional) – The target data type.

Returns

result – The resulting tensor.

Return type

tvm.Tensor
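
Example

A short shape-only sketch, assuming topi.arange follows the interval convention described above:

import tvm
import topi

r = topi.arange(0, 5, 1, dtype="int32")   # values 0, 1, 2, 3, 4
print(r.shape)                            # [5]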

topi.stack(a, axis)

Join a sequence of arrays along a new axis.

Parameters
  • a (tvm.Tensor or list of tvm.Tensor) – The tensors to be stacked.

  • axis (int, optional) – The axis in the result array along which the input arrays are stacked.

Returns

ret

Return type

tvm.Tensor

topi.repeat(a, repeats, axis)

Repeats elements of an array.

Parameters
  • a (tvm.Tensor) – The tensor to be repeated.

  • repeats (int, required) – Number of repetitions for each element

  • axis (int, optional) – The axis along which to repeat values

Returns

ret

Return type

tvm.Tensor

topi.tile(a, reps)

Repeats the whole array multiple times.

Parameters
  • a (tvm.Tensor) – The tensor to be tiled.

  • reps (tuple of ints, required) – The number of times for repeating the tensor

Returns

ret

Return type

tvm.Tensor

topi.shape(array, dtype='int32')

Get the shape of input array

Parameters
  • array (tvm.Tensor) – The source tensor.

  • dtype (str, optional) – The target data type.

Returns

result – The resulting tensor.

Return type

tvm.Tensor

topi.ndarray_size(array, dtype='int32')

Get the number of elements of input array

Parameters
  • array (tvm.Tensor) – The source tensor.

  • dtype (str, optional) – The target data type.

Returns

result – The resulting tensor.

Return type

tvm.Tensor

topi.layout_transform(array, src_layout, dst_layout)

Transform the layout according to src_layout and dst_layout

Parameters
  • array (tvm.Tensor) – The source array.

  • src_layout (str) – the source layout.

  • dst_layout (str) – the destination layout.

topi.argsort(data, valid_count=None, axis=-1, is_ascend=1, dtype='float32')

Performs sorting along the given axis and returns an array of indices having the same shape as an input array that index data in sorted order.

Parameters
  • data (tvm.Tensor) – The input tensor.

  • valid_count (tvm.Tensor, optional) – 1-D tensor with the valid number of boxes, used only for SSD.

  • axis (int, optional) – Axis along which to sort the input tensor. By default the flattened array is used.

  • is_ascend (boolean, optional) – Whether to sort in ascending or descending order.

  • dtype (string, optional) – DType of the output indices.

Returns

out – Sorted index tensor.

Return type

tvm.Tensor

Example

# An example to use argsort
import numpy as np
import tvm
import topi

dshape = (1, 5, 6)
data = tvm.placeholder(dshape, name="data")
axis = 0
is_ascend = False
out = topi.argsort(data, axis=axis, is_ascend=is_ascend)
np_data = np.random.uniform(size=dshape).astype(data.dtype)
with tvm.target.create("llvm"):
    s = topi.generic.schedule_argsort(out)
f = tvm.build(s, [data, out], "llvm")
ctx = tvm.cpu()
tvm_data = tvm.nd.array(np_data, ctx)
tvm_out = tvm.nd.array(np.zeros(dshape, dtype=out.dtype), ctx)
f(tvm_data, tvm_out)

topi.topk(data, k=1, axis=-1, ret_type='both', is_ascend=False, dtype='int64')

Get the top k elements in an input tensor along the given axis.

Parameters
  • data (tvm.Tensor) – The input tensor.

  • k (int, optional) – Number of top elements to select. Return all elements if k < 1.

  • axis (int, optional) – Axis along which to sort the input tensor.

  • ret_type (str, optional) – The return type [both, values, indices]. “both”: return both top k data and indices. “values”: return top k data only. “indices”: return top k indices only.

  • is_ascend (boolean, optional) – Whether to sort in ascending or descending order.

  • dtype (string, optional) – The data type of the indices output.

Returns

out – The computed result.

Return type

tvm.Tensor or List[tvm.Tensor]

topi.sequence_mask(data, valid_length, mask_value=0, axis=0)

Sets all elements outside the expected length of the sequence to a constant value.

This function takes an n-dimensional input array of the form [MAX_LENGTH, batch_size, …] or [batch_size, MAX_LENGTH, …] and returns an array of the same shape.

axis means the axis of the length dimension and can only be 0 or 1. If axis is 0, the data must have shape [MAX_LENGTH, batch_size, …]. Otherwise (axis=1), the data must have shape [batch_size, MAX_LENGTH, …].

valid_length gives the length of each sequence. It should be a 1-D int array of positive ints with shape [batch_size,].

Parameters
  • data (tvm.Tensor) – N-D with shape [MAX_LENGTH, batch_size, …] or [batch_size, MAX_LENGTH, …] depending on the value of axis.

  • valid_length (tvm.Tensor) – 1-D with shape [batch_size,]

  • mask_value (float, optional) – The masking value, default 0

  • axis (int, optional) – axis of the length dimension, must be 0 or 1, default 0

Returns

output – N-D with shape [MAX_LENGTH, batch_size, …] or [batch_size, MAX_LENGTH, …] depending on the value of axis.

Return type

tvm.Tensor
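
Example

A minimal sketch of masking with axis=0, assuming the compile-and-run workflow used in the argsort example above:

import numpy as np
import tvm
import topi

data = tvm.placeholder((4, 2), name="data")                 # [MAX_LENGTH, batch_size]
valid_length = tvm.placeholder((2,), name="valid_length", dtype="int32")
out = topi.sequence_mask(data, valid_length, mask_value=0.0, axis=0)
s = tvm.create_schedule(out.op)
f = tvm.build(s, [data, valid_length, out], "llvm")
ctx = tvm.cpu(0)
d = tvm.nd.array(np.ones((4, 2), dtype="float32"), ctx)
vl = tvm.nd.array(np.array([2, 3], dtype="int32"), ctx)
o = tvm.nd.array(np.zeros((4, 2), dtype="float32"), ctx)
f(d, vl, o)
# sequence 0 keeps its first 2 elements, sequence 1 its first 3; the rest are 0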

topi.one_hot(indices, on_value, off_value, depth, axis, dtype)

Returns a one-hot tensor where the locations represented by indices take value on_value and all other locations take value off_value. The output shape is <indices outer dimensions> x depth x <indices inner dimensions>.

Parameters
  • indices (tvm.Tensor) – Locations to set to on_value.

  • on_value (tvm.Tensor) – Value to fill at indices.

  • off_value (tvm.Tensor) – Value to fill at all other positions besides indices.

  • depth (int) – Depth of the one-hot dimension.

  • axis (int) – Axis to fill.

  • dtype (str) – Data type of the output tensor.

Returns

ret – The one-hot tensor.

Return type

tvm.Tensor

Examples

indices = [0, 1, 2]

one_hot(indices, on_value=1, off_value=0, depth=3) =
    [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]

topi.nn

topi.nn.relu(x)

Take relu of input x.

Parameters

x (tvm.Tensor) – Input argument.

Returns

y – The result.

Return type

tvm.Tensor

topi.nn.leaky_relu(x, alpha)

Take leaky relu of input x.

Parameters
  • x (tvm.Tensor) – Input argument.

  • alpha (float) – The slope for the small gradient when x < 0

Returns

y – The result.

Return type

tvm.Tensor

topi.nn.dilate(data, strides, name='DilatedInput')

Dilate data with zeros.

Parameters
  • data (tvm.Tensor) – n-D, can be any layout.

  • strides (list / tuple of n ints) – Dilation stride on each dimension, 1 means no dilation.

  • name (str, optional) – The name prefix for the generated operators

Returns

Output – n-D, the same layout as data.

Return type

tvm.Tensor

topi.nn.pool(data, kernel, stride, padding, pool_type, ceil_mode=False, layout='NCHW', count_include_pad=True)

Perform pooling on height and width dimension of data.

It decides the height and width dimension according to the layout string, in which ‘W’ and ‘H’ means width and height respectively. Width and height dimension cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w, NCHW16h are not. See parameter layout for more information of the layout string convention.

Parameters
  • data (tvm.Tensor) – n-D with shape of layout

  • kernel (list/tuple of two ints) – Kernel size, [kernel_height, kernel_width]

  • stride (list/tuple of two ints) – Stride size, [stride_height, stride_width]

  • padding (list/tuple of four ints) – Pad size, [pad_top, pad_left, pad_bottom, pad_right]

  • pool_type (str) – Pool type, ‘max’ or ‘avg’

  • ceil_mode (bool) – Whether to use ceil when calculating output size.

  • layout (string) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.

  • count_include_pad (bool) – Whether to include padding in the calculation when pool_type is ‘avg’

Returns

output – n-D in the same layout

Return type

tvm.Tensor
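
Example

A shape-only sketch of 2x2 max pooling in NCHW layout; the kernel, stride and padding values are illustrative:

import tvm
import topi

data = tvm.placeholder((1, 16, 32, 32), name="data")   # NCHW
out = topi.nn.pool(data, kernel=[2, 2], stride=[2, 2],
                   padding=[0, 0, 0, 0], pool_type="max",
                   layout="NCHW")
print(out.shape)                                       # [1, 16, 16, 16]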

topi.nn.global_pool(data, pool_type, layout='NCHW')

Perform global pooling on height and width dimension of data.

It decides the height and width dimension according to the layout string, in which ‘W’ and ‘H’ means width and height respectively. Width and height dimension cannot be split. For example, NCHW, NCHW16c, etc. are valid for pool, while NCHW16w, NCHW16h are not. See parameter layout for more information of the layout string convention.

Parameters
  • data (tvm.Tensor) – n-D with shape of layout

  • pool_type (str) – Pool type, ‘max’ or ‘avg’

  • layout (str) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.

Returns

output – n-D in same layout with height and width dimension size of 1. e.g., for NCHW, the output shape will be [batch, channel, 1, 1]

Return type

tvm.Tensor

topi.nn.upsampling(data, scale, layout='NCHW', method='nearest_neighbor', align_corners=False)

Perform upsampling on the data.

Nearest neighbor and bilinear upsampling are supported.

Parameters
  • data (tvm.Tensor) – 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]

  • scale (int) – Scaling factor

  • layout (string, optional) – either “NCHW” or “NHWC”

  • method ({"bilinear", "nearest_neighbor", "bicubic"}) – Method to be used for upsampling.

Returns

output – 4-D with shape [batch, channel, in_height*scale, in_width*scale] or [batch, in_height*scale, in_width*scale, channel]

Return type

tvm.Tensor

topi.nn.softmax(x, axis=-1)

Perform softmax activation on the data

Parameters
  • data (tvm.Tensor) – can be any dimension

  • axis (int) – channel axis

Returns

output – output shape is the same as input

Return type

tvm.Tensor

topi.nn.dense(data, weight, bias=None, out_dtype=None)

Applies a linear transformation: \(Y = XW^T + b\).

Parameters
  • data (tvm.Tensor) – 2-D with shape [batch, in_dim]

  • weight (tvm.Tensor) – 2-D with shape [out_dim, in_dim]

  • bias (tvm.Tensor, optional) – 1-D with shape [out_dim]

  • out_dtype (str) – The output type. This is used for mixed precision.

Returns

output – 2-D with shape [batch, out_dim]

Return type

tvm.Tensor
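
Example

A minimal sketch of declaring a dense layer, assuming the tvm.placeholder / tvm.create_schedule workflow used elsewhere in this reference:

import tvm
import topi

data = tvm.placeholder((8, 16), name="data")       # [batch, in_dim]
weight = tvm.placeholder((4, 16), name="weight")   # [out_dim, in_dim]
bias = tvm.placeholder((4,), name="bias")
out = topi.nn.dense(data, weight, bias)            # 2-D with shape [8, 4]
s = tvm.create_schedule(out.op)
f = tvm.build(s, [data, weight, bias, out], "llvm")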

topi.nn.batch_matmul(x, y)

Computes batch matrix multiplication of x and y, where x and y are 3-D tensors in batch format.

Parameters
  • x (tvm.Tensor) – 3-D with shape [batch, M, K]

  • y (tvm.Tensor) – 3-D with shape [batch, N, K]

Returns

output – 3-D with shape [batch, M, N]

Return type

tvm.Tensor

topi.nn.log_softmax(x)

Perform log softmax activation on the data

Parameters

data (tvm.Tensor) – 2-D input data

Returns

output – 2-D output with same shape

Return type

tvm.Tensor

topi.nn.conv2d_nchw(Input, Filter, stride, padding, dilation, out_dtype=None)

Convolution operator in NCHW layout.

Parameters
  • Input (tvm.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]

  • Filter (tvm.Tensor) – 4-D with shape [num_filter, in_channel, filter_height, filter_width]

  • stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

Returns

Output – 4-D with shape [batch, out_channel, out_height, out_width]

Return type

tvm.Tensor
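
Example

A shape-only sketch of a 3x3 convolution in NCHW layout; the stride, padding and dilation values are illustrative:

import tvm
import topi

Input = tvm.placeholder((1, 3, 32, 32), name="Input")    # [batch, in_channel, H, W]
Filter = tvm.placeholder((8, 3, 3, 3), name="Filter")    # [num_filter, in_channel, kH, kW]
Output = topi.nn.conv2d_nchw(Input, Filter, stride=1, padding=1, dilation=1)
print(Output.shape)                                      # [1, 8, 32, 32]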

topi.nn.conv2d_hwcn(Input, Filter, stride, padding, dilation, out_dtype=None)

Convolution operator in HWCN layout.

Parameters
  • Input (tvm.Tensor) – 4-D with shape [in_height, in_width, in_channel, batch]

  • Filter (tvm.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]

  • stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

Returns

output – 4-D with shape [out_height, out_width, out_channel, batch]

Return type

tvm.Tensor

topi.nn.depthwise_conv2d_nchw(Input, Filter, stride, padding, dilation, out_dtype=None)

Depthwise convolution nchw forward operator.

Parameters
  • Input (tvm.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]

  • Filter (tvm.Tensor) – 4-D with shape [in_channel, channel_multiplier, filter_height, filter_width]

  • stride (tuple of two ints) – The spatial stride along height and width

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

  • out_dtype (str, optional) – Output data type

Returns

Output – 4-D with shape [batch, out_channel, out_height, out_width]

Return type

tvm.Tensor

topi.nn.depthwise_conv2d_nhwc(Input, Filter, stride, padding, dilation, out_dtype=None)

Depthwise convolution nhwc forward operator.

Parameters
  • Input (tvm.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]

  • Filter (tvm.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, channel_multiplier]

  • stride (tuple of two ints) – The spatial stride along height and width

  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]

  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]

  • out_dtype (str, optional) – Output data type

Returns

Output – 4-D with shape [batch, out_height, out_width, out_channel]

Return type

tvm.Tensor

topi.image

topi.image.resize(data, size, layout='NCHW', method='bilinear', align_corners=True, out_dtype=None)

Perform resize operation on the data.

Parameters
  • data (tvm.Tensor) – 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]

  • size (tuple) – Output resolution to scale to.

  • layout (string, optional) – “NCHW”, “NHWC”, or “NCHWc”.

  • align_corners (boolean, optional) – Whether to preserve the values at the corner pixels.

  • method ({"bilinear", "nearest_neighbor", "bicubic"}) – Method to be used for resizing.

  • out_dtype (string, optional) – Type to return. If left as None, the output type is the same as the input type.

Returns

output – 4-D with shape [batch, channel, in_height*scale, in_width*scale] or [batch, in_height*scale, in_width*scale, channel] or 5-D with shape [batch, channel-major, in_height*scale, in_width*scale, channel-minor]

Return type

tvm.Tensor

topi.sparse

topi.sparse.csrmv(a, x, y=None)

The csrmv routine performs a matrix-vector operation defined as \(y := A*x + y\), where x and y are vectors and A is an m-by-k sparse matrix in the CSR format.

Parameters
  • a (tvm.contrib.sparse.CSRNDArray) – 2-D sparse matrix with shape [m, k]

  • x (tvm.Tensor) – 2-D dense matrix with shape [k, 1]

  • y (tvm.Tensor, optional) – 1-D dense vector with shape [1]

Returns

output – 2-D dense matrix with shape [m, 1]

Return type

tvm.Tensor

topi.sparse.csrmm(a, b, c=None)

The csrmm routine performs a matrix-matrix operation defined as \(C := A*B + C\), where B and C are dense matrices and A is an m-by-k sparse matrix in the CSR format.

Parameters
  • a (tvm.contrib.sparse.CSRNDArray) – 2-D sparse matrix with shape [m, k]

  • b (tvm.Tensor) – 2-D dense matrix with shape [k, n]

  • c (tvm.Tensor, optional) – 1-D dense vector with shape [n]

Returns

output – 2-D with shape [m, n]

Return type

tvm.Tensor

topi.sparse.dense(data, weight, bias=None)

Applies a linear transformation: \(Y = XW^T + b\). Either data or weight should be tvm.contrib.sparse.CSRNDArray.

Parameters
  • data (tvm.contrib.sparse.CSRNDArray or tvm.Tensor) – 2-D with shape [batch, in_dim]

  • weight (tvm.Tensor or tvm.contrib.sparse.CSRNDArray) – 2-D with shape [out_dim, in_dim]

  • bias (tvm.Tensor, optional) – 1-D with shape [out_dim]

Returns

output – 2-D with shape [batch, out_dim]

Return type

tvm.Tensor

topi.generic

Generic declaration and schedules.

This is a recommended way of using the TOPI API. To use the generic schedule functions, the user must set the current target scope using a with block. See also tvm.target.

Example

# create schedule that dispatches to topi.cuda.schedule_injective
with tvm.target.create("cuda"):
    s = topi.generic.schedule_injective(outs)

topi.generic.schedule_conv2d_nchw(outs)

Schedule for conv2d_nchw

Parameters

outs (Array of Tensor) – The computation graph description of conv2d_nchw in the format of an array of tensors.

Returns

sch – The computation schedule for the op.

Return type

Schedule

topi.generic.schedule_depthwise_conv2d_nchw(outs)

Schedule for depthwise_conv2d_nchw

Parameters

outs (Array of Tensor) – The computation graph description of depthwise_conv2d_nchw in the format of an array of tensors.

Returns

sch – The computation schedule for the op.

Return type

Schedule

topi.generic.schedule_reduce(outs)

Schedule for reduction

Parameters

outs (Array of Tensor) – The computation graph description of reduce in the format of an array of tensors.

Returns

sch – The computation schedule for the op.

Return type

Schedule

topi.generic.schedule_broadcast(outs)

Schedule for injective op.

Parameters

outs (Array of Tensor) – The computation graph description of the broadcast op in the format of an array of tensors.

Returns

sch – The computation schedule for the op.

Return type

Schedule

topi.generic.schedule_injective(outs)

Schedule for injective op.

Parameters

outs (Array of Tensor) – The computation graph description of the injective op in the format of an array of tensors.

Returns

sch – The computation schedule for the op.

Return type

Schedule