TOPI

TVM Operator Inventory.

TOPI is the operator collection library for TVM. It provides syntactic sugar for constructing compute declarations, as well as optimized schedules.

Some of the schedule functions have been specially optimized for specific workloads.

List of operators

topi.identity(x) Take identity of input x.
topi.negative(x) Take negation of input x.
topi.floor(x) Take floor of input x.
topi.ceil(x) Take ceil of input x.
topi.trunc(x) Take the truncated value of x, element-wise.
topi.round(x) Round elements of x to nearest integer.
topi.abs(x) Take the absolute value of x, element-wise.
topi.exp(x) Take exponential of input x.
topi.tanh(x) Take hyperbolic tanh of input x.
topi.log(x) Take logarithm of input x.
topi.sqrt(x) Take square root of input x.
topi.sigmoid(x) Take sigmoid of input x.
topi.clip(x, a_min, a_max) Clip (limit) the values in an array.
topi.cast(x, dtype) Cast input to specified data type.
topi.transpose(a[, axes]) Permute the dimensions of an array.
topi.flip(a[, axis]) Flip/reverse elements of an array in a particular axis.
topi.strided_slice(a, begin, end[, strides]) Slice of an array.
topi.expand_dims(a, axis[, num_newaxis]) Expand the shape of an array.
topi.reshape(a, newshape) Reshape the array
topi.squeeze(a[, axis]) Remove single-dimensional entries from the shape of an array.
topi.concatenate(a_tuple[, axis]) Join a sequence of arrays along an existing axis.
topi.split(ary, indices_or_sections[, axis]) Split an array into multiple sub-arrays.
topi.take(a, indices[, axis]) Take elements from an array along an axis.
topi.gather_nd(a, indices) Gather elements from an n-dimensional array.
topi.full(shape, dtype, fill_value) Fill tensor with fill_value
topi.full_like(x, fill_value) Construct a tensor with the same shape as the input tensor, then fill it with fill_value.
topi.nn.relu(x) Take relu of input x.
topi.nn.leaky_relu(x, alpha) Take leaky relu of input x.
topi.nn.dilate(data, strides[, name]) Dilate data with zeros.
topi.nn.pool(data, kernel, stride, padding, …) Perform pooling on height and width dimension of data.
topi.nn.global_pool(data, pool_type[, layout]) Perform global pooling on height and width dimension of data.
topi.nn.upsampling(data, scale[, layout, method]) Perform upsampling on the data.
topi.nn.softmax(x[, axis]) Perform softmax activation on the data
topi.nn.log_softmax(x) Perform log softmax activation on the data
topi.nn.conv2d_nchw(Input, Filter, stride, …) Convolution operator in NCHW layout.
topi.nn.conv2d_hwcn(Input, Filter, stride, …) Convolution operator in HWCN layout.
topi.nn.depthwise_conv2d_nchw(Input, Filter, …) Depthwise convolution nchw forward operator.
topi.nn.depthwise_conv2d_nhwc(Input, Filter, …) Depthwise convolution nhwc forward operator.
topi.max(data[, axis, keepdims]) Maximum of array elements over a given axis or a list of axes
topi.sum(data[, axis, keepdims]) Sum of array elements over a given axis or a list of axes
topi.min(data[, axis, keepdims]) Minimum of array elements over a given axis or a list of axes
topi.argmax(data[, axis, keepdims]) Returns the indices of the maximum values along an axis.
topi.argmin(data[, axis, keepdims]) Returns the indices of the minimum values along an axis.
topi.prod(data[, axis, keepdims]) Product of array elements over a given axis or a list of axes
topi.broadcast_to(data, shape) Broadcast the src to the target shape
topi.add(lhs, rhs) Addition with auto-broadcasting
topi.subtract(lhs, rhs) Subtraction with auto-broadcasting
topi.multiply(lhs, rhs) Multiplication with auto-broadcasting
topi.divide(lhs, rhs) Division with auto-broadcasting
topi.mod(lhs, rhs) Modulus with auto-broadcasting
topi.maximum(lhs, rhs) Take element-wise maximum of two tensors with auto-broadcasting
topi.minimum(lhs, rhs) Take element-wise minimum of two tensors with auto-broadcasting
topi.power(lhs, rhs) Power with auto-broadcasting
topi.greater(lhs, rhs) Compute (lhs>rhs) with auto-broadcasting
topi.less(lhs, rhs) Compute (lhs<rhs) with auto-broadcasting
topi.equal(lhs, rhs) Compute (lhs==rhs) with auto-broadcasting
topi.not_equal(lhs, rhs) Compute (lhs!=rhs) with auto-broadcasting
topi.greater_equal(lhs, rhs) Compute (lhs>=rhs) with auto-broadcasting
topi.less_equal(lhs, rhs) Compute (lhs<=rhs) with auto-broadcasting
topi.image.resize(data, size[, layout, …]) Perform resize operation on the data.

List of schedules

topi.generic.schedule_conv2d_nchw(outs) Schedule for conv2d_nchw
topi.generic.schedule_depthwise_conv2d_nchw(outs) Schedule for depthwise_conv2d_nchw
topi.generic.schedule_reduce(outs) Schedule for reduction
topi.generic.schedule_broadcast(outs) Schedule for injective op.
topi.generic.schedule_injective(outs) Schedule for injective op.

topi

topi.negative(x)

Take negation of input x.

Parameters:x (tvm.Tensor) – Input argument.
Returns:y – The result.
Return type:tvm.Tensor
topi.identity(x)

Take identity of input x.

Parameters:x (tvm.Tensor) – Input argument.
Returns:y – The result.
Return type:tvm.Tensor
topi.floor(x)

Take floor of input x.

Parameters:x (tvm.Tensor) – Input argument.
Returns:y – The result.
Return type:tvm.Tensor
topi.ceil(x)

Take ceil of input x.

Parameters:x (tvm.Tensor) – Input argument.
Returns:y – The result.
Return type:tvm.Tensor
topi.trunc(x)

Take the truncated value of x, element-wise.

Parameters:x (tvm.Tensor) – Input argument.
Returns:y – The result.
Return type:tvm.Tensor
topi.round(x)

Round elements of x to nearest integer.

Parameters:x (tvm.Tensor) – Input argument.
Returns:y – The result.
Return type:tvm.Tensor
topi.abs(x)

Take the absolute value of x, element-wise.

Parameters:x (tvm.Tensor) – Input argument.
Returns:y – The result.
Return type:tvm.Tensor
topi.exp(x)

Take exponential of input x.

Parameters:x (tvm.Tensor) – Input argument.
Returns:y – The result.
Return type:tvm.Tensor
topi.tanh(x)

Take hyperbolic tanh of input x.

Parameters:x (tvm.Tensor) – Input argument.
Returns:y – The result.
Return type:tvm.Tensor
topi.log(x)

Take logarithm of input x.

Parameters:x (tvm.Tensor) – Input argument.
Returns:y – The result.
Return type:tvm.Tensor
topi.sqrt(x)

Take square root of input x.

Parameters:x (tvm.Tensor) – Input argument.
Returns:y – The result.
Return type:tvm.Tensor
topi.sigmoid(x)

Take sigmoid of input x.

Parameters:x (tvm.Tensor) – Input argument.
Returns:y – The result.
Return type:tvm.Tensor
topi.clip(x, a_min, a_max)

Clip (limit) the values in an array. Given an interval, values outside the interval are clipped to the interval edges.

Parameters:
  • x (tvm.Tensor) – Input argument.
  • a_min (int or float) – Minimum value.
  • a_max (int or float) – Maximum value.
Returns:

y – The result.

Return type:

tvm.Tensor
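The clipping behaviour matches NumPy's: values below a_min become a_min, values above a_max become a_max. An illustrative NumPy analogue (not TOPI itself):

```python
import numpy as np

# Illustrative NumPy equivalent of topi.clip: values outside
# [a_min, a_max] are clamped to the interval edges.
x = np.array([-3.0, -1.0, 0.5, 2.0, 7.0])
y = np.clip(x, -1.0, 2.0)  # -> [-1., -1., 0.5, 2., 2.]
```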

topi.cast(x, dtype)

Cast input to specified data type.

Parameters:
  • x (tvm.Tensor or Expr) – Input argument.
  • dtype (str) – Data type.
Returns:

y – The result.

Return type:

tvm.Tensor

topi.transpose(a, axes=None)

Permute the dimensions of an array.

Parameters:
  • a (tvm.Tensor) – The tensor whose dimensions are to be permuted.
  • axes (tuple of ints, optional) – The new order of the dimensions. By default, the dimensions are reversed.
Returns:

ret

Return type:

tvm.Tensor

topi.flip(a, axis=0)

Flip/reverse elements of an array in a particular axis.

Parameters:
  • a (tvm.Tensor) – The tensor to be flipped.
  • axis (int, optional) – The axis along which the tensor will be reversed.
Returns:

ret

Return type:

tvm.Tensor

topi.strided_slice(a, begin, end, strides=None)

Slice of an array.

Parameters:
  • a (tvm.Tensor) – The tensor to be sliced.
  • begin (list of int) – The indices to begin with in the slicing.
  • end (list of int) – Indices indicating the end of the slice.
  • strides (list of int, optional) – Specifies the stride values; a negative stride reverses the input tensor along that axis.
Returns:

ret

Return type:

tvm.Tensor
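The begin/end/strides convention follows ordinary Python slicing, including axis reversal with a negative stride. An illustrative NumPy analogue (not TOPI itself):

```python
import numpy as np

# topi.strided_slice follows Python/NumPy slicing conventions; a
# negative stride reverses the tensor along that axis.
a = np.arange(12).reshape(3, 4)
# begin=[0, 3], end=[3, 0], strides=[1, -1]: columns 3, 2, 1 in reverse
sliced = a[0:3:1, 3:0:-1]  # shape (3, 3)
```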

topi.expand_dims(a, axis, num_newaxis=1)

Expand the shape of an array.

Parameters:
  • a (tvm.Tensor) – The tensor to be expanded.
  • axis (int) – The axis at which the input array is expanded.
  • num_newaxis (int, optional) – Number of new axes to be inserted at axis
Returns:

ret

Return type:

tvm.Tensor

topi.reshape(a, newshape)

Reshape the array

Parameters:
  • a (tvm.Tensor) – The tensor to be reshaped
  • newshape (tuple of ints) – The new shape
Returns:

ret

Return type:

tvm.Tensor

topi.squeeze(a, axis=None)

Remove single-dimensional entries from the shape of an array.

Parameters:
  • a (tvm.Tensor) –
  • axis (None or int or tuple of ints, optional) – Selects a subset of the single-dimensional entries in the shape. If an axis is selected with shape entry greater than one, an error is raised.
Returns:

squeezed

Return type:

tvm.Tensor

topi.concatenate(a_tuple, axis=0)

Join a sequence of arrays along an existing axis.

Parameters:
  • a_tuple (tuple of tvm.Tensor) – The arrays to concatenate
  • axis (int, optional) – The axis along which the arrays will be joined. Default is 0.
Returns:

ret

Return type:

tvm.Tensor

topi.split(ary, indices_or_sections, axis=0)

Split an array into multiple sub-arrays.

Parameters:
  • ary (tvm.Tensor) –
  • indices_or_sections (int or 1-D array) –
  • axis (int) –
Returns:

ret

Return type:

tuple of tvm.Tensor
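The two forms of indices_or_sections behave as in NumPy: an int divides the axis into that many equal sections, while a 1-D array gives the split points. An illustrative NumPy analogue:

```python
import numpy as np

# Illustrative NumPy analogue of topi.split: an int divides the axis
# into equal sections; a 1-D array gives the split points.
ary = np.arange(12).reshape(6, 2)
equal = np.split(ary, 3, axis=0)        # three sub-arrays of shape (2, 2)
points = np.split(ary, [1, 4], axis=0)  # shapes (1, 2), (3, 2), (2, 2)
```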

topi.take(a, indices, axis=None)

Take elements from an array along an axis.

Parameters:
  • a (tvm.Tensor) – The source array.
  • indices (tvm.Tensor) – The indices of the values to extract.
  • axis (int, optional) – The axis over which to select values. By default, the flattened input array is used.
Returns:

ret

Return type:

tvm.Tensor

topi.gather_nd(a, indices)

Gather elements from an n-dimensional array.

Parameters:
  • a (tvm.Tensor) – The source array.
  • indices (tvm.Tensor) – The indices of the values to extract.
Returns:

ret

Return type:

tvm.Tensor
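A small sketch of gather_nd semantics, assuming the convention where the leading axis of indices indexes into the leading axes of a (check the TOPI source for the exact layout):

```python
import numpy as np

# Sketch of gather_nd semantics (assumed convention: the leading axis
# of `indices` indexes into the leading axes of `a`).
a = np.array([[1, 2], [3, 4]])
indices = np.array([[0, 1],   # row index of each gathered element
                    [1, 0]])  # column index of each gathered element
out = a[indices[0], indices[1]]  # picks a[0, 1] and a[1, 0] -> [2, 3]
```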

topi.full(shape, dtype, fill_value)

Fill tensor with fill_value

Parameters:
  • shape (tuple) – Input tensor shape.
  • dtype (str) – Data type
  • fill_value (float) – Value to be filled
Returns:

y – The result.

Return type:

tvm.Tensor

topi.full_like(x, fill_value)

Construct a tensor with the same shape as the input tensor, then fill it with fill_value.

Parameters:
  • x (tvm.Tensor) – Input argument.
  • fill_value (float) – Value to be filled
Returns:

y – The result.

Return type:

tvm.Tensor

topi.max(data, axis=None, keepdims=False)

Maximum of array elements over a given axis or a list of axes

Parameters:
  • data (tvm.Tensor) – The input tvm tensor
  • axis (None or int or tuple of int) – Axis or axes along which the max operation is performed. The default, axis=None, will find the max element from all of the elements of the input array. If axis is negative it counts from the last to the first axis.
  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
Returns:

ret

Return type:

tvm.Tensor

topi.sum(data, axis=None, keepdims=False)

Sum of array elements over a given axis or a list of axes

Parameters:
  • data (tvm.Tensor) – The input tvm tensor
  • axis (None or int or tuple of int) – Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis.
  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
Returns:

ret

Return type:

tvm.Tensor

topi.min(data, axis=None, keepdims=False)

Minimum of array elements over a given axis or a list of axes

Parameters:
  • data (tvm.Tensor) – The input tvm tensor
  • axis (None or int or tuple of int) – Axis or axes along which a minimum operation is performed. The default, axis=None, will find the minimum element from all of the elements of the input array. If axis is negative it counts from the last to the first axis.
  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
Returns:

ret

Return type:

tvm.Tensor

topi.prod(data, axis=None, keepdims=False)

Product of array elements over a given axis or a list of axes

Parameters:
  • data (tvm.Tensor) – The input tvm tensor
  • axis (None or int or tuple of int) – Axis or axes along which a prod operation is performed. The default, axis=None, will get the prod element over all of the elements of the input array. If axis is negative it counts from the last to the first axis.
  • keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
Returns:

ret

Return type:

tvm.Tensor
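The keepdims flag shared by the reduction operators above keeps the reduced axes with size one, so the result broadcasts correctly against the input. An illustrative NumPy analogue:

```python
import numpy as np

# keepdims=True keeps the reduced axis with size one, so the result
# broadcasts against the input without any reshaping.
data = np.arange(6, dtype=float).reshape(2, 3)
s = np.sum(data, axis=1, keepdims=True)  # shape (2, 1), not (2,)
normalized = data / s                    # broadcasts cleanly
```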

topi.broadcast_to(data, shape)

Broadcast the src to the target shape

We follow the NumPy broadcasting rules. See also https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html

Parameters:
  • data (tvm.Tensor) – The input data
  • shape (list or tuple) – The target shape to be broadcasted.
Returns:

ret

Return type:

tvm.Tensor
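Under the NumPy rule referenced above, size-1 dimensions are stretched to match the target shape. An illustrative NumPy analogue:

```python
import numpy as np

# NumPy analogue of topi.broadcast_to: size-1 dimensions are
# stretched to match the target shape.
data = np.array([[1.0], [2.0]])      # shape (2, 1)
out = np.broadcast_to(data, (2, 3))  # shape (2, 3)
```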

topi.add(lhs, rhs)

Addition with auto-broadcasting

Parameters:
  • lhs (tvm.Tensor or Expr) – The left operand
  • rhs (tvm.Tensor or Expr) – The right operand
Returns:

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type:

tvm.Tensor or Expr

topi.subtract(lhs, rhs)

Subtraction with auto-broadcasting

Parameters:
  • lhs (tvm.Tensor or Expr) – The left operand
  • rhs (tvm.Tensor or Expr) – The right operand
Returns:

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type:

tvm.Tensor or Expr

topi.multiply(lhs, rhs)

Multiplication with auto-broadcasting

Parameters:
  • lhs (tvm.Tensor or Expr) – The left operand
  • rhs (tvm.Tensor or Expr) – The right operand
Returns:

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type:

tvm.Tensor or Expr

topi.divide(lhs, rhs)

Division with auto-broadcasting

Parameters:
  • lhs (tvm.Tensor or Expr) – The left operand
  • rhs (tvm.Tensor or Expr) – The right operand
Returns:

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type:

tvm.Tensor or Expr

topi.mod(lhs, rhs)

Modulus with auto-broadcasting

Parameters:
  • lhs (tvm.Tensor or Expr) – The left operand
  • rhs (tvm.Tensor or Expr) – The right operand
Returns:

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type:

tvm.Tensor or Expr

topi.maximum(lhs, rhs)

Take element-wise maximum of two tensors with auto-broadcasting

Parameters:
  • lhs (tvm.Tensor or Expr) – The left operand
  • rhs (tvm.Tensor or Expr) – The right operand
Returns:

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type:

tvm.Tensor or Expr

topi.minimum(lhs, rhs)

Take element-wise minimum of two tensors with auto-broadcasting

Parameters:
  • lhs (tvm.Tensor or Expr) – The left operand
  • rhs (tvm.Tensor or Expr) – The right operand
Returns:

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type:

tvm.Tensor or Expr

topi.power(lhs, rhs)

Power with auto-broadcasting

Parameters:
  • lhs (tvm.Tensor or Expr) – The left operand
  • rhs (tvm.Tensor or Expr) – The right operand
Returns:

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type:

tvm.Tensor or Expr

topi.greater(lhs, rhs)

Compute (lhs>rhs) with auto-broadcasting

Parameters:
  • lhs (tvm.Tensor or Expr) – The left operand
  • rhs (tvm.Tensor or Expr) – The right operand
Returns:

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type:

tvm.Tensor or Expr

topi.less(lhs, rhs)

Compute (lhs<rhs) with auto-broadcasting

Parameters:
  • lhs (tvm.Tensor or Expr) – The left operand
  • rhs (tvm.Tensor or Expr) – The right operand
Returns:

ret – Returns Expr if both operands are Expr. Otherwise returns Tensor.

Return type:

tvm.Tensor or Expr
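All the binary operators above auto-broadcast their operands under the NumPy rule: aligned from the trailing end, dimensions must match or be 1. An illustrative NumPy analogue of topi.add:

```python
import numpy as np

# Auto-broadcasting as done by the binary operators: shapes are
# aligned from the right; dimensions must match or be 1.
lhs = np.ones((2, 1, 3))
rhs = np.ones((4, 1))
out = lhs + rhs  # broadcasts to shape (2, 4, 3)
```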

topi.nn

topi.nn.relu(x)

Take relu of input x.

Parameters:x (tvm.Tensor) – Input argument.
Returns:y – The result.
Return type:tvm.Tensor
topi.nn.leaky_relu(x, alpha)

Take leaky relu of input x.

Parameters:
  • x (tvm.Tensor) – Input argument.
  • alpha (float) – The slope for the small gradient when x < 0
Returns:

y – The result.

Return type:

tvm.Tensor

topi.nn.dilate(data, strides, name='DilatedInput')

Dilate data with zeros.

Parameters:
  • data (tvm.Tensor) – n-D, can be any layout.
  • strides (list / tuple of n ints) – Dilation stride on each dimension, 1 means no dilation.
  • name (str, optional) – The name prefix for the generated operators
Returns:

Output – n-D, the same layout as data.

Return type:

tvm.Tensor
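The dilation described above inserts (stride - 1) zeros between neighbouring elements along each dimension. A small NumPy sketch of this behaviour (not TOPI itself):

```python
import numpy as np

# Illustrative NumPy sketch of topi.nn.dilate: insert (stride - 1)
# zeros between neighbouring elements along each dimension.
def dilate(data, strides):
    out_shape = tuple((s - 1) * st + 1 for s, st in zip(data.shape, strides))
    out = np.zeros(out_shape, dtype=data.dtype)
    out[tuple(slice(None, None, st) for st in strides)] = data
    return out

x = np.array([[1, 2], [3, 4]])
d = dilate(x, (2, 2))
# [[1 0 2]
#  [0 0 0]
#  [3 0 4]]
```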

topi.nn.pool(data, kernel, stride, padding, pool_type, ceil_mode=False, layout='NCHW', count_include_pad=True)

Perform pooling on the height and width dimensions of data.

The height and width dimensions are determined from the layout string, in which ‘W’ and ‘H’ mean width and height respectively. The width and height dimensions cannot be split: for example, NCHW and NCHW16c are valid layouts for pool, while NCHW16w and NCHW16h are not. See the layout parameter for more information on the layout string convention.
Parameters:
  • data (tvm.Tensor) – n-D with shape of layout
  • kernel (list/tuple of two ints) – Kernel size, [kernel_height, kernel_width]
  • stride (list/tuple of two ints) – Stride size, [stride_height, stride_width]
  • padding (list/tuple of four ints) – Pad size, [pad_top, pad_left, pad_bottom, pad_right]
  • pool_type (str) – Pool type, ‘max’ or ‘avg’
  • ceil_mode (bool) – Whether to use ceil when calculating output size.
  • layout (string) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.
  • count_include_pad (bool) – Whether include padding in the calculation when pool_type is ‘avg’
Returns:

output – n-D in the same layout

Return type:

tvm.Tensor
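The effect of ceil_mode on the output size follows the usual pooling arithmetic; a sketch of that formula (exact edge handling may differ in TOPI):

```python
import math

# Sketch of the usual pooling output-size formula; ceil_mode switches
# floor for ceil when the window does not divide the padded input.
def pool_out_size(in_size, kernel, stride, pad_before, pad_after,
                  ceil_mode=False):
    padded = in_size + pad_before + pad_after
    div = (padded - kernel) / stride
    return (math.ceil(div) if ceil_mode else math.floor(div)) + 1

# 2x2 pool with stride 2 and no padding on a 7x7 feature map:
floor_size = pool_out_size(7, 2, 2, 0, 0)                 # 3
ceil_size = pool_out_size(7, 2, 2, 0, 0, ceil_mode=True)  # 4
```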

topi.nn.global_pool(data, pool_type, layout='NCHW')

Perform global pooling on the height and width dimensions of data.

The height and width dimensions are determined from the layout string, in which ‘W’ and ‘H’ mean width and height respectively. The width and height dimensions cannot be split: for example, NCHW and NCHW16c are valid layouts for pool, while NCHW16w and NCHW16h are not. See the layout parameter for more information on the layout string convention.
Parameters:
  • data (tvm.Tensor) – n-D with shape of layout
  • pool_type (str) – Pool type, ‘max’ or ‘avg’
  • layout (str) – Layout of the input data. The layout is supposed to be composed of upper cases, lower cases and numbers, where upper case indicates a dimension and the corresponding lower case with factor size indicates the split dimension. For example, NCHW16c can describe a 5-D tensor of [batch_size, channel, height, width, channel_block], in which channel_block=16 is a split of dimension channel.
Returns:

output – n-D in same layout with height and width dimension size of 1. e.g., for NCHW, the output shape will be [batch, channel, 1, 1]

Return type:

tvm.Tensor

topi.nn.upsampling(data, scale, layout='NCHW', method='NEAREST_NEIGHBOR')

Perform upsampling on the data. Nearest-neighbor and bilinear upsampling are supported.
Parameters:
  • data (tvm.Tensor) – A 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]
  • scale (int) – Scaling factor
  • layout (string, optional) – either “NCHW” or “NHWC”
  • method ({"BILINEAR", "NEAREST_NEIGHBOR"}) – Method to be used for upsampling.
Returns:

output – 4-D with shape [batch, channel, in_height*scale, in_width*scale] or [batch, in_height*scale, in_width*scale, channel]

Return type:

tvm.Tensor
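The nearest-neighbor case simply repeats each pixel scale times along height and width. An illustrative NumPy analogue for NCHW layout (not TOPI itself):

```python
import numpy as np

# Nearest-neighbor upsampling in NCHW layout: repeat each pixel
# `scale` times along the height and width axes.
data = np.arange(4).reshape(1, 1, 2, 2)
scale = 2
up = data.repeat(scale, axis=2).repeat(scale, axis=3)  # (1, 1, 4, 4)
```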

topi.nn.softmax(x, axis=-1)

Perform softmax activation on the data

Parameters:
  • x (tvm.Tensor) – Can have any number of dimensions
  • axis (int) – The axis along which softmax is performed
Returns:

output – output shape is the same as input

Return type:

tvm.Tensor
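Softmax here follows the standard definition, exp(x_i) / sum_j exp(x_j) along the chosen axis. A numerically stable NumPy sketch:

```python
import numpy as np

# Numerically stable softmax: subtracting the row max leaves the
# result unchanged but avoids overflow in exp.
def softmax(x, axis=-1):
    shifted = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(shifted)
    return e / np.sum(e, axis=axis, keepdims=True)

y = softmax(np.array([[1.0, 2.0, 3.0]]))
# each row sums to 1, and the ordering of the inputs is preserved
```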

topi.nn.log_softmax(x)

Perform log softmax activation on the data

Parameters:x (tvm.Tensor) – 2-D input data
Returns:output – 2-D output with same shape
Return type:tvm.Tensor
topi.nn.conv2d_nchw(Input, Filter, stride, padding, dilation, out_dtype=None)

Convolution operator in NCHW layout.

Parameters:
  • Input (tvm.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]
  • Filter (tvm.Tensor) – 4-D with shape [num_filter, in_channel, filter_height, filter_width]
  • stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]
  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]
  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
Returns:

Output – 4-D with shape [batch, out_channel, out_height, out_width]

Return type:

tvm.Tensor
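The output spatial sizes follow the standard convolution formula; a sketch assuming symmetric integer padding (the 'VALID'/'SAME' string forms resolve to concrete pad sizes):

```python
# Standard conv2d output-size formula with dilation: the effective
# kernel extent grows to dilation * (kernel - 1) + 1.
def conv_out_size(in_size, kernel, stride, pad, dilation=1):
    effective_kernel = dilation * (kernel - 1) + 1
    return (in_size + 2 * pad - effective_kernel) // stride + 1

# a 3x3 kernel with stride 1 and padding 1 preserves spatial size:
same = conv_out_size(32, 3, 1, 1)    # 32
strided = conv_out_size(7, 3, 2, 0)  # 3
```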

topi.nn.conv2d_hwcn(Input, Filter, stride, padding, dilation, out_dtype=None)

Convolution operator in HWCN layout.

Parameters:
  • Input (tvm.Tensor) – 4-D with shape [in_height, in_width, in_channel, batch]
  • Filter (tvm.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, num_filter]
  • stride (int or a list/tuple of two ints) – Stride size, or [stride_height, stride_width]
  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]
  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
Returns:

output – 4-D with shape [out_height, out_width, out_channel, batch]

Return type:

tvm.Tensor

topi.nn.depthwise_conv2d_nchw(Input, Filter, stride, padding, dilation, out_dtype=None)

Depthwise convolution nchw forward operator.

Parameters:
  • Input (tvm.Tensor) – 4-D with shape [batch, in_channel, in_height, in_width]
  • Filter (tvm.Tensor) – 4-D with shape [in_channel, channel_multiplier, filter_height, filter_width]
  • stride (tuple of two ints) – The spatial stride along height and width
  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]
  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
  • out_dtype (str, optional) – Output data type
Returns:

Output – 4-D with shape [batch, out_channel, out_height, out_width]

Return type:

tvm.Tensor

topi.nn.depthwise_conv2d_nhwc(Input, Filter, stride, padding, dilation, out_dtype=None)

Depthwise convolution nhwc forward operator.

Parameters:
  • Input (tvm.Tensor) – 4-D with shape [batch, in_height, in_width, in_channel]
  • Filter (tvm.Tensor) – 4-D with shape [filter_height, filter_width, in_channel, channel_multiplier]
  • stride (tuple of two ints) – The spatial stride along height and width
  • padding (int or str) – Padding size, or [‘VALID’, ‘SAME’]
  • dilation (int or a list/tuple of two ints) – dilation size, or [dilation_height, dilation_width]
  • out_dtype (str, optional) – Output data type
Returns:

Output – 4-D with shape [batch, out_height, out_width, out_channel]

Return type:

tvm.Tensor

topi.image

topi.image.resize(data, size, layout='NCHW', align_corners=False, method='BILINEAR')

Perform resize operation on the data.

Parameters:
  • data (tvm.Tensor) – A 4-D tensor with shape [batch, channel, in_height, in_width] or [batch, in_height, in_width, channel]
  • size (tuple) – The output resolution to scale to
  • layout (string, optional) – either “NCHW” or “NHWC”
  • align_corners (Boolean, optional) – To preserve the values at the corner pixels
  • method ({"BILINEAR", "NEAREST_NEIGHBOR"}) – Method to be used for resizing.
Returns:

output – 4-D with shape [batch, channel, size[0], size[1]] or [batch, size[0], size[1], channel]

Return type:

tvm.Tensor

topi.generic

Generic declaration and schedules.

This is the recommended way of using the TOPI API. To use a generic schedule function, the user must set the current target scope using a with block. See also tvm.target

Example

# create schedule that dispatches to topi.cuda.schedule_injective
with tvm.target.create("cuda"):
    s = topi.generic.schedule_injective(outs)
topi.generic.schedule_conv2d_nchw(outs)

Schedule for conv2d_nchw

Parameters:outs (Array of Tensor) – The computation graph description of conv2d_nchw in the format of an array of tensors.
Returns:sch – The computation schedule for the op.
Return type:Schedule
topi.generic.schedule_depthwise_conv2d_nchw(outs)

Schedule for depthwise_conv2d_nchw

Parameters:outs (Array of Tensor) – The computation graph description of depthwise_conv2d_nchw in the format of an array of tensors.
Returns:sch – The computation schedule for the op.
Return type:Schedule
topi.generic.schedule_reduce(outs)

Schedule for reduction

Parameters:outs (Array of Tensor) – The computation graph description of reduce in the format of an array of tensors.
Returns:sch – The computation schedule for the op.
Return type:Schedule
topi.generic.schedule_broadcast(outs)

Schedule for injective op.

Parameters:outs (Array of Tensor) – The computation graph description of broadcast in the format of an array of tensors.
Returns:sch – The computation schedule for the op.
Return type:Schedule
topi.generic.schedule_injective(outs)

Schedule for injective op.

Parameters:outs (Array of Tensor) – The computation graph description of injective in the format of an array of tensors.
Returns:sch – The computation schedule for the op.
Return type:Schedule