Relay Core Tensor Operators

This page lists the core tensor operator primitives pre-defined in tvm.relay. These primitives cover typical workloads in deep learning: they can represent workloads in front-end frameworks and provide basic building blocks for optimization. Since deep learning is a fast-evolving field, it is possible to encounter operators that are not listed here.

Note

This document lists the function signatures of these operators in the Python frontend.

Overview of Operators

Level 1: Basic Operators

This level enables fully connected multi-layer perceptrons.

tvm.relay.log Compute elementwise log of data.
tvm.relay.sqrt Compute elementwise sqrt of data.
tvm.relay.exp Compute elementwise exp of data.
tvm.relay.sigmoid Compute elementwise sigmoid of data.
tvm.relay.add Addition with numpy-style broadcasting.
tvm.relay.expand_dims Insert num_newaxis axes at the position given by axis.
tvm.relay.concatenate Concatenate the input tensors along the given axis.
tvm.relay.nn.softmax Computes softmax.
tvm.relay.nn.log_softmax Computes log softmax.
tvm.relay.subtract Subtraction with numpy-style broadcasting.
tvm.relay.multiply Multiplication with numpy-style broadcasting.
tvm.relay.divide Division with numpy-style broadcasting.
tvm.relay.mod Mod with numpy-style broadcasting.
tvm.relay.tanh Compute element-wise tanh of data.
tvm.relay.nn.relu Rectified linear unit.
tvm.relay.nn.dropout Applies the dropout operation to the input array.
tvm.relay.nn.batch_norm Batch normalization layer (Ioffe and Szegedy, 2014).

Level 2: Convolutions

This level enables typical convnet models.

tvm.relay.nn.conv2d 2D convolution.
tvm.relay.nn.conv2d_transpose Two dimensional transposed convolution operator.
tvm.relay.nn.dense Dense operator.
tvm.relay.nn.max_pool2d 2D maximum pooling operator.
tvm.relay.nn.avg_pool2d 2D average pooling operator.
tvm.relay.nn.global_max_pool2d 2D global maximum pooling operator.
tvm.relay.nn.global_avg_pool2d 2D global average pooling operator.
tvm.relay.nn.upsampling Upsampling.
tvm.relay.nn.batch_flatten BatchFlatten.
tvm.relay.nn.pad Padding operator.
tvm.relay.nn.lrn This operator takes data as input and does local response normalization.
tvm.relay.nn.l2_normalize Perform L2 normalization on the input data.

Level 3: Additional Math And Transform Operators

This level enables additional math and transform operators.

tvm.relay.zeros Fill array with zeros.
tvm.relay.nn.leaky_relu This operator takes data as input and applies the leaky version of a rectified linear unit.
tvm.relay.zeros_like Returns an array of zeros, with same type and shape as the input.
tvm.relay.ones Fill array with ones.
tvm.relay.ones_like Returns an array of ones, with same type and shape as the input.
tvm.relay.reshape Reshapes the input array.
tvm.relay.copy Copy a tensor.
tvm.relay.transpose Permutes the dimensions of an array.
tvm.relay.floor Compute element-wise floor of data.
tvm.relay.ceil Compute element-wise ceil of data.
tvm.relay.trunc Compute element-wise trunc of data.
tvm.relay.round Compute element-wise round of data.
tvm.relay.abs Compute element-wise absolute of data.
tvm.relay.negative Compute element-wise negative of data.
tvm.relay.take Take elements from an array along an axis.
tvm.relay.full Fill array with scalar value.
tvm.relay.full_like Return an array filled with a scalar value, with the same shape and type as the input array.

Level 4: Broadcast and Reductions

tvm.relay.right_shift Right shift with numpy-style broadcasting.
tvm.relay.left_shift Left shift with numpy-style broadcasting.
tvm.relay.equal Broadcasted elementwise test for (lhs == rhs).
tvm.relay.not_equal Broadcasted elementwise test for (lhs != rhs).
tvm.relay.greater Broadcasted elementwise test for (lhs > rhs).
tvm.relay.greater_equal Broadcasted elementwise test for (lhs >= rhs).
tvm.relay.less Broadcasted elementwise test for (lhs < rhs).
tvm.relay.less_equal Broadcasted elementwise test for (lhs <= rhs).
tvm.relay.maximum Maximum with numpy-style broadcasting.
tvm.relay.minimum Minimum with numpy-style broadcasting.
tvm.relay.pow Power with numpy-style broadcasting.
tvm.relay.where Selecting elements from either x or y depending on the value of the condition.

Level 5: Vision/Image Operators

tvm.relay.image.resize Image resize operator.

Level 1 Definitions

tvm.relay.log(data)

Compute elementwise log of data.

Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr
tvm.relay.sqrt(data)

Compute elementwise sqrt of data.

Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr
tvm.relay.exp(data)

Compute elementwise exp of data.

Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr
tvm.relay.sigmoid(data)

Compute elementwise sigmoid of data.

Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr
tvm.relay.add(lhs, rhs)

Addition with numpy-style broadcasting.

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

Examples

x = relay.var("a", shape=(2, 3))
y = relay.var("b", shape=(2, 1))
z = relay.add(x, y)  # result shape is (2, 3)
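
Since the broadcasting rule is numpy-style, the shapes in the example above can be checked directly with NumPy (an illustrative sketch, not Relay code):

```python
import numpy as np

# NumPy mirror of the Relay example: broadcasting [2, 3] + [2, 1].
a = np.arange(6).reshape(2, 3)   # shape [2, 3]
b = np.array([[10], [20]])       # shape [2, 1]
c = a + b                        # b's size-1 axis broadcasts; result shape [2, 3]
assert c.shape == (2, 3)
```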
tvm.relay.subtract(lhs, rhs)

Subtraction with numpy-style broadcasting.

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.multiply(lhs, rhs)

Multiplication with numpy-style broadcasting.

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.divide(lhs, rhs)

Division with numpy-style broadcasting.

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.mod(lhs, rhs)

Mod with numpy-style broadcasting.

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.tanh(data)

Compute element-wise tanh of data.

Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr
tvm.relay.concatenate(data, axis)

Concatenate the input tensors along the given axis.

Parameters:
  • data (Union(List[relay.Expr], Tuple[relay.Expr])) – A list of tensors.
  • axis (int) – The axis along which the tensors are concatenated.
Returns:

result – The concatenated tensor.

Return type:

relay.Expr
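
As a sketch of the semantics (using NumPy, whose concatenate behaves the same way):

```python
import numpy as np

# Concatenation joins tensors along the chosen axis; all other axes must match.
x = np.ones((2, 3))
y = np.zeros((2, 3))
along_rows = np.concatenate([x, y], axis=0)  # shape (4, 3)
along_cols = np.concatenate([x, y], axis=1)  # shape (2, 6)
```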

tvm.relay.nn.softmax(data, axis)

Computes softmax.

\[\text{softmax}(x)_i = \frac{\exp(x_i)}{\sum_j \exp(x_j)}\]

Note

This operator can be optimized away for inference.

Parameters:
  • data (relay.Expr) – The input data to the operator.
  • axis (int) – The axis to sum over when computing softmax
Returns:

result – The computed result.

Return type:

relay.Expr
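
The formula above can be sketched in NumPy; the max-subtraction is a standard numerical-stability trick and does not change the result:

```python
import numpy as np

def softmax(x, axis=-1):
    # exp(x_i) / sum_j exp(x_j), computed stably by shifting by the max.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

probs = softmax(np.array([[1.0, 2.0, 3.0]]))
assert np.allclose(probs.sum(axis=-1), 1.0)  # rows sum to one
```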

tvm.relay.nn.log_softmax(data, axis)

Computes log softmax.

\[\text{log\_softmax}(x)_i = \log \frac{\exp(x_i)}{\sum_j \exp(x_j)}\]

Note

This operator can be optimized away for inference.

Parameters:
  • data (relay.Expr) – The input data to the operator.
  • axis (int) – The axis to sum over when computing softmax
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.nn.relu(data)

Rectified linear unit.

\[out = max(x, 0)\]
Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr

Level 2 Definitions

tvm.relay.nn.conv2d(data, weight, strides=(1, 1), padding=(0, 0), dilation=(1, 1), groups=1, channels=None, kernel_size=None, data_layout='NCHW', weight_layout='OIHW', out_layout='', out_dtype='')

2D convolution.

This operator takes the weight as the convolution kernel and convolves it with data to produce an output.

In the default case, where the data_layout is NCHW and weight_layout is OIHW, conv2d takes in a data Tensor with shape (batch_size, in_channels, height, width), and a weight Tensor with shape (channels, in_channels, kernel_size[0], kernel_size[1]) to produce an output Tensor with the following rule:

\[\mbox{out}[b, c, y, x] = \sum_{dy, dx, k} \mbox{data}[b, k, \mbox{strides}[0] * y + dy, \mbox{strides}[1] * x + dx] * \mbox{weight}[c, k, dy, dx]\]

Padding and dilation are applied to data and weight respectively before the computation. This operator accepts data layout specification. Semantically, the operator will convert the layout to the canonical layout (NCHW for data and OIHW for weight), perform the computation, then convert to the out_layout.

Parameters:
  • data (relay.Expr) – The input data to the operator.
  • weight (relay.Expr) – The weight expressions.
  • strides (tuple of int, optional) – The strides of convolution.
  • padding (tuple of int, optional) – The padding of convolution on both sides of inputs before convolution.
  • dilation (tuple of int, optional) – Specifies the dilation rate to be used for dilated convolution.
  • groups (int, optional) – Number of groups for grouped convolution.
  • channels (int, optional) – Number of output channels of this convolution.
  • kernel_size (tuple of int, optional) – The spatial dimensions of the convolution kernel.
  • data_layout (str, optional) – Layout of the input.
  • weight_layout (str, optional) – Layout of the weight.
  • out_layout (str, optional) – Layout of the output, by default, out_layout is the same as data_layout
  • out_dtype (str, optional) – Specifies the output data type for mixed precision conv2d.
Returns:

result – The computed result.

Return type:

relay.Expr
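
The summation rule above can be sketched as a direct (and slow) NumPy reference; this illustrates the math only, is not the TVM kernel, and omits padding, dilation, and groups:

```python
import numpy as np

def conv2d_nchw(data, weight, strides=(1, 1)):
    """Direct NCHW/OIHW convolution per the rule above (no padding/dilation, groups=1)."""
    b, ic, h, w = data.shape
    oc, _, kh, kw = weight.shape
    sh, sw = strides
    oh, ow = (h - kh) // sh + 1, (w - kw) // sw + 1
    out = np.zeros((b, oc, oh, ow), dtype=data.dtype)
    for y in range(oh):
        for x in range(ow):
            # Window of shape (b, ic, kh, kw), contracted against every filter.
            win = data[:, :, y * sh : y * sh + kh, x * sw : x * sw + kw]
            out[:, :, y, x] = np.tensordot(win, weight, axes=([1, 2, 3], [1, 2, 3]))
    return out

out = conv2d_nchw(np.ones((1, 1, 3, 3)), np.ones((1, 1, 2, 2)))
assert out.shape == (1, 1, 2, 2)
```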

tvm.relay.nn.conv2d_transpose(data, weight, strides=(1, 1), padding=(0, 0), dilation=(1, 1), groups=1, channels=None, kernel_size=None, data_layout='NCHW', weight_layout='OIHW', output_padding=(0, 0), out_dtype='')

Two dimensional transposed convolution operator.

Parameters:
  • data (relay.Expr) – The input data to the operator.
  • weight (relay.Expr) – The weight expressions.
  • strides (Tuple[int], optional) – The strides of convolution.
  • padding (Tuple[int], optional) – The padding of convolution on both sides of inputs.
  • dilation (Tuple[int], optional) – Specifies the dilation rate to be used for dilated convolution.
  • groups (int, optional) – Number of groups for grouped convolution.
  • channels (int, optional) – Number of output channels of this convolution.
  • kernel_size (Tuple[int], optional) – The spatial dimensions of the convolution kernel.
  • data_layout (str, optional) – Layout of the input.
  • weight_layout (str, optional) – Layout of the weight.
  • output_padding (Tuple[int], optional) – Additional zero-padding to be added to one side of the output.
  • out_dtype (str, optional) – Specifies the output data type for mixed precision conv2d.
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.nn.dense(data, weight, units=None)

Dense operator. Applies a linear transformation

\[Y = X * W\]

Parameters:
  • data (relay.Expr) – The input data to the operator.
  • weight (relay.Expr) – The weight expressions.
  • units (int, optional) – Number of hidden units of the dense transformation.
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.nn.max_pool2d(data, pool_size=(1, 1), strides=(1, 1), padding=(0, 0), layout='NCHW', ceil_mode=False)

2D maximum pooling operator.

This operator takes data as input and computes the 2D max over windows of size pool_size, moving by the stride defined by strides.

In the default case, where the data_layout is NCHW, this operator takes a data Tensor with shape (batch_size, in_channels, height, width) and produces an output Tensor according to the following rule:

with data of shape (b, c, h, w) and pool_size (kh, kw)

\[\mbox{out}(b, c, y, x) = \max_{m=0, \ldots, kh-1} \max_{n=0, \ldots, kw-1} \mbox{data}(b, c, \mbox{stride}[0] * y + m, \mbox{stride}[1] * x + n)\]

Padding is applied to data before the computation. ceil_mode determines whether ceil or floor is used when computing the output shape. This operator accepts data layout specification.

Parameters:
  • data (relay.Expr) – The input data to the operator.
  • pool_size (tuple of int, optional) – The size of the pooling window.
  • strides (tuple of int, optional) – The strides of pooling.
  • padding (tuple of int, optional) – The padding for pooling.
  • layout (str, optional) – Layout of the input.
  • ceil_mode (bool, optional) – To enable or disable ceil while computing the output shape.
Returns:

result – The computed result.

Return type:

relay.Expr
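
The pooling rule above can be sketched in NumPy (no padding; illustrative only):

```python
import numpy as np

def max_pool2d_nchw(data, pool_size=(2, 2), strides=(2, 2)):
    """Reference sketch of the max-pooling rule above (NCHW, no padding)."""
    b, c, h, w = data.shape
    kh, kw = pool_size
    sh, sw = strides
    oh, ow = (h - kh) // sh + 1, (w - kw) // sw + 1
    out = np.empty((b, c, oh, ow), dtype=data.dtype)
    for y in range(oh):
        for x in range(ow):
            # Max over the (kh, kw) window starting at (y*sh, x*sw).
            window = data[:, :, y * sh : y * sh + kh, x * sw : x * sw + kw]
            out[:, :, y, x] = window.max(axis=(2, 3))
    return out

pooled = max_pool2d_nchw(np.arange(16).reshape(1, 1, 4, 4))
assert pooled.shape == (1, 1, 2, 2)
```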

tvm.relay.nn.avg_pool2d(data, pool_size=(1, 1), strides=(1, 1), padding=(0, 0), layout='NCHW', ceil_mode=False, count_include_pad=False)

2D average pooling operator.

This operator takes data as input and computes the 2D average over windows of size pool_size, moving by the stride defined by strides.

In the default case, where the data_layout is NCHW, this operator takes a data Tensor with shape (batch_size, in_channels, height, width) and produces an output Tensor according to the following rule:

with data of shape (b, c, h, w), pool_size (kh, kw)

\[\mbox{out}(b, c, y, x) = \frac{1}{kh * kw} \sum_{m=0}^{kh-1} \sum_{n=0}^{kw-1} \mbox{data}(b, c, \mbox{stride}[0] * y + m, \mbox{stride}[1] * x + n)\]

Padding is applied to data before the computation. ceil_mode determines whether ceil or floor is used when computing the output shape. count_include_pad indicates whether padded input values are included in the computation. This operator accepts data layout specification.

Parameters:
  • data (relay.Expr) – The input data to the operator.
  • pool_size (tuple of int, optional) – The size of the pooling window.
  • strides (tuple of int, optional) – The strides of pooling.
  • padding (tuple of int, optional) – The padding for pooling.
  • layout (str, optional) – Layout of the input.
  • ceil_mode (bool, optional) – To enable or disable ceil while computing the output shape.
  • count_include_pad (bool, optional) – Whether to include padding when computing the average.
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.nn.global_max_pool2d(data, layout='NCHW')

2D global maximum pooling operator.

This operator takes data as input and computes the 2D max over the entire spatial extent (height x width).

In the default case, where the data_layout is NCHW, this operator takes a data Tensor with shape (batch_size, in_channels, height, width) and produces an output Tensor according to the following rule:

with data of shape (b, c, h, w)

\[\mbox{out}(b, c, 1, 1) = \max_{m=0, \ldots, h-1} \max_{n=0, \ldots, w-1} \mbox{data}(b, c, m, n)\]
Parameters:
  • data (relay.Expr) – The input data to the operator.
  • layout (str, optional) – Layout of the input.
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.nn.global_avg_pool2d(data, layout='NCHW')

2D global average pooling operator.

This operator takes data as input and computes the 2D average over the entire spatial extent (height x width).

In the default case, where the data_layout is NCHW, this operator takes a data Tensor with shape (batch_size, in_channels, height, width) and produces an output Tensor according to the following rule:

with data of shape (b, c, h, w)

\[\mbox{out}(b, c, 1, 1) = \frac{1}{h * w} \sum_{m=0}^{h-1} \sum_{n=0}^{w-1} \mbox{data}(b, c, m, n)\]
Parameters:
  • data (relay.Expr) – The input data to the operator.
  • layout (str, optional) – Layout of the input.
Returns:

result – The computed result.

Return type:

relay.Expr
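
Both global pooling operators reduce the spatial axes to 1x1; in NumPy terms (illustrative sketch):

```python
import numpy as np

# Global pooling over NCHW reduces the spatial axes (2, 3) to 1x1.
data = np.arange(24, dtype=float).reshape(1, 2, 3, 4)
gmax = data.max(axis=(2, 3), keepdims=True)   # global max pool, shape (1, 2, 1, 1)
gavg = data.mean(axis=(2, 3), keepdims=True)  # global avg pool, shape (1, 2, 1, 1)
assert gmax.shape == gavg.shape == (1, 2, 1, 1)
```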

tvm.relay.nn.upsampling(data, scale=1, layout='NCHW', method='NEAREST_NEIGHBOR')

Upsampling.

This operator takes data as input and does 2D scaling by the given scale factor. In the default case, where the data_layout is NCHW, with data of shape (n, c, h, w), out will have a shape (n, c, h*scale, w*scale).

method indicates the algorithm used to compute the output value; it can be one of (“BILINEAR”, “NEAREST_NEIGHBOR”).

Parameters:
  • data (relay.Expr) – The input data to the operator.
  • scale (int, optional) – The scale factor for upsampling.
  • layout (str, optional) – Layout of the input.
  • method (str, optional) – Scale method to be used [NEAREST_NEIGHBOR, BILINEAR].
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.nn.batch_flatten(data)

BatchFlatten.

This operator flattens all dimensions except the batch dimension, which results in a 2D output.

For data with shape (d1, d2, ..., dk) batch_flatten(data) returns reshaped output of shape (d1, d2*...*dk).

Parameters:data (relay.Expr) – The input data to the operator.
Returns:result – The Flattened result.
Return type:relay.Expr
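
In NumPy terms, batch_flatten is a reshape that keeps the first axis (illustrative sketch):

```python
import numpy as np

# batch_flatten keeps the batch axis and collapses the rest:
# (d1, d2, ..., dk) -> (d1, d2 * ... * dk)
data = np.zeros((8, 3, 32, 32))
flat = data.reshape(data.shape[0], -1)
assert flat.shape == (8, 3 * 32 * 32)
```
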
tvm.relay.nn.lrn(data, size=5, axis=1, bias=2, alpha=1e-05, beta=0.75)

This operator takes data as input and does local response normalization.

Normalize the input in a local region across or within feature maps. Each input value is divided by \((bias + \frac{alpha \cdot \sum data^2}{size})^{beta}\), where size is the extent of each local region and the sum of squares is taken over the region centered at that value (zero padding is added where necessary).

\[out = data / \left(bias + \frac{alpha \cdot \sum data^2}{size}\right)^{beta}\]
Parameters:
  • data (relay.Expr) – The input data to the operator.
  • size (int, optional) – The size of the local region to be considered for normalization.
  • axis (int, optional) – Input data layout channel axis. Default value is 1 for NCHW format
  • bias (float, optional) – The offset parameter to avoid dividing by 0.
  • alpha (float, optional) – The scaling parameter.
  • beta (float, optional) – The exponent parameter.
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.nn.l2_normalize(data, eps, axis=None)

Perform L2 normalization on the input data

\[y(i, j) = \frac{x(i, j)}{\sqrt{\max(\sum x^2, eps)}}\]
Parameters:
  • data (relay.Expr) – The input data to the operator.
  • eps (float) – The epsilon value, used to avoid division by zero.
  • axis (list of int, optional) – The axes over which the normalization is applied.
Returns:

result – The computed result.

Return type:

relay.Expr
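
The formula above can be sketched in NumPy (illustrative only):

```python
import numpy as np

def l2_normalize(x, eps, axis):
    """Sketch of the L2-normalization formula above."""
    denom = np.sqrt(np.maximum((x * x).sum(axis=tuple(axis), keepdims=True), eps))
    return x / denom

v = l2_normalize(np.array([[3.0, 4.0]]), eps=1e-12, axis=[1])
```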

Level 3 Definitions

tvm.relay.nn.leaky_relu(data, alpha)

This operator takes data as input and applies the leaky version of a rectified linear unit.

\[y = \begin{cases} x & \text{if } x > 0 \\ \alpha x & \text{otherwise} \end{cases}\]
Parameters:
  • data (relay.Expr) – The input data to the operator.
  • alpha (float) – Slope coefficient for the negative half axis.
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.floor(data)

Compute element-wise floor of data.

Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr
tvm.relay.ceil(data)

Compute element-wise ceil of data.

Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr
tvm.relay.trunc(data)

Compute element-wise trunc of data.

Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr
tvm.relay.round(data)

Compute element-wise round of data.

Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr
tvm.relay.abs(data)

Compute element-wise absolute of data.

Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr
tvm.relay.negative(data)

Compute element-wise negative of data.

Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr
tvm.relay.reshape(data, newshape)

Reshapes the input array.

To make shape specification more convenient and avoid manual shape inference, some dimensions of newshape can take special values from the set {0, -1, -2, -3, -4}. The meaning of each is explained below:

  • 0 copies this dimension from the input to the output shape.

Example:

- data.shape = (2,3,4), newshape = (4,0,2), result.shape = (4,3,2)
- data.shape = (2,3,4), newshape = (2,0,0), result.shape = (2,3,4)
  • -1 infers the dimension of the output shape by using the remainder of the input dimensions, keeping the size of the new array the same as that of the input array. At most one dimension of newshape can be -1.

Example:

- data.shape = (2,3,4), newshape = (6,1,-1), result.shape = (6,1,4)
- data.shape = (2,3,4), newshape = (3,-1,8), result.shape = (3,1,8)
- data.shape = (2,3,4), newshape = (-1,), result.shape = (24,)
  • -2 copies all/the remainder of the input dimensions to the output shape.

Example:

- data.shape = (2,3,4), newshape = (-2,), result.shape = (2,3,4)
- data.shape = (2,3,4), newshape = (2,-2), result.shape = (2,3,4)
- data.shape = (2,3,4), newshape = (-2,1,1), result.shape = (2,3,4,1,1)
  • -3 uses the product of two consecutive dimensions of the input shape as the output dimension.

Example:

- data.shape = (2,3,4), newshape = (-3,4), result.shape = (6,4)
- data.shape = (2,3,4,5), newshape = (-3,-3), result.shape = (6,20)
- data.shape = (2,3,4), newshape = (0,-3), result.shape = (2,12)
- data.shape = (2,3,4), newshape = (-3,-2), result.shape = (6,4)
  • -4 splits one dimension of the input into the two dimensions passed subsequent to -4 in newshape (which can contain -1).

Example:

- data.shape = (2,3,4), newshape = (-4,1,2,-2), result.shape =(1,2,3,4)
- data.shape = (2,3,4), newshape = (2,-4,-1,3,-2), result.shape = (2,1,3,4)
Parameters:
  • data (relay.Expr) – The input data to the operator.
  • newshape (Union[int, Tuple[int], List[int]]) – The new shape. Should be compatible with the original shape.
Returns:

result – The reshaped result.

Return type:

relay.Expr
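
The special values 0 and -1 can be sketched with a small helper (illustrative only; the relay_newshape name is made up here, and the -2/-3/-4 rules are omitted for brevity):

```python
import numpy as np

def relay_newshape(shape, newshape):
    """Resolve the special value 0 in newshape; -1 is left for np.reshape to infer."""
    out = []
    for i, dim in enumerate(newshape):
        if dim == 0:
            out.append(shape[i])  # copy this dimension from the input shape
        else:
            out.append(dim)       # ordinary dims and -1 pass through
    return out

data = np.zeros((2, 3, 4))
assert data.reshape(relay_newshape(data.shape, (4, 0, 2))).shape == (4, 3, 2)
assert data.reshape(relay_newshape(data.shape, (6, 1, -1))).shape == (6, 1, 4)
```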

tvm.relay.copy(data)

Copy a tensor.

Parameters:data (relay.Expr) – The tensor to be copied.
Returns:result – The copied result.
Return type:relay.Expr
tvm.relay.transpose(data, axes=None)

Permutes the dimensions of an array.

Parameters:
  • data (relay.Expr) – The input data to the operator.
  • axes (None or List[int]) – The target axes order, reverse order if not specified.
Returns:

result – The transposed result.

Return type:

relay.Expr

tvm.relay.take(data, indices, axis=None)

Take elements from an array along an axis.

Parameters:
  • data (relay.Expr) – The source array.
  • indices (relay.Expr) – The indices of the values to extract.
  • axis (int, optional) – The axis over which to select values. By default, the flattened input array is used.
Returns:

ret – The computed result.

Return type:

relay.Expr
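
NumPy's take has the same flatten-by-default behavior, so the semantics can be checked directly (illustrative sketch):

```python
import numpy as np

# take gathers elements along an axis; with axis=None the input is flattened first.
a = np.array([[1, 2], [3, 4]])
flat_pick = np.take(a, [0, 3])        # from the flattened array -> [1, 4]
col_pick = np.take(a, [1], axis=1)    # along columns -> [[2], [4]]
```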

tvm.relay.zeros(shape, dtype)

Fill array with zeros.

Parameters:
  • shape (tuple of int) – The shape of the target.
  • dtype (data type) – The data type of the target.
Returns:

result – The resulting tensor.

Return type:

relay.Expr

tvm.relay.zeros_like(data)

Returns an array of zeros, with same type and shape as the input.

Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr
tvm.relay.ones(shape, dtype)

Fill array with ones.

Parameters:
  • shape (tuple of int) – The shape of the target.
  • dtype (data type) – The data type of the target.
Returns:

result – The resulting tensor.

Return type:

relay.Expr

tvm.relay.ones_like(data)

Returns an array of ones, with same type and shape as the input.

Parameters:data (relay.Expr) – The input data
Returns:result – The computed result.
Return type:relay.Expr

Level 4 Definitions

tvm.relay.right_shift(lhs, rhs)

Right shift with numpy-style broadcasting.

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.left_shift(lhs, rhs)

Left shift with numpy-style broadcasting.

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.equal(lhs, rhs)

Broadcasted elementwise test for (lhs == rhs).

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.not_equal(lhs, rhs)

Broadcasted elementwise test for (lhs != rhs).

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.greater(lhs, rhs)

Broadcasted elementwise test for (lhs > rhs).

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.greater_equal(lhs, rhs)

Broadcasted elementwise test for (lhs >= rhs).

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.less(lhs, rhs)

Broadcasted elementwise test for (lhs < rhs).

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.less_equal(lhs, rhs)

Broadcasted elementwise test for (lhs <= rhs).

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.maximum(lhs, rhs)

Maximum with numpy-style broadcasting.

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.minimum(lhs, rhs)

Minimum with numpy-style broadcasting.

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.pow(lhs, rhs)

Power with numpy-style broadcasting.

Parameters:
  • lhs (relay.Expr) – The left hand side input data
  • rhs (relay.Expr) – The right hand side input data
Returns:

result – The computed result.

Return type:

relay.Expr

tvm.relay.where(condition, x, y)

Selecting elements from either x or y depending on the value of the condition.

Parameters:
  • condition (relay.Expr) – The condition array. The n-th element in y is selected when the n-th value in the condition array is zero. Otherwise, the corresponding element from x will be picked.
  • x (relay.Expr) – The first array to be selected.
  • y (relay.Expr) – The second array to be selected.
Returns:

result – The selected array.

Return type:

relay.Expr

Examples

x = [[1, 2], [3, 4]]
y = [[5, 6], [7, 8]]
condition = [[0, 1], [-1, 0]]
relay.where(condition, x, y) = [[5, 2], [3, 8]]

condition = [1, 0]
relay.where(condition, x, y) = [[1, 2], [7, 8]]

Note that the shape of condition, x, and y needs to be the same.
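
The same-shape case above matches NumPy's where with a nonzero test (the 1-D row-selection case differs from NumPy's broadcasting behavior). A NumPy check of the first example (illustrative sketch):

```python
import numpy as np

# Nonzero condition entries pick from x, zero entries pick from y.
x = np.array([[1, 2], [3, 4]])
y = np.array([[5, 6], [7, 8]])
cond = np.array([[0, 1], [-1, 0]])
selected = np.where(cond != 0, x, y)
```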

Level 5 Definitions

tvm.relay.image.resize(data, size, layout='NCHW', method='BILINEAR', align_corners=False)

Image resize operator.

This operator takes data as input and resizes the spatial dimensions to the given size. In the default case, where the data_layout is NCHW, with data of shape (n, c, h, w), out will have a shape (n, c, size[0], size[1]).

method indicates the algorithm used to compute the output value; it can be one of (“BILINEAR”, “NEAREST_NEIGHBOR”).

Parameters:
  • data (relay.Expr) – The input data to the operator.
  • size (Tuple of Expr) – The out size to which the image will be resized.
  • layout (str, optional) – Layout of the input.
  • method (str, optional) – Scale method to be used [NEAREST_NEIGHBOR, BILINEAR].
  • align_corners (bool, optional) – Whether to preserve the values at the corner pixels.
Returns:

result – The resized result.

Return type:

relay.Expr