.. rst-class:: sphx-glr-example-title

.. _sphx_glr_tutorials_language_schedule_primitives.py:

Schedule Primitives in TVM
==========================
**Author**: Ziheng Jiang

TVM is a domain-specific language for efficient kernel construction.

In this tutorial, we will show you how to schedule a computation with the
various primitives provided by TVM.

.. code-block:: default


    from __future__ import absolute_import, print_function

    import tvm
    import numpy as np

There often exist several methods to compute the same result; however,
different methods result in different locality and performance. TVM
therefore asks the user to describe how to execute the computation, and
this description is called a **schedule**.

A **schedule** is a set of transformations that rewrite the loops of the
computations in the program.

.. code-block:: default


    # declare some variables for use later
    n = tvm.var('n')
    m = tvm.var('m')

A schedule can be created from a list of ops. By default, the schedule
computes each tensor serially in row-major order.

.. code-block:: default


    # declare a matrix element-wise multiply
    A = tvm.placeholder((m, n), name='A')
    B = tvm.placeholder((m, n), name='B')
    C = tvm.compute((m, n), lambda i, j: A[i, j] * B[i, j], name='C')

    s = tvm.create_schedule([C.op])
    # lower transforms the computation from its definition into a real
    # callable function. With the argument `simple_mode=True`, it returns
    # a readable C-like statement; we use it here to print the schedule
    # result.
    print(tvm.lower(s, [A, B, C], simple_mode=True))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    produce C {
      for (i, 0, m) {
        for (j, 0, n) {
          C[((i*n) + j)] = (A[((i*n) + j)]*B[((i*n) + j)])
        }
      }
    }

A schedule is composed of multiple stages, and a **Stage** represents the
schedule for one operation. We provide various methods to schedule each
stage.

split
-----
:code:`split` can split a specified axis into two axes by :code:`factor`.

.. code-block:: default


    A = tvm.placeholder((m,), name='A')
    B = tvm.compute((m,), lambda i: A[i]*2, name='B')

    s = tvm.create_schedule(B.op)
    xo, xi = s[B].split(B.op.axis[0], factor=32)
    print(tvm.lower(s, [A, B], simple_mode=True))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    produce B {
      for (i.outer, 0, ((m + 31)/32)) {
        for (i.inner, 0, 32) {
          if (likely((((i.outer*32) + i.inner) < m))) {
            if (likely((((i.outer*32) + i.inner) < m))) {
              B[((i.outer*32) + i.inner)] = (A[((i.outer*32) + i.inner)]*2f)
            }
          }
        }
      }
    }

You can also split an axis by :code:`nparts`, which splits the axis the
opposite way to :code:`factor`: it fixes the number of outer iterations
rather than the extent of the inner loop.

.. code-block:: default


    A = tvm.placeholder((m,), name='A')
    B = tvm.compute((m,), lambda i: A[i], name='B')

    s = tvm.create_schedule(B.op)
    bx, tx = s[B].split(B.op.axis[0], nparts=32)
    print(tvm.lower(s, [A, B], simple_mode=True))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    produce B {
      for (i.outer, 0, 32) {
        for (i.inner, 0, ((m + 31)/32)) {
          if (likely(((i.inner + (i.outer*((m + 31)/32))) < m))) {
            if (likely((0 <= (i.inner + (i.outer*((m + 31)/32)))))) {
              if (likely(((i.inner + (i.outer*((m + 31)/32))) < m))) {
                B[(i.inner + (i.outer*((m + 31)/32)))] = A[(i.inner + (i.outer*((m + 31)/32)))]
              }
            }
          }
        }
      }
    }
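
A scheduled computation is not only printable, it is also compilable. The
following is a minimal sketch, assuming TVM was built with LLVM support
(the names :code:`fsplit`, :code:`a`, and :code:`b` are illustrative, not
part of the original tutorial): it builds the :code:`nparts` schedule above
into a callable function and checks the result with numpy.

.. code-block:: default


    # A minimal sketch (assumption: TVM built with LLVM support) that
    # compiles the nparts schedule above and verifies it with numpy;
    # fsplit, a, and b are illustrative names.
    fsplit = tvm.build(s, [A, B], target='llvm', name='fsplit')
    ctx = tvm.cpu(0)
    a = tvm.nd.array(np.random.uniform(size=1024).astype(A.dtype), ctx)
    b = tvm.nd.array(np.zeros(1024, dtype=B.dtype), ctx)
    fsplit(a, b)
    # B simply copies A, so the two arrays must match
    np.testing.assert_allclose(b.asnumpy(), a.asnumpy())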

tile
----
:code:`tile` helps you execute the computation tile by tile over two axes.

.. code-block:: default


    A = tvm.placeholder((m, n), name='A')
    B = tvm.compute((m, n), lambda i, j: A[i, j], name='B')

    s = tvm.create_schedule(B.op)
    xo, yo, xi, yi = s[B].tile(B.op.axis[0], B.op.axis[1], x_factor=10, y_factor=5)
    print(tvm.lower(s, [A, B], simple_mode=True))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    produce B {
      for (i.outer, 0, ((m + 9)/10)) {
        for (j.outer, 0, ((n + 4)/5)) {
          for (i.inner, 0, 10) {
            for (j.inner, 0, 5) {
              if (likely((((i.outer*10) + i.inner) < m))) {
                if (likely((((j.outer*5) + j.inner) < n))) {
                  if (likely((((i.outer*10) + i.inner) < m))) {
                    if (likely((((j.outer*5) + j.inner) < n))) {
                      B[(((j.outer*5) + (((i.outer*10) + i.inner)*n)) + j.inner)] = A[(((j.outer*5) + (((i.outer*10) + i.inner)*n)) + j.inner)]
                    }
                  }
                }
              }
            }
          }
        }
      }
    }

fuse
----
:code:`fuse` can fuse two consecutive axes of one computation.

.. code-block:: default


    A = tvm.placeholder((m, n), name='A')
    B = tvm.compute((m, n), lambda i, j: A[i, j], name='B')

    s = tvm.create_schedule(B.op)
    # tile to four axes first: (i.outer, j.outer, i.inner, j.inner)
    xo, yo, xi, yi = s[B].tile(B.op.axis[0], B.op.axis[1], x_factor=10, y_factor=5)
    # then fuse (i.inner, j.inner) into one axis: (i.inner.j.inner.fused)
    fused = s[B].fuse(xi, yi)
    print(tvm.lower(s, [A, B], simple_mode=True))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    produce B {
      for (i.outer, 0, ((m + 9)/10)) {
        for (j.outer, 0, ((n + 4)/5)) {
          for (i.inner.j.inner.fused, 0, 50) {
            if (likely((((i.outer*10) + (i.inner.j.inner.fused/5)) < m))) {
              if (likely((((j.outer*5) + (i.inner.j.inner.fused % 5)) < n))) {
                if (likely((((i.outer*10) + (i.inner.j.inner.fused/5)) < m))) {
                  if (likely((((j.outer*5) + (i.inner.j.inner.fused % 5)) < n))) {
                    B[(((j.outer*5) + (((i.outer*10) + (i.inner.j.inner.fused/5))*n)) + (i.inner.j.inner.fused % 5))] = A[(((j.outer*5) + (((i.outer*10) + (i.inner.j.inner.fused/5))*n)) + (i.inner.j.inner.fused % 5))]
                  }
                }
              }
            }
          }
        }
      }
    }

reorder
-------
:code:`reorder` can rearrange the axes into the specified order.

.. code-block:: default


    A = tvm.placeholder((m, n), name='A')
    B = tvm.compute((m, n), lambda i, j: A[i, j], name='B')

    s = tvm.create_schedule(B.op)
    # tile to four axes first: (i.outer, j.outer, i.inner, j.inner)
    xo, yo, xi, yi = s[B].tile(B.op.axis[0], B.op.axis[1], x_factor=10, y_factor=5)
    # then reorder the axes: (i.inner, j.outer, i.outer, j.inner)
    s[B].reorder(xi, yo, xo, yi)
    print(tvm.lower(s, [A, B], simple_mode=True))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    produce B {
      for (i.inner, 0, 10) {
        for (j.outer, 0, ((n + 4)/5)) {
          for (i.outer, 0, ((m + 9)/10)) {
            for (j.inner, 0, 5) {
              if (likely((((i.outer*10) + i.inner) < m))) {
                if (likely((((j.outer*5) + j.inner) < n))) {
                  if (likely((((i.outer*10) + i.inner) < m))) {
                    if (likely((((j.outer*5) + j.inner) < n))) {
                      B[(((j.outer*5) + (((i.outer*10) + i.inner)*n)) + j.inner)] = A[(((j.outer*5) + (((i.outer*10) + i.inner)*n)) + j.inner)]
                    }
                  }
                }
              }
            }
          }
        }
      }
    }

bind
----
:code:`bind` can bind a specified axis to a thread axis; this is often used
in GPU programming.

.. code-block:: default


    A = tvm.placeholder((n,), name='A')
    B = tvm.compute(A.shape, lambda i: A[i] * 2, name='B')

    s = tvm.create_schedule(B.op)
    bx, tx = s[B].split(B.op.axis[0], factor=64)
    s[B].bind(bx, tvm.thread_axis("blockIdx.x"))
    s[B].bind(tx, tvm.thread_axis("threadIdx.x"))
    print(tvm.lower(s, [A, B], simple_mode=True))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    produce B {
      // attr [iter_var(blockIdx.x, , blockIdx.x)] thread_extent = ((n + 63)/64)
      // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 64
      if (likely((((blockIdx.x*64) + threadIdx.x) < n))) {
        if (likely((((blockIdx.x*64) + threadIdx.x) < n))) {
          B[((blockIdx.x*64) + threadIdx.x)] = (A[((blockIdx.x*64) + threadIdx.x)]*2f)
        }
      }
    }
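
With both axes bound, the schedule can be compiled into an actual GPU
kernel. Below is a minimal sketch, assuming TVM was built with CUDA support
and a CUDA device is available (:code:`fmul2`, :code:`a`, and :code:`b` are
illustrative names, not part of the original tutorial).

.. code-block:: default


    # A minimal sketch (assumptions: TVM built with CUDA support, and a
    # CUDA device available); fmul2, a, and b are illustrative names.
    fmul2 = tvm.build(s, [A, B], target='cuda', name='fmul2')
    ctx = tvm.gpu(0)
    a = tvm.nd.array(np.random.uniform(size=1024).astype(A.dtype), ctx)
    b = tvm.nd.array(np.zeros(1024, dtype=B.dtype), ctx)
    fmul2(a, b)
    np.testing.assert_allclose(b.asnumpy(), a.asnumpy() * 2)
    # inspect the generated CUDA source
    print(fmul2.imported_modules[0].get_source())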

compute_at
----------
For a schedule that consists of multiple operators, TVM will by default
compute each tensor at the root, in a separate loop nest.

.. code-block:: default


    A = tvm.placeholder((m,), name='A')
    B = tvm.compute((m,), lambda i: A[i]+1, name='B')
    C = tvm.compute((m,), lambda i: B[i]*2, name='C')

    s = tvm.create_schedule(C.op)
    print(tvm.lower(s, [A, B, C], simple_mode=True))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    produce B {
      for (i, 0, m) {
        B[i] = (A[i] + 1f)
      }
    }
    produce C {
      for (i, 0, m) {
        C[i] = (B[i]*2f)
      }
    }

:code:`compute_at` can move the computation of `B` into the first axis of
the computation of `C`.

.. code-block:: default


    A = tvm.placeholder((m,), name='A')
    B = tvm.compute((m,), lambda i: A[i]+1, name='B')
    C = tvm.compute((m,), lambda i: B[i]*2, name='C')

    s = tvm.create_schedule(C.op)
    s[B].compute_at(s[C], C.op.axis[0])
    print(tvm.lower(s, [A, B, C], simple_mode=True))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    produce C {
      for (i, 0, m) {
        produce B {
          B[i] = (A[i] + 1f)
        }
        C[i] = (B[i]*2f)
      }
    }

compute_inline
--------------
:code:`compute_inline` can mark one stage as inline; the body of the
computation is then expanded and inserted wherever the tensor is required.

.. code-block:: default


    A = tvm.placeholder((m,), name='A')
    B = tvm.compute((m,), lambda i: A[i]+1, name='B')
    C = tvm.compute((m,), lambda i: B[i]*2, name='C')

    s = tvm.create_schedule(C.op)
    s[B].compute_inline()
    print(tvm.lower(s, [A, B, C], simple_mode=True))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    produce C {
      for (i, 0, m) {
        C[i] = ((A[i] + 1f)*2f)
      }
    }

compute_root
------------
:code:`compute_root` can move the computation of one stage back to the root.

.. code-block:: default


    A = tvm.placeholder((m,), name='A')
    B = tvm.compute((m,), lambda i: A[i]+1, name='B')
    C = tvm.compute((m,), lambda i: B[i]*2, name='C')

    s = tvm.create_schedule(C.op)
    s[B].compute_at(s[C], C.op.axis[0])
    s[B].compute_root()
    print(tvm.lower(s, [A, B, C], simple_mode=True))

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    produce B {
      for (i, 0, m) {
        B[i] = (A[i] + 1f)
      }
    }
    produce C {
      for (i, 0, m) {
        C[i] = (B[i]*2f)
      }
    }

Summary
-------
This tutorial has introduced the schedule primitives in TVM, which allow
users to schedule computations easily and flexibly.

In order to get a kernel implementation with good performance, the general
workflow is often:

- Describe your computation via a series of operations.
- Schedule the computation with primitives.
- Compile and run to see the performance difference.
- Adjust your schedule according to the running results.
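
To tie the primitives together, here is a minimal end-to-end sketch of that
workflow, assuming TVM was built with LLVM support (the names :code:`fmul`
and :code:`evaluator` and the split factor of 8 are illustrative choices,
not prescribed by the tutorial).

.. code-block:: default


    # A minimal end-to-end sketch of the workflow above (assumption: TVM
    # built with LLVM support); fmul and evaluator are illustrative names.
    # 1. describe the computation
    n = tvm.var('n')
    A = tvm.placeholder((n,), name='A')
    B = tvm.placeholder((n,), name='B')
    C = tvm.compute((n,), lambda i: A[i] * B[i], name='C')

    # 2. schedule the computation: split the single loop
    s = tvm.create_schedule(C.op)
    xo, xi = s[C].split(C.op.axis[0], factor=8)

    # 3. compile and run
    fmul = tvm.build(s, [A, B, C], target='llvm', name='fmul')
    ctx = tvm.cpu(0)
    a = tvm.nd.array(np.random.uniform(size=1024).astype(A.dtype), ctx)
    b = tvm.nd.array(np.random.uniform(size=1024).astype(B.dtype), ctx)
    c = tvm.nd.array(np.zeros(1024, dtype=C.dtype), ctx)
    fmul(a, b, c)
    np.testing.assert_allclose(c.asnumpy(), a.asnumpy() * b.asnumpy())

    # 4. measure the average running time, then adjust the schedule and repeat
    evaluator = fmul.time_evaluator(fmul.entry_name, ctx, number=100)
    print('time: %f ms' % (evaluator(a, b, c).mean * 1e3))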