Compile Keras Models

Author: Yuwei Hu

This article is an introductory tutorial on deploying Keras models with NNVM.

To begin, Keras must be installed. TensorFlow is also required, since it is the default backend of Keras.

A quick solution is to install both packages via pip:

pip install -U keras --user
pip install -U tensorflow --user

or refer to the official installation guides.

import nnvm
import tvm
import keras
import numpy as np

def download(url, path, overwrite=False):
    import os
    if os.path.isfile(path) and not overwrite:
        print('File {} exists, skip.'.format(path))
        return
    print('Downloading from url {} to {}'.format(url, path))
    try:
        # Python 3
        import urllib.request
        urllib.request.urlretrieve(url, path)
    except ImportError:
        # Python 2 fallback
        import urllib
        urllib.urlretrieve(url, path)

Load pretrained Keras model

We load a pretrained ResNet-50 classification model provided by Keras.

weights_url = ''.join([''])  # weights URL truncated in the source
weights_file = 'resnet50_weights.h5'
download(weights_url, weights_file)
keras_resnet50 = keras.applications.resnet50.ResNet50(include_top=True, weights=None,
                                                      input_shape=(224, 224, 3), classes=1000)
keras_resnet50.load_weights(weights_file)


File resnet50_weights.h5 exists, skip.

Load a test image

A single cat dominates the examples!

from PIL import Image
from matplotlib import pyplot as plt
from keras.applications.resnet50 import preprocess_input
img_url = ''
download(img_url, 'cat.png')
img = Image.open('cat.png').resize((224, 224))
# input preprocess
data = np.array(img)[np.newaxis, :].astype('float32')
data = preprocess_input(data).transpose([0, 3, 1, 2])
print('input_1', data.shape)


File cat.png exists, skip.
input_1 (1, 3, 224, 224)
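Keras consumes images in NHWC layout (batch, height, width, channels), while the compiled graph here expects NCHW, which is why the preprocessing transposes with `[0, 3, 1, 2]`. A minimal NumPy sketch of that layout change on a small made-up batch:

```python
import numpy as np

# A 1x4x4x3 NHWC batch: (batch, height, width, channels)
nhwc = np.arange(1 * 4 * 4 * 3, dtype='float32').reshape(1, 4, 4, 3)

# Move the channel axis forward: NHWC -> NCHW
nchw = nhwc.transpose([0, 3, 1, 2])

print(nhwc.shape)  # (1, 4, 4, 3)
print(nchw.shape)  # (1, 3, 4, 4)

# The same pixel/channel value is preserved across layouts
assert nhwc[0, 2, 1, 0] == nchw[0, 0, 2, 1]
```

`transpose` only permutes strides, so no data is copied until the array is used in an operation that needs contiguous memory.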

Compile the model on NNVM

We should be familiar with the process now.

# convert the keras model(NHWC layout) to NNVM format(NCHW layout).
sym, params = nnvm.frontend.from_keras(keras_resnet50)
# compile the model
target = 'cuda'
shape_dict = {'input_1': data.shape}
with nnvm.compiler.build_config(opt_level=3):
    graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)

Execute on TVM

The process is no different from other examples.

from tvm.contrib import graph_runtime
ctx = tvm.gpu(0)
m = graph_runtime.create(graph, lib, ctx)
# set inputs
m.set_input('input_1', tvm.nd.array(data.astype('float32')))
# execute
m.run()
# get outputs
tvm_out = m.get_output(0)
top1_tvm = np.argmax(tvm_out.asnumpy()[0])

Look up synset name

Look up prediction top 1 index in 1000 class synset.
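The synset file stores the id-to-name mapping as a Python literal, which is why the code below evaluates the file's contents. A tiny illustration with a hypothetical two-entry mapping (the real file covers all 1000 ImageNet classes):

```python
# Hypothetical miniature synset: a Python dict literal mapping class id -> name
sample = "{277: 'red fox, Vulpes vulpes', 278: 'kit fox, Vulpes macrotis'}"
synset = eval(sample)

top1 = 278
print(synset[top1])  # kit fox, Vulpes macrotis
```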

synset_url = ''.join([''])  # synset URL truncated in the source
synset_name = 'synset.txt'
download(synset_url, synset_name)
with open(synset_name) as f:
    synset = eval(f.read())
print('NNVM top-1 id: {}, class name: {}'.format(top1_tvm, synset[top1_tvm]))
# confirm correctness with keras output
keras_out = keras_resnet50.predict(data.transpose([0, 2, 3, 1]))
top1_keras = np.argmax(keras_out)
print('Keras top-1 id: {}, class name: {}'.format(top1_keras, synset[top1_keras]))


File synset.txt exists, skip.
NNVM top-1 id: 278, class name: kit fox, Vulpes macrotis
Keras top-1 id: 278, class name: kit fox, Vulpes macrotis
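`np.argmax` returns only the single best class; ImageNet results are often also reported as top-5, which `np.argsort` gives you from the same output vector. A sketch with a made-up score array standing in for `tvm_out.asnumpy()[0]`:

```python
import numpy as np

# Made-up scores standing in for the real 1000-class output vector
logits = np.zeros(1000, dtype='float32')
logits[278] = 9.0   # kit fox
logits[277] = 7.5   # red fox
logits[285] = 6.0

# Indices of the five highest scores, best first
top5 = np.argsort(logits)[::-1][:5]
print(top5[0])  # 278
```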

Total running time of the script: ( 0 minutes 23.834 seconds)
