TensorFlow Hub is a great collection of models. Any developer looking for ready-to-use TensorFlow models is probably already familiar with the hub; if not, you are missing out on a gem of a collection. The models are organized by domain and version, making it easy to find the model you are looking for. But before we get into how Tiyaro simplifies using these models, let's walk through the typical workflow of a developer trying to use a model from TensorFlow Hub.
Just head over to model search on TensorFlow Hub and search for the model you need.
After you find the model of your choice, download its SavedModel. For example, if you were looking for an image classification model such as imagenet/efficientnet, you can download it here
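If you prefer to script the download, TF Hub can serve a model as a compressed SavedModel archive when you append `?tf-hub-format=compressed` to its handle URL. A minimal sketch, where the specific handle below is only an illustrative assumption (substitute the model you actually picked):

```python
# Sketch: building the direct-download URL for a TF Hub SavedModel.
# TF Hub returns a .tar.gz of the SavedModel when the handle URL is
# suffixed with "?tf-hub-format=compressed".

def hub_download_url(handle: str) -> str:
    """Return the URL that downloads the model as a compressed SavedModel."""
    return handle + "?tf-hub-format=compressed"

# Illustrative handle, not necessarily the exact model discussed above.
url = hub_download_url(
    "https://tfhub.dev/google/imagenet/efficientnet_b0/classification/1"
)
print(url)
```

You can then fetch that URL with any HTTP client and untar the result to get the SavedModel directory.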
Now that you have a SavedModel downloaded, you have a couple of options for running it.
TensorFlow Serving is an excellent, robust solution for running models in production. It integrates really well with the SavedModel format, and, like every other TensorFlow feature, you will find plenty of tutorials on running TensorFlow Serving.
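Once a model is loaded into TensorFlow Serving, you call it over its REST API by POSTing JSON to the predict endpoint. A minimal sketch of building that request, assuming a server already running on localhost with the default REST port 8501 and a model registered under the name "efficientnet" (both the host and the model name are assumptions here):

```python
import json

def predict_request(model_name: str, instances,
                    host: str = "localhost", port: int = 8501):
    """Build the URL and JSON body for TensorFlow Serving's predict endpoint.

    TensorFlow Serving's REST API expects:
      POST http://<host>:<port>/v1/models/<model_name>:predict
      body: {"instances": [...]}  # one entry per input example
    """
    url = f"http://{host}:{port}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances})
    return url, body

url, body = predict_request("efficientnet", [[[0.0, 0.0, 0.0]]])
print(url)
# Send with e.g. requests.post(url, data=body) once the server is up.
```

The response is a JSON object whose "predictions" field holds one output per instance.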
There are many use cases where the performance and latency benefits of running inference on a GPU are not just nice to have but a must-have. This is a nice tutorial on enabling GPU support for TensorFlow Serving; it covers downloading the CUDA libraries, recompiling TensorFlow Serving with NVIDIA GPU support, and running and testing the result.
This one is easy for developers to understand. To invoke a model, you need to know what input(s) it takes, the format of those inputs, and the output it produces. The TensorFlow toolchain has done a great job of providing some of the basic tooling for this. For instance, you can use 'saved_model_cli' to see the default signature supported by a model; the imagenet efficientnet model has the following signature:
$ saved_model_cli show --dir . --tag_set serve --signature_def serving_default
The given SavedModel SignatureDef contains the following input(s):
  inputs['input_1'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, -1, -1, 3)
      name: serving_default_input_1:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['output_1'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1000)
      name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict
You can use this information, together with the TensorFlow Serving documentation, to figure out the inputs this model requires and the output it generates.
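Reading the signature above: input_1 expects a float32 batch of channels-last RGB images, shape (-1, -1, -1, 3), and output_1 returns one 1000-way score vector per image, shape (-1, 1000). A small sketch of what that means in practice, using NumPy and simulated model output (the 224x224 image size is an assumption, and the logits here are random stand-ins, not a real model call):

```python
import numpy as np

# Build an input batch matching shape (-1, -1, -1, 3): here, one 224x224
# RGB image. The leading -1 dimensions mean batch size and spatial size
# are flexible; the trailing 3 (RGB channels) is fixed.
batch = np.random.rand(1, 224, 224, 3).astype(np.float32)
assert batch.shape[-1] == 3  # channels-last, as the signature requires

# A real call (via TensorFlow Serving or tf.saved_model.load) would return
# scores of shape (1, 1000) -- one score per ImageNet class. Simulated here:
logits = np.random.rand(1, 1000).astype(np.float32)

# The predicted class is the index of the highest score.
top_class = int(np.argmax(logits, axis=-1)[0])
```

Mapping `top_class` to a human-readable label then just requires the ImageNet class list.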
Tiyaro lets you rephrase the question developers ask. Instead of "What do you need to run these models?", developers should be asking "What do you need to use these models?" With Tiyaro, you just need the following 2 steps:
Simply search for the model in the Tiyaro console
Click on the search result to see the model card
The model card includes