In this hands-on workshop you will learn the fundamentals of generating high-performance deep-learning models on the TensorFlow platform using the built-in TensorRT integration (TF-TRT) and Python. You will learn how to:
- Pre-process classification models, freezing graphs and weights to prepare them for optimization
- Understand the fundamentals of graph optimization and quantization with FP32, FP16, and INT8 precision.
- Use the TF-TRT API to optimize subgraphs and select the optimization parameters that best fit your model.
- Design and embed custom operations in Python to work around unsupported layers and optimize detection models.
Upon completion, you will be able to use TF-TRT to produce optimized, deployment-ready models.
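To give a feel for the quantization objective above, here is a minimal, self-contained sketch of symmetric per-tensor INT8 quantization, the idea underlying TF-TRT's INT8 mode. This is an illustration only, not the TF-TRT API; the calibration maximum (4.0) and all function names are hypothetical.

```python
# Illustrative sketch of symmetric INT8 quantization (the idea behind
# TF-TRT's INT8 precision mode); not the TF-TRT API itself.

def quantize_int8(values, calib_max):
    """Map floats to int8 codes using a scale from a calibration maximum."""
    scale = calib_max / 127.0           # one scale per tensor (symmetric)
    q = []
    for v in values:
        i = round(v / scale)            # quantize to the nearest code
        i = max(-127, min(127, i))      # clamp to the int8 range
        q.append(i)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return [i * scale for i in q]

# calib_max would come from running representative data through the model
activations = [0.02, -1.5, 3.7, 2.1, -0.9]
q, scale = quantize_int8(activations, calib_max=4.0)
approx = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(activations, approx))
# for in-range values, the error is bounded by half a quantization step
assert max_err <= scale / 2 + 1e-9
```

The key design point, which the workshop covers in depth, is that INT8 needs a calibration step to pick the scale, whereas FP16 and FP32 carry enough dynamic range that no calibration is required.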
Prerequisites: experience with TensorFlow and Python
Follow these steps prior to joining the training:
- You must bring your own laptop to run the training.
- A current browser is required. For the best performance, Chrome, Firefox, or Safari (on Macs) are recommended; Internet Explorer works but performs worse.
- Create an account at https://courses.nvidia.com/join
- Ensure your laptop will run smoothly by going to http://websocketstest.com/
- Verify that WebSockets work: under Environment, confirm that WebSockets is supported, and under WebSockets (Port 80), confirm that Data Receive, Data Send, and Echo Test all show Yes.
- If there are issues with WebSockets, try updating your browser.