AI Inference with Intel® FPGA AI Suite
Mr. Kevin Drake
Application Engineer Intel Corporation
Cerent Engineering Science Complex, Salazar Hall 2009A
4:00 PM
Abstract: Intel® FPGAs enable real-time, low-latency, low-power deep learning inference, along with the following advantages:
- I/O flexibility
- Reconfiguration
- Ease of integration into custom platforms
- Long lifetime
The Intel® FPGA AI Suite was developed to make artificial intelligence (AI) inference on Intel® FPGAs easy to use. The suite enables FPGA designers, machine learning engineers, and software developers to create optimized FPGA AI platforms efficiently.
Utilities in the Intel® FPGA AI Suite speed up FPGA development for AI inference by working with familiar, popular industry frameworks such as TensorFlow* and PyTorch* together with the OpenVINO™ toolkit, while also leveraging robust, proven FPGA development flows with the Intel® Quartus® Prime software. The suite's tool flow is built on the OpenVINO toolkit, an open-source project for optimizing inference across hardware architectures: it takes deep learning models from the major frameworks (such as TensorFlow, PyTorch, and Keras*) and optimizes them for inference on targets including CPUs, CPU+GPU combinations, and FPGAs.
Bio: Kevin is an Application Engineer at Intel Corporation. He earned a bachelor’s degree in computer science from Sonoma State University in 2021 and is currently pursuing a master’s degree in data science at Worcester Polytechnic Institute. At Intel, Kevin works on AI inference and acceleration using FPGAs and other ASIC devices. He has also worked in the Intel FPGA University Program, helping students learn FPGA design and development in a healthy and equitable environment. Prior to Intel, Kevin was an education and training manager for the US Air Force, where he managed training and operational readiness for 14 career fields.