Flex Logix Announces InferX IP and Software for DSP and AI Inference
Flex Logix® Technologies, Inc., a leading innovator in DSP and AI inference IP and the leading supplier of eFPGA IP, announced today the availability of InferX™ IP and software for DSP and AI inference. InferX joins EFLX® eFPGA as Flex Logix’s second IP offering. It can be used by device manufacturers and systems companies that want the performance of a DSP-FPGA or an AI-GPU in their SoC, but at a fraction of the cost and power. The company’s EFLX eFPGA product line has already been proven in dozens of chips, with many more in design, across process nodes from 180nm to 7nm, with 5nm in development.
“By integrating InferX into an SoC, customers not only maintain the performance and programmability of an expensive and power-hungry FPGA or GPU, but they also benefit from much lower power consumption and cost,” said Geoff Tate, Founder and CEO of Flex Logix. “This is a significant advantage to systems customers that are designing their own ASICs, as well as chip companies that have traditionally had the DSP-FPGA or AI-GPU sitting next to their chip and can now integrate it to get more revenue and save their customer power and cost. InferX is 80% hard-wired, but 100% reconfigurable.”
The end-user benefit is more powerful DSP and AI in smaller systems, at lower power and lower cost. With InferX AI, users can process megapixel images with much more accurate models such as Yolov5s6 and Yolov5L6, detecting objects at smaller sizes and greater distances than is affordable today.
The InferX Advantage
InferX DSP is InferX hardware combined with soft logic for DSP operations, which Flex Logix provides: FFTs that are switchable between sizes on the fly (e.g., 1K to 4K to 2K), FIR filters with any number of taps, complex matrix inversions (16×16, 32×32, or other sizes), and many more. InferX DSP streams at Gigasamples/second rates, can run multiple DSP operations simultaneously, and allows DSP operations to be chained. DSP is performed on real or complex INT16 data with 32-bit accumulation for very high accuracy. With InferX DSP, customers can integrate DSP performance that is as fast as or faster than the leading FPGA at 1/10th of the cost and power, while keeping all of the flexibility to reconfigure almost instantly. For example, InferX DSP in less than 50 square millimeters of silicon in N5 can perform complex INT16 FFTs at 68 Gigasamples/second and switch instantly between FFT sizes from 256 to 8K points. This is faster than the best FPGA available today, at a fraction of the cost, power, and size.
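The on-the-fly FFT size switching described above can be illustrated in software. The sketch below is purely conceptual, using NumPy as a stand-in for the InferX hardware; the function name and sample stream are illustrative assumptions, not Flex Logix's API.

```python
import numpy as np

def int16_complex_fft(samples: np.ndarray, n_points: int) -> np.ndarray:
    """Run an n-point FFT on a frame of complex INT16 samples.

    Software stand-in only: on InferX DSP this runs in hardware, and the
    FFT size can be switched between frames (e.g., 1K -> 4K -> 2K).
    """
    frame = samples[:n_points]
    return np.fft.fft(frame)

# Simulated stream of complex INT16 samples (illustrative data)
rng = np.random.default_rng(0)
stream = (rng.integers(-32768, 32767, 8192)
          + 1j * rng.integers(-32768, 32767, 8192))

# Switch FFT sizes from frame to frame, as the soft logic allows
for n in (1024, 4096, 2048):
    spectrum = int16_complex_fft(stream, n)
    assert spectrum.shape == (n,)
```

The point of the sketch is the control flow: the same sample stream is processed with different FFT sizes with no reconfiguration step in between.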
InferX AI is InferX hardware combined with the Inference Compiler for AI inference. The Inference Compiler takes in a customer’s neural network model in PyTorch, ONNX, or TFLite format, quantizes the model with high accuracy, compiles the graph for high utilization, and generates the runtime code that executes on the InferX hardware. A simple, easy-to-use API is provided to control the InferX IP. With InferX AI, customers can integrate AI inference performance that is as fast as or faster than leading edge-AI modules at 1/10th of the cost and power, while keeping all of the flexibility and the ability to run multiple models or change models on the fly. InferX AI is optimized for megapixel, batch=1 operation, and the Inference Compiler is available for evaluation. As an example, with about 15 square millimeters of silicon in N7, InferX AI can run Yolov5s at 175 inferences/second: 40% faster than the fastest edge AI module, Orin AGX at 60W.
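The compile flow described above (ingest model, quantize, compile the graph, emit runtime code) can be sketched as a pipeline. All names below (`InferenceCompiler`, `quantize`, `compile`) are hypothetical placeholders invented for illustration; they are not the actual Flex Logix API.

```python
class InferenceCompiler:
    """Illustrative stand-in for the flow: ingest -> quantize -> compile."""

    # Model formats named in the announcement: PyTorch, ONNX, TFLite
    SUPPORTED = {".pt", ".onnx", ".tflite"}

    def __init__(self, model_path: str):
        ext = model_path[model_path.rfind("."):]
        if ext not in self.SUPPORTED:
            raise ValueError(f"unsupported model format: {ext}")
        self.model_path = model_path
        self.bits = None

    def quantize(self, bits: int = 8) -> "InferenceCompiler":
        # Real flow: quantize weights and activations with high accuracy
        self.bits = bits
        return self

    def compile(self) -> dict:
        # Real flow: schedule the graph for high utilization and
        # generate runtime code for the InferX hardware
        return {"model": self.model_path,
                "bits": self.bits,
                "target": "InferX"}

# Example: compile a YOLOv5s model exported to ONNX
artifact = InferenceCompiler("yolov5s.onnx").quantize(bits=8).compile()
```

Because the output is generated runtime code rather than a fixed accelerator configuration, multiple models can be compiled ahead of time and swapped on the fly, as the announcement describes.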
InferX technology is silicon-proven and production qualified in 16nm, and will be available in the most popular FinFET nodes.
InferX hardware is also scalable. Its building block is a compute tile that can be arrayed for more throughput: a 4-tile array, for example, delivers 4x the performance of a single tile. The InferX array sized for the customer’s performance target is delivered with an AXI bus interface for easy integration into their SoC.
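The tile-scaling claim above is simple to state numerically. The sketch below just encodes the stated linear relationship; the function name and the single-tile figure are illustrative assumptions, not published specifications.

```python
def array_throughput(single_tile_ips: float, tiles: int) -> float:
    """Linear tile scaling per the announcement: an N-tile array
    delivers N x the throughput of a single tile (e.g., 4 tiles -> 4x)."""
    if tiles < 1:
        raise ValueError("array needs at least one tile")
    return single_tile_ips * tiles

# With a hypothetical single-tile rate of 100 inferences/second:
assert array_throughput(100.0, 1) == 100.0
assert array_throughput(100.0, 4) == 400.0
```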