
Inspur Open-Sources TF2, a Full-Stack FPGA-Based Deep Learning Inference Engine

2019-09-20

Highlights

1. Inspur has announced the open-source release of TF2, the world's first FPGA-based AI framework to offer a complete solution ranging from model pruning, compression, and quantization to a general FPGA-based DNN inference computing architecture. The open-source project is available at https://github.com/TF2-Engine/TF2.

2. The community will promote open-source cooperation and development of AI technology based on customizable FPGAs, reducing the barriers to high-performance AI computing technology.

San Jose, California, September 17 - Inspur has announced the open-source release of TF2, an efficient FPGA-based AI computing framework. The framework's inference engine employs the world's first DNN shift computing technology, combined with a number of the latest optimization techniques, to achieve high-performance, low-latency deployment of general deep learning models on FPGAs. TF2 is also the world's first open-source FPGA-based AI framework to provide a complete solution ranging from model pruning, compression, and quantization to a general FPGA-based DNN inference computing architecture. The open-source project can be found at https://github.com/TF2-Engine/TF2. Companies and research institutions such as Kuaishou, Shanghai University, and MGI are said to have joined the TF2 open-source community, which will jointly promote open-source cooperation and the development of AI technology based on customizable FPGAs, lowering the barriers to high-performance AI computing and shortening development cycles for AI users and developers.

At present, customizable FPGA technology, with its low latency and high performance-per-watt, has become the choice of many AI users for deploying inference applications. However, FPGA development is difficult and time-consuming, making it hard to keep pace with the rapid iteration of deep learning algorithms. TF2 can quickly implement FPGA inference from deep neural network (DNN) models trained in mainstream AI frameworks, enabling users to maximize FPGA computing power and achieve high-performance, low-latency FPGA deployment. The TF2 computing architecture also allows chip-level AI designs to be implemented and performance-verified quickly.

TF2 consists of two parts. The first is the model optimization and conversion tool, TF2 Transform Kit, which performs compression, pruning, and 8-bit quantization of network models trained in frameworks such as PyTorch, TensorFlow, and Caffe, reducing the model's computational load. For example, by compressing a 32-bit floating-point model into a 4-bit integer model and pruning channels, a ResNet50 model can be reduced by 93.75% with virtually no loss of precision while preserving the basic computational architecture of the original model. The second is the FPGA intelligent runtime engine, TF2 Runtime Engine, which automatically converts optimized model files into FPGA target running files and, through the innovative DNN shift computing technology, greatly improves the performance and effectively reduces the operating power consumption of FPGA inference. TF2 has been tested and verified on mainstream DNN models such as ResNet50, FaceNet, GoogLeNet, and SqueezeNet. A test using the FaceNet model on the Inspur F10A FPGA card (batch size = 1) shows that TF2 computes a single image in 0.612 ms, a 12.8-fold increase in speed.
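The shift computing idea described above can be illustrated in a few lines of Python: if each weight is rounded to a signed power of two, multiplying an activation by that weight reduces to a bit shift. This is only a minimal sketch of the general technique, not TF2's actual implementation; the function names and the exponent range are illustrative assumptions.

```python
import numpy as np

def quantize_power_of_two(weights, exp_min=-8, exp_max=-1):
    """Round each weight to the nearest signed power of two.

    A weight w becomes sign(w) * 2**e with e clipped to a small range
    (the range here is an illustrative assumption), so a multiply by w
    can be replaced with a bit shift. Zero weights stay zero.
    """
    sign = np.sign(weights)
    mag = np.abs(weights)
    # Guard against log2(0); zero weights are masked out below.
    exp = np.round(np.log2(np.where(mag > 0, mag, 1.0)))
    exp = np.clip(exp, exp_min, exp_max).astype(int)
    quantized = np.where(mag > 0, sign * np.exp2(exp), 0.0)
    return quantized, exp, sign

def shift_dot(activations_int, exps, signs):
    """Dot product of integer activations with power-of-two weights,
    using shifts instead of multiplies. Negative exponents (weights
    below 1) become right shifts, which truncate toward zero."""
    total = 0
    for a, e, s in zip(activations_int, exps, signs):
        if s == 0:
            continue
        shifted = a << e if e >= 0 else a >> (-e)  # power-of-two multiply
        total += int(s) * shifted
    return total

# Example: weights 0.5 and -0.25 quantize exactly to powers of two,
# so the shift-based dot product matches the floating-point result.
q, e, s = quantize_power_of_two(np.array([0.5, -0.25, 0.0]))
result = shift_dot([8, 4], e[:2], s[:2])  # 8*0.5 + 4*(-0.25) = 3
```

In hardware, replacing multipliers with shifters in this way is what saves both logic resources and power on an FPGA; the truncation introduced by right shifts is one source of the small precision loss mentioned above.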

At the same time, Inspur's open-source project includes TF2's software-defined, reconfigurable chip design architecture, which fully supports the development of current CNN models and can be quickly ported to support network models such as Transformer and LSTM. Based on this architecture, prototype designs for ASIC chip development can be further implemented.

According to Inspur's plans for establishing the TF2 open-source community, the company will continue to invest in TF2, developing new functions such as automatic model analysis, structural pruning, arbitrary-bit quantization, and AutoML-based pruning and quantization, as well as adding support for sparse computing, the Transformer network model, and general NLP models. The community will also hold regular developer conferences and online open classes to share the latest technological advances and experience, train developers through college education programs, and develop user migration programs with accompanying technical support.

“The deployment of AI applications covers the cloud, the edge, and mobile devices, and has highly diverse requirements. TF2 can greatly improve the efficiency of application deployment across these environments and quickly adapt to the model inference requirements of different scenarios,” said Liu Jun, AI & HPC General Manager of Inspur Group. “AI users and developers are welcome to join the TF2 open-source community to jointly accelerate the deployment of AI applications and facilitate the implementation of more AI applications.”

About Inspur

Inspur is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world’s top 3 server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to www.inspursystems.com.
