KAIST develops world's first graph-based AI SSD

Published: 2022-01-11 09:32

Hardware prototypes and evaluation configurations. Photo=KAIST


A research team led by Professor Jung Myung-soo of the Department of Electrical and Electronic Engineering has developed the world's first graph-based artificial intelligence (AI) SSD accelerator.

According to KAIST on the 10th, the technology accelerates the entire graph-based neural network machine learning process, including graph processing and sampling, and processes data directly near the storage where it resides.

Unlike previous neural network-based machine learning techniques, this type of machine learning model is built on a graph data structure and can therefore describe the relationships between data items.
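To make that difference concrete, the following is a minimal sketch of a single graph neural network (GNN) layer in Python using only NumPy; the toy graph, feature sizes, and function names are illustrative assumptions, not KAIST's model. Each node's new representation is computed from its neighbors' features, so the relationships encoded in the graph drive the computation.

```python
# Minimal sketch of one GNN layer over an explicit edge list (illustrative only).
import numpy as np

def gnn_layer(node_feats, edges, weight):
    """Mean-aggregate neighbor features, then apply a linear map with ReLU.

    node_feats: (num_nodes, in_dim) feature matrix
    edges:      list of (src, dst) pairs
    weight:     (in_dim, out_dim) weight matrix
    """
    num_nodes = node_feats.shape[0]
    agg = np.zeros_like(node_feats)
    deg = np.zeros(num_nodes)
    for src, dst in edges:               # message passing along the graph's edges
        agg[dst] += node_feats[src]
        deg[dst] += 1
    deg[deg == 0] = 1                    # avoid division by zero for isolated nodes
    agg = agg / deg[:, None]             # mean aggregation
    return np.maximum(agg @ weight, 0)   # linear transform + ReLU

# Toy example: 3 nodes, 2 edges, 4-dim features mapped to 2-dim embeddings.
x = np.random.rand(3, 4)
w = np.random.rand(4, 2)
print(gnn_layer(x, [(0, 1), (2, 1)], w).shape)  # (3, 2)
```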

Accordingly, it is expected to be used in a wide range of fields, including large-scale social networking services such as Facebook and Google, navigation, and new drug development.

Such models initially had limitations in real-system deployment due to memory shortages and bottlenecks during data processing.

However, the newly developed technology directly accelerates the entire inference process near the storage where the graph data is kept.

In other words, it solves the data-processing bottleneck in graph machine learning by accelerating graph processing and sampling near the storage.
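For context, graph sampling is the step that selects a small set of neighbors for each node in a mini-batch. The sketch below (hypothetical names, adjacency-list layout assumed; not KAIST's implementation) shows why doing this on the host is costly: every sampled neighbor corresponds to another read of graph data from storage, which is exactly the traffic that near-storage acceleration avoids by sampling where the graph already lives.

```python
# Illustrative neighbor sampling for GNN mini-batches (host-side version).
import random

def sample_neighbors(adj, seed_nodes, fanout):
    """Pick up to `fanout` neighbors per seed node for one mini-batch."""
    batch = {}
    for node in seed_nodes:
        neighbors = adj.get(node, [])          # on the host, each lookup implies a storage read
        k = min(fanout, len(neighbors))
        batch[node] = random.sample(neighbors, k)
    return batch

adj = {0: [1, 2, 3], 1: [0, 3], 2: [0], 3: [0, 1]}
print(sample_neighbors(adj, seed_nodes=[0, 1], fanout=2))
```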

In addition, because common computational storage devices process data through fixed firmware and hardware configurations, their use has been limited.

To address this, the research team designed the various hardware structures needed for the AI inference process, software that can program machine learning models over large numbers of graphs, and a user-adjustable neural network acceleration hardware framework.
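As a rough illustration of what such user adjustability could mean, the sketch below lets a user plug in their own aggregation and transformation steps per layer instead of relying on fixed-function firmware; all class and function names here are hypothetical and do not come from KAIST's Holistic GNN software.

```python
# Hypothetical sketch of a user-programmable GNN runner (names are illustrative).
import numpy as np

class ProgrammableGNNRunner:
    def __init__(self, aggregate, transform):
        self.aggregate = aggregate      # user-supplied neighbor aggregation
        self.transform = transform      # user-supplied feature transformation

    def run_layer(self, node_feats, adj):
        return self.transform(self.aggregate(node_feats, adj))

# Example: mean aggregation over an adjacency dict plus a ReLU linear layer.
def mean_agg(x, adj):
    out = np.zeros_like(x)
    for node, nbrs in adj.items():
        out[node] = x[nbrs].mean(axis=0) if nbrs else x[node]
    return out

w = np.random.rand(4, 2)
runner = ProgrammableGNNRunner(mean_agg, lambda h: np.maximum(h @ w, 0))
print(runner.run_layer(np.random.rand(3, 4), {0: [1, 2], 1: [0], 2: []}).shape)  # (3, 2)
```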

The research team also built a computational storage prototype to verify the effectiveness of the Holistic GNN technology, installing the RTL and software framework developed for graph machine learning on it.

As a result, the team confirmed that the prototype runs about 7 times faster and uses about 33 times less energy than machine learning acceleration on NVIDIA's latest high-performance GPUs.

Professor Jung said, "This technology is expected to replace existing high-performance acceleration systems and to be applied to a wide range of fields, such as large-scale recommendation systems, traffic estimation and prediction systems, and new drug development."


By Global Economic reporter Won-yong Lee; translated by Gounee Yang