
What is the principle of an AI chip?

AI chips, also known as AI accelerators or compute cards, are modules designed specifically to handle the heavy computational workloads of AI applications (other, non-computational tasks are still handled by the CPU). AI chips currently fall into three main categories: GPU, FPGA, and ASIC. Much of AI's data processing boils down to matrix multiplication and addition. Large numbers of GPU cores working in parallel offer a cheap way to perform these operations, at the cost of higher power consumption. FPGAs with built-in DSP blocks and local memory are more energy efficient, but generally more expensive.
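As a rough illustration of the matrix multiply-and-add workload described above, here is a minimal CUDA sketch in which each GPU thread computes one element of D = A × B + C in parallel. The matrix size N, the thread-block shape, and the initialization values are arbitrary placeholders chosen for this example, not details from the original text.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread computes one element of D = A * B + C for N x N matrices.
__global__ void matmul_add(const float* A, const float* B, const float* C,
                           float* D, int N) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < N; ++k)
            acc += A[row * N + k] * B[k * N + col];  // multiply-accumulate
        D[row * N + col] = acc + C[row * N + col];   // elementwise add
    }
}

int main() {
    const int N = 256;                    // placeholder matrix size
    size_t bytes = N * N * sizeof(float);
    float *A, *B, *C, *D;
    // Unified memory keeps the sketch short; production code often
    // uses explicit cudaMalloc + cudaMemcpy instead.
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    cudaMallocManaged(&D, bytes);
    for (int i = 0; i < N * N; ++i) { A[i] = 1.0f; B[i] = 2.0f; C[i] = 3.0f; }

    dim3 block(16, 16);                       // 256 threads per block
    dim3 grid((N + 15) / 16, (N + 15) / 16);  // enough blocks to cover the matrix
    matmul_add<<<grid, block>>>(A, B, C, D, N);
    cudaDeviceSynchronize();

    // With these inputs every element is 1*2*N + 3.
    printf("D[0] = %f (expect %f)\n", D[0], 1.0f * 2.0f * N + 3.0f);
    cudaFree(A); cudaFree(B); cudaFree(C); cudaFree(D);
    return 0;
}
```

Thousands of such threads running at once is what makes GPUs cheap for this workload; the same multiply-accumulate pattern is what FPGA DSP blocks implement with less power.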


The underlying theory of AI still has large gaps, and that is the basic premise shaping how these chips are designed: the focus is on optimizing chips to better fit existing dataflow processing patterns. The first chips in the AI market included various combinations of off-the-shelf CPUs, GPUs, FPGAs, and DSPs.