2, The A100 GPU, based on the Ampere architecture, is built on TSMC's 7nm process, with a die area of 826 square millimeters and a staggering 54.2 billion transistors, more than 2.5 times as many as the Volta GV100 GPU. The latter uses TSMC's 12nm process, with a die area of 815 square millimeters and 21.1 billion transistors, which shows how far TSMC's 7nm process has advanced.

3, NVIDIA's 7nm GPUs may not arrive until next year, while AMD has already released its RX 5700 series of graphics cards on the 7nm Navi architecture, leaving NVIDIA facing serious competition from AMD, especially in the gaming and data center markets. NVIDIA has responded by mocking the energy efficiency of the RX 5700 series and preparing to launch a stronger RTX 20 Super series (still on the 12nm process) with the intention of suppressing its rival.

4, [CNMO News] At present, the 7nm process has become very widespread in the chip field, but graphics cards have been slow to enter the 7nm era. Recently, according to foreign media reports, NVIDIA plans to release a new generation of GPU in the first half of next year, code-named "Ampere."

NVIDIA's AI chip enters the automotive circle? BMW, Xiaopeng, FF have followed up
The most powerful AI chip on Earth is about to write its automotive story. On May 14, in the GTC 2020 keynote video released by NVIDIA, CEO Jensen Huang officially unveiled Ampere, a new-generation GPU architecture. Each generation of NVIDIA GPU architecture is named after a leading scientist, and this one is named after the French physicist André-Marie Ampère. It is the eighth-generation GPU architecture NVIDIA has launched, and compared with the previous Turing architecture released in 2018, its performance is improved by up to 20 times.
At the recent GTC 2024 conference, NVIDIA's Jensen Huang announced three automakers that will adopt the Thor chip: BYD, GAC Hyper, and Xiaopeng.
Although NVIDIA dominates both data-center compute chips and in-vehicle compute SoCs, it still has a sense of crisis. At the 2023 Guangzhou Auto Show, Denza, Jiyue, NIO, Xiaopeng, Zeekr, BYD and other brands indicated they were using NVIDIA compute chips, mainly Orin (254 TOPS per chip); only a few new models from emerging manufacturers planned to use the more powerful next-generation Thor chip (2,000 TOPS).
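For a sense of scale, here is a minimal arithmetic sketch in Python using only the per-chip TOPS figures quoted above; it is purely illustrative and not an official comparison.

```python
# Compare the per-chip compute figures quoted in the article.
orin_tops = 254    # NVIDIA Orin, TOPS per chip (figure from the article)
thor_tops = 2000   # NVIDIA Thor, TOPS per chip (figure from the article)

ratio = thor_tops / orin_tops
orin_needed = -(-thor_tops // orin_tops)  # ceiling division

print(f"One Thor is roughly {ratio:.1f}x one Orin")          # ~7.9x
print(f"Orin chips needed to match one Thor: {orin_needed}")  # 8
```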
Under this rule, not only are the A800/H800, the China-specific versions of NVIDIA's A100/H100 compute chips, banned, but so is the RTX 4090 graphics card, which is mostly bought by gaming enthusiasts. As a supporting measure, the United States will expand the scope of controls to 21 countries and regions to prevent re-exports through third countries. At the same time, domestic AI chip suppliers such as Moore Threads and Biren were added to the Entity List.
In the same year, Audi's old rival BMW reached a deal with NVIDIA under which BMW would drive the in-car displays of all its models with NVIDIA GPUs. With the endorsement of BMW and Audi, NVIDIA went on to receive more and more orders from traditional automakers. On the strength of its graphics capabilities, NVIDIA formally entered the automotive chip market.
As competition over smart cars entered a new stage in 2023, the specification race escalated to a higher level. Just as the Qualcomm 8155 reached large-scale mass production in vehicles, the Qualcomm 8295 chip began to make noise: it has already been launched in the ROBO-01 "car robot," due for delivery within the year, while Leapmotor, IM Motors (Zhiji) and other automakers are following suit.
The A100 series is a line of compute-accelerator cards introduced for high-performance data-center computers handling artificial intelligence, scientific, industrial and other workloads. The GPU chip of this series is code-named GA100, with a die area of 826 square millimeters and 54 billion transistors, and the newest product is equipped with up to 80GB of video memory using HBM2e high-speed memory chips.
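To put the process advantage in concrete terms, here is a rough back-of-envelope calculation in Python, using only the die-area and transistor figures quoted in this article for the 7nm GA100 and the 12nm GV100.

```python
# Transistor-density comparison from the figures quoted above.
ga100_area_mm2, ga100_transistors = 826, 54.2e9   # A100 (Ampere, TSMC 7nm)
gv100_area_mm2, gv100_transistors = 815, 21.1e9   # V100 (Volta, TSMC 12nm)

ga100_density = ga100_transistors / ga100_area_mm2  # transistors per mm^2
gv100_density = gv100_transistors / gv100_area_mm2

print(f"GA100 density: {ga100_density / 1e6:.1f} M transistors/mm^2")  # ~65.6
print(f"GV100 density: {gv100_density / 1e6:.1f} M transistors/mm^2")  # ~25.9
print(f"Transistor count ratio: {ga100_transistors / gv100_transistors:.2f}x")  # ~2.57x
print(f"Density ratio (7nm vs 12nm): {ga100_density / gv100_density:.2f}x")     # ~2.53x
```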
The A100 graphics card is a high-end, data-center-class GPU (graphics processing unit). It is a professional-grade card launched by NVIDIA for high-performance computing workloads such as deep learning, machine learning, scientific computing and large-scale data processing. The A100 is built on NVIDIA's Ampere architecture and offers excellent compute performance, highly parallel processing, and large memory capacity.
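As a small illustration of how such a card is used for deep learning, here is a minimal sketch that queries the GPU from PyTorch before running work on it. It assumes a machine with an NVIDIA driver and a CUDA-enabled PyTorch build; device index 0 and the tensor sizes are arbitrary, and the reported name and memory depend on whichever card is actually installed (for example, an A100).

```python
import torch

# Pick the GPU if one is visible, otherwise fall back to the CPU.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"Memory: {props.total_memory / 1024**3:.0f} GB")
    print(f"Compute capability: {props.major}.{props.minor}")  # Ampere A100 reports 8.0
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

# Tensors and models are moved to the selected device with .to(device) / device=...
x = torch.randn(1024, 1024, device=device)
y = x @ x  # the matrix multiply runs on the GPU when one is available
```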
2. The NVIDIA B200 does not break Moore's Law. Moore's Law states that the number of transistors integrated on a chip doubles every 18-24 months, and the NVIDIA B200, as an accelerator or processor, continues to evolve and improve performance within the range Moore's Law predicts. Moore's Law, proposed by Intel co-founder Gordon Moore in 1965, has been a guiding principle for the development of the semiconductor industry.
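To make the doubling rule described above concrete, here is a small Python sketch projecting transistor counts under a fixed doubling period. The starting count (the GV100 figure quoted earlier in this article) and the horizons are arbitrary examples, not predictions about any specific chip.

```python
def moores_law(initial_transistors: float, years: float, doubling_months: float = 24) -> float:
    """Projected transistor count after `years`, doubling every `doubling_months` months."""
    doublings = (years * 12) / doubling_months
    return initial_transistors * 2 ** doublings

# Starting from 21.1 billion transistors, a 24-month doubling period
# predicts roughly 2x after two years and 4x after four.
print(f"{moores_law(21.1e9, 2):.3g}")  # ~4.22e10
print(f"{moores_law(21.1e9, 4):.3g}")  # ~8.44e10
```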
3, From the Pascal architecture to the Blackwell architecture, NVIDIA has improved AI computing performance by 1,000 times over the past eight years. "Under the Blackwell architecture, chips can be connected to build large AI supercomputing clusters to support larger computing needs," Huang said, adding that the form of the GPU has completely changed and that the future NVIDIA DGX AI supercomputer will be the factory of the AI industrial revolution.
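The compound growth rate implied by that "1,000x in eight years" claim can be unpacked with a little arithmetic; the sketch below is only a back-of-envelope check on the figures quoted above, not an official NVIDIA breakdown.

```python
import math

total_speedup = 1000  # claimed AI performance gain, Pascal to Blackwell
years = 8

annual_factor = total_speedup ** (1 / years)             # ~2.37x per year
doubling_months = years * 12 / math.log2(total_speedup)  # ~9.6 months per doubling

print(f"Implied annual growth: {annual_factor:.2f}x")
print(f"Implied doubling time: {doubling_months:.1f} months")
```

If these figures hold, the implied doubling time of roughly 9.6 months is considerably faster than the 18-24 month cadence of Moore's Law described above.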
4, NVIDIA's AI chips have applications not only in the cloud but also at the edge. For example, they can be used in robots, drones and other devices, where AI algorithms enable autonomous navigation, target recognition and other functions. In addition, NVIDIA's AI chips can be used in data centers, gaming, cloud computing and other fields to provide faster and more stable computing performance for these applications.
5, The reasons NVIDIA is so strong in AI are as follows: NVIDIA has built a full-chain layout in AI technology, from the underlying chip hardware to the upper layers of software and algorithms. This advantage lets NVIDIA deliver targeted AI capabilities to users across the market and improve their experience. NVIDIA founder and CEO Jensen Huang recognized early that graphics cards perform exceptionally well at deep learning, and therefore strengthened research and development of graphics cards for artificial intelligence.
1, Applications of AI chips in speech recognition; applications of AI chips in smart homes; applications of AI chips in face recognition. In summary: as the core of artificial intelligence technology, AI chips will see ever broader application scenarios and will become an indispensable part of people's daily lives. 2, NVIDIA's B200 AI chip is a high-performance artificial intelligence processor optimized for edge computing, providing efficient inference and low power consumption for a variety of edge devices and real-time applications. NVIDIA has long been a leader in artificial intelligence, and its AI chip series is popular in the market. Among them, the B200, as a chip designed for edge computing, has many striking features.
3, AI chips can be deep-learning intelligent chips that implement commonly used computing functions directly in hardware, with lower power consumption than traditional chips. The use of AI chips in mobile phones lets smartphones automate tasks the way computers do, and also gives phones longer battery life and the ability to learn on their own.