Nvidia's Powerful New H100 Chip Aims to Accelerate AI

Nvidia is launching a new chip, the H100 “Hopper,” that promises to speed up the artificial intelligence boom sweeping the tech industry.

The chip should cement Nvidia's lead in a technology that is revolutionizing everything in computing, from self-driving cars to understanding language as people speak it.

Nvidia will start selling the new AI acceleration chip later this year, part of its effort to secure its leadership in a computing revolution. In the nearer term, the chip should let AI developers speed up their research and create more advanced AI models, especially for tough challenges like comprehending human language and piloting self-driving cars.

The H100 “Hopper” processor, which Nvidia Chief Executive Jensen Huang announced in March, is expected to start shipping next quarter. The processor packs a whopping 80 billion transistors onto a die of 814 square millimeters, almost as large as is physically possible with today's chipmaking equipment.

The H100 competes with other power-hungry, massive AI processors like AMD's MI250X, Google's TPU v4, and Intel's forthcoming Ponte Vecchio. Such chips are goliaths most often found in the preferred environment for AI training systems: data centers packed with racks of computing gear and laced with fat copper power cables.

The new chip marks Nvidia's evolution from a maker of graphics processing units for video games into an AI powerhouse. The company got there by adapting its GPUs to the core mathematics of AI, like multiplying large arrays of numbers.
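The "multiplying arrays of numbers" at the heart of that math is matrix multiplication, the operation GPUs parallelize across thousands of cores. A minimal pure-Python sketch (the sizes and values below are illustrative, not from Nvidia):

```python
# A neural-network layer is, at its core, a matrix multiplication:
# every output value is a dot product of an input row with a column
# of learned weights. GPUs run millions of these dot products at once.

def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p), returning m x p."""
    n = len(b)       # inner dimension shared by a's rows and b's columns
    p = len(b[0])
    return [[sum(row[k] * b[k][j] for k in range(n)) for j in range(p)]
            for row in a]

# A tiny "layer": one input of 3 features, weights mapping 3 -> 2 outputs.
inputs = [[1, 2, 3]]
weights = [[1, 4],
           [2, 5],
           [3, 6]]
print(matmul(inputs, weights))  # [[14, 32]]
```

Real workloads do this with billions of values at a time, which is why dedicated silicon like the H100 matters.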

Circuitry for speeding up AI is becoming increasingly important as the technology arrives in everything from iPhones to Aurora, expected to be the world's fastest supercomputer. Chips such as the H100 are critical for accelerating tasks like training an AI model to automatically interpret live speech from one language to another or to generate video captions. Faster performance also means AI developers can tackle more challenging tasks, like autonomous vehicles, and speed up their experimentation. Still, one of the most significant areas of improvement is processing language.

Linley Gwennap, an analyst at TechInsights, says the H100 and Nvidia's software tools cement the company's position in the AI processor market.

“Nvidia towers over its competitors,” Gwennap wrote in an April report.

Pindrop, a longtime Nvidia customer that uses AI-based voice analysis to help customer service representatives verify legitimate clients and spot scammers, says the chipmaker's steady advancement has let it expand into recognizing audio deepfakes. Deepfakes are sophisticated computer simulations that can be used to commit fraud or spread misinformation.

“We couldn't get there if we didn't have the latest generation of Nvidia GPUs,” said Ellie Khoury, the company's research director.

Training Pindrop's AI system involves processing an enormous amount of information, including audio data from 100,000 voices, each manipulated in several ways to simulate background chatter and bad telephone connections. That is why H100 advancements, like expanded memory and faster processing, are essential to AI customers.
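Manipulating a clean recording to mimic noisy, band-limited phone audio can be sketched as below; the function names and the crude one-pole filter are illustrative assumptions, not Pindrop's actual pipeline:

```python
import random

def add_background_noise(samples, noise_level=0.05):
    """Mix low-level random noise into a signal to mimic background chatter."""
    return [s + random.uniform(-noise_level, noise_level) for s in samples]

def simulate_phone_line(samples):
    """Crude smoothing filter that dulls high frequencies, roughly like
    the narrow frequency band of a bad telephone connection."""
    out = []
    prev = samples[0]
    for s in samples:
        prev = 0.5 * prev + 0.5 * s  # blend each sample with the running value
        out.append(prev)
    return out

# One clean clip becomes many degraded training examples.
clean = [0.0, 1.0, 0.0, -1.0, 0.0]
augmented = simulate_phone_line(add_background_noise(clean))
```

Applying many such transformations to each of 100,000 voices multiplies the training data, and the compute bill, which is where faster accelerators pay off.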

Nvidia estimates the H100 is six times faster than its A100 predecessor, which the company launched two years ago. One crucial area that benefits is natural language processing. Also known as NLP, this AI domain lets computers understand speech, summarize documents and translate languages, among other tasks.

Nvidia is a powerful player in NLP, a field at the vanguard of AI. Google's PaLM AI system, for example, can tease apart cause and effect in a sentence, write programming code, explain jokes, and play the emoji movie game. And Nvidia's adaptable GPUs are popular with researchers: this week, Meta, Facebook's parent company, released sophisticated NLP technology openly to accelerate AI research, and it runs on 16 Nvidia GPUs.

With the H100, NLP researchers and product developers can move faster, said Ian Buck, vice president of Nvidia's hyperscale and high-performance computing group. “What took months should take less than a week.”

The H100 delivers a big step up for transformers, an AI technique developed at Google that can weigh the importance of the context around words and notice subtle relationships between information in one area and another. Data like photos, speech, and text used to train AI must often be carefully labeled before use, but transformer-based AI models can learn from raw data like vast tracts of text from the web, said Aidan Gomez, co-founder of AI language startup Cohere.
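The context-weighing idea behind transformers can be illustrated with a toy self-attention step; real models add learned projections and many parallel attention heads, so this is only a sketch:

```python
import math

def softmax(xs):
    """Turn raw similarity scores into weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    """For each word vector, blend in every vector in the sequence,
    weighted by dot-product similarity -- the core mechanism that lets
    transformers judge which surrounding context matters most."""
    out = []
    for q in vectors:
        # How similar is this word to each word in the sequence (itself included)?
        scores = softmax([sum(qi * ki for qi, ki in zip(q, k)) for k in vectors])
        # Weighted average of all vectors, dominated by the most relevant ones.
        blended = [sum(w * v[i] for w, v in zip(scores, vectors))
                   for i in range(len(q))]
        out.append(blended)
    return out
```

Because every word attends to every other word, this step is itself a batch of matrix multiplications, exactly the workload the H100's hardware is built to accelerate.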

“It reads the internet. It devours the internet,” Gomez said; the AI model then turns that raw data into useful information that captures what humans know about the world. As a result, transformers “shot my timeline forward decades” when it comes to the speed of AI progress, he said.

Everyone stands to benefit from the H100's ability to accelerate AI research and development, said Hang Liu, an assistant professor at the Stevens Institute of Technology. Amazon can spot more fake reviews, chipmakers can lay out chip circuitry better, and a computer can turn your words into Chinese as you speak them. “Right now, AI is reshaping almost every sector of commercial life,” Liu said.