Startup chipmaker Cerebras Systems Inc. announced that it has built the first of nine artificial intelligence supercomputers in a partnership with Abu Dhabi, part of an effort to provide alternatives to systems using Nvidia Corp. technology.

Condor Galaxy 1, located in Santa Clara, California, is now up and running, according to Cerebras founder and Chief Executive Officer Andrew Feldman. The supercomputer, which cost more than $100 million, is going to double in size “in the coming weeks,” he said. It will be followed by new systems in Austin and Asheville, North Carolina, in the first half of next year, with overseas sites going online in the second half of 2024.

The project is part of a dash to add computing power for AI services, which require the kind of heavy-duty processing that’s become a specialty of Nvidia — the world’s most valuable chipmaker. The Cerebras machinery, which Feldman describes as the biggest purpose-built AI computing centre, is an attempt to satisfy that need with a novel approach.

It also marks a deeper push into the field by the United Arab Emirates, which is betting on next-generation technology with a firm called Group 42, or G42. The company is focused on pushing artificial intelligence research toward practical uses in areas such as aviation and health care.

The new supercomputers will be operated by Cerebras and used for G42 projects. Any excess capacity will be offered commercially as a service.

“Abu Dhabi was the first nation (sic) to have a minister for AI. They have a university for AI,” said Feldman. “They believe this is a transformative technology for their economy.”

For Cerebras, based in Silicon Valley, the new systems provide a showcase that it hopes will lead to wider adoption. The company’s offerings rely on massive chips that are made out of whole silicon wafers — disks that are normally sliced up to create multiple components.

Feldman argues that his processors have the advantage of being able to deal with large data sets in one go, rather than only working on portions of the information at a time. Compared with Nvidia’s processors, they also require less of the complicated software needed to make chips work in concert, he said.

This year, cloud computing providers such as Microsoft Corp. and Amazon.com Inc.’s AWS have been stocking up on Nvidia processors to keep up with runaway demand for OpenAI’s ChatGPT and other generative AI tools. Nvidia has about 80 per cent of the market for the so-called accelerators that help handle these workloads.

With his computing rollout, Feldman aims to demonstrate that the AI explosion won’t just benefit the giant tech companies that can afford big-budget equipment.

“There is a misconception that there are only seven to 10 companies in the world that could buy at scale to make a difference,” he said. “This vastly changes the conversation.”

Feldman’s processors are so large they won’t fit in traditional machinery, leading Cerebras to offer its technology in specially built computers. The machines also rely on standard processors from Advanced Micro Devices Inc. — the company that bought Feldman’s previous startup, SeaMicro Inc.

One of the new supercomputers will be capable of training software on data sets made up of 600 billion variables, with the ability to increase that to 100 trillion, Cerebras said. Each will comprise 54 million AI-optimised computing cores.
