Meet the Biggest AI PC-Sized Supercomputer?


Random Curiosity

Do you like AI? Then this is for you. Check out this CRAZY HUGE ASIC, the biggest I know of... I have also posted this via SteemHunt, but here you will find some additional content/comments.

jszykn.png
(source)

To start with, the case is amazingly beautiful.

g1df9j.png
(source)

The wafer is not just big, it also has the fastest chip-level interconnect I have ever heard of... with JUST 400,000 cores! 😱 That's right, not a mistake, 400K cores! Just look at the comparison with one of the largest GPU chips. It's simply impressive...

zp8sur.png
(source)

This is a product from Cerebras, and it aims to compete with the most powerful supercomputers in the world.

The package!

bt0ziq.png
(source)

Super attractive, right (check at the end of the post)? It even looks like a miner 🤑, if you know what I mean. Can it mine? LOL, probably not... until someone codes an AI-based blockchain algorithm where you can mine with AI ASICs (which will probably happen).

Comparisons...

Just to blow your mind, look at the differences when compared with some of the enterprise AI-driven GPUs. Crazy, right? Remember that this is a very specific AI chip.

(source)

You will have to use dedicated software in order to use their hardware. There is some support for TensorFlow and others, but be prepared to be stuck with their "speed" of support. That's not necessarily bad, but it's not the same thing as the popular open-source stuff.

Memory-wise it's not a good fit, in case your application requires a lot of it. But if what you are looking for is just speed of iterations and/or near-real-time performance for AI decisions, then this is massive! I'm not sure how many people realize how much bandwidth this puppy is able to push.

On top of all of this, the power this baby needs is super interesting. Above you see 15 kW, but on the website you see up to 20 kW. That makes me believe this thing can somehow "auto-overclock" depending on the work being done.

I mean, it's a very large chip, so the potential for that is a must in my view. Why? In ASICs (especially large ones), if you are not using the whole ASIC to compute, then not all of the ASIC is at its maximum temperature, meaning you can overclock it to squeeze out some more computation.

That's valid for any recent ASIC nowadays.
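Just as a toy sketch of that reasoning (all numbers here are invented for illustration; nothing below is a Cerebras spec):

```python
# Toy model of the "auto-overclock" idea: when only part of the die is active,
# a fixed package power budget leaves headroom, so the active cores can be
# clocked higher. Every number is made up purely to illustrate the logic.

def boosted_clock_ghz(base_ghz, active_cores, total_cores=400_000,
                      budget_kw=20.0, full_load_kw=15.0, max_boost=1.5):
    """Return a hypothetical clock for the active cores under a power budget."""
    # Power drawn at base clock, assuming draw scales with active cores.
    draw_kw = full_load_kw * active_cores / total_cores
    headroom = budget_kw / draw_kw          # > 1.0 means spare budget
    return base_ghz * min(headroom, max_boost)

# Fully loaded: 20/15 ≈ 1.33x headroom, so a modest boost.
print(round(boosted_clock_ghz(1.0, 400_000), 2))  # 1.33
# Half the die idle: lots of headroom, boost hits the safety cap.
print(round(boosted_clock_ghz(1.0, 200_000), 2))  # 1.5
```

The interesting bit is that this simple model already reproduces the 15 kW vs 20 kW gap: the budget only gets fully spent when the chip boosts beyond its nominal full-load draw.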

+9 PBytes/s of Memory Bandwidth!!!

From the company's advertised 100 Pbit/s of interconnect, you get a raw 12.5 PBytes/s. So 9 PBytes/s is really a great value, a utilization efficiency of about 72% of the available interconnect capacity, I would say (you never get the theoretical value, due to overheads and other control communication).
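For anyone who wants to check the arithmetic, here it is spelled out (only the two advertised figures are real; the rest is derived):

```python
# Bandwidth arithmetic from the advertised Cerebras figures.
advertised_pbit_s = 100                 # interconnect fabric, Pbit/s
raw_pbyte_s = advertised_pbit_s / 8     # bits -> bytes: 12.5 PBytes/s theoretical
achieved_pbyte_s = 9                    # advertised memory bandwidth, PBytes/s

utilization = achieved_pbyte_s / raw_pbyte_s
print(f"raw: {raw_pbyte_s} PBytes/s, utilization: {utilization:.0%}")
# raw: 12.5 PBytes/s, utilization: 72%
```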

I did some more digging on the website, and this is what is official:
uawnn7.png
(source)

Server measurements

As you see above, it's about 15U in size. So not huge, but a big machine. The IBM blade chassis I used to work with were 7U, 9U and 12U for the most recent models, so 15U is not super exaggerated. Big, yes! But for 400,000 cores... tiny!
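To put those rack units into real-world heights (1U is standardized at 1.75 inches, i.e. 44.45 mm):

```python
# Convert the chassis sizes mentioned above into metres of rack height.
RACK_UNIT_MM = 44.45  # 1U = 1.75 inches per the EIA-310 rack standard

for units in (7, 9, 12, 15):
    print(f"{units:>2}U = {units * RACK_UNIT_MM / 1000:.2f} m")
# 15U works out to roughly 0.67 m, about a third of a standard 42U rack.
```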

The DIE size

If you look closer, you can even see holes all across the chip. This baby needs to be bolted down in MANY places to make contact with WHATEVER board it is designed to work on.

Cooling

On top of that, to remove those 15 kW, you need a water-cooled system. Check it out!
e602e6.png
(source)

Do you know about AI?

Please share with me your deepest thoughts about this hardware. If I am lucky, I might have access to one... maybe in a couple of months.

by @forykw



14 comments

But can it run Crysis?


LOL No mate, this is like a "miner" (a very specific ASIC). It will only serve the purpose of running specific software to solve AI problems.


Lmao, it's a joke from when Crysis came out in 2007: the requirements were so high that few computers at the time could run it.


I know the joke. But you never know these days... I thought you were too well-informed to be playing naive with that joke. Glad you clarified. =) LOL


With the speed of progress, you will have these wonder-machines in your pocket, in your phone in a decade.


Not even a decade... probably in 2 years (as far as I know, some projects have just been funded...).


hi @forykw

Well, with all these advances, soon we will have these computers showing us propaganda in our minds. Hopefully they will be used to create medical art and technology instead; I find those are very necessary things for us to evolve as humanity.

Posted using Partiko Android


AI is being pushed as a technology by many businesses and economies, especially because, once the technology matures, it will allow us to solve much bigger problems with much less hardware computation (compared with the current traditional computation approaches).
