Samsung is Using AI to Design a Smartphone Chip. Will Others Follow? (arstechnica.com)
"Samsung is using artificial intelligence to automate the insanely complex and subtle process of designing cutting-edge computer chips," reports Wired:
The South Korean giant is one of the first chipmakers to use AI to create its chips. Samsung is using AI features in new software from Synopsys, a leading chip design software firm used by many companies...
Others, including Google and Nvidia, have talked about designing chips with AI. But Synopsys' tool, called DSO.ai, may prove the most far-reaching because Synopsys works with dozens of companies. The tool has the potential to accelerate semiconductor development and unlock novel chip designs, according to industry watchers. Synopsys has another valuable asset for crafting AI-designed chips: years of cutting-edge semiconductor designs that can be used to train an AI algorithm. A spokesperson for Samsung confirms that the company is using Synopsys AI software to design its Exynos chips, which are used in smartphones, including its own branded handsets, as well as other gadgets...
Chipmakers including Nvidia and IBM are also dabbling in AI-driven chip design. Other makers of chip-design software, including Cadence, a competitor to Synopsys, are also developing AI tools to aid with mapping out the blueprints for a new chip.
But Synopsys's co-CEO tells Wired that Samsung's chip will be "the first of a real commercial processor design with AI."
Autolayout ? (Score:5, Interesting)
Huh? Autolayout and auto track routing have existed for decades!! This seems more like (ab)use of the buzzword AI than anything fundamentally new.
D.
Re: Autolayout ? (Score:5, Interesting)
Re: Autolayout ? (Score:5, Funny)
Buzzword is right when after reading all that fluff, you're left wondering, "yes, but what does it do?"
What does it do?!! What does it do?!? My god man, this new machine learning model is the paradigm for DevOps to integrate iot devices into seamless multi cloud support edge computing all in the data center. It just needs a blockchain back end with an augmented reality GUI, biometric authentication, all offered as a service managed by shadow IT.
Re: Autolayout ? (Score:2, Redundant)
Re: (Score:2)
Buzzword is right when after reading all that fluff, you're left wondering, "yes, but what does it do?"
What does it do?!! What does it do?!? My god man, this new machine learning model is the paradigm for DevOps to integrate iot devices into seamless multi cloud support edge computing all in the data center. It just needs a blockchain back end with an augmented reality GUI, biometric authentication, all offered as a service managed by shadow IT.
This comment caused buzzword overstimulation in my synaptic centers forcing a reboot. Good job. For once it wasn't whiskey making me pass out.
Re: (Score:1)
What, no RESTful q-bit microservices? How quaint.
Re: (Score:1)
Re: Autolayout ? (Score:5, Funny)
The difference is that before, when the chip you designed looked like an indescribable mollusc's nest on acid, you would get demoted; now you get a raise.
Re:Autolayout ? (Score:5, Informative)
Those systems use rule-based algorithms to do layouts. Samsung claims to have developed an AI that has learned how to do layout through training (example layouts and presumably feedback from Design Rule Checks and simulated performance), which is able to produce novel designs with useful properties that rules alone would not have discovered.
If it works it would be a significant development.
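Purely to make the parent's distinction concrete, here is a toy sketch in Python. It is not Synopsys' DSO.ai or Samsung's flow: the cells, nets, and cost weights are all invented, and the real tool reportedly uses reinforcement learning at vastly larger scale. This is just a feedback-driven search where the only guidance is a score standing in for wirelength and DRC violations.

```python
# Toy sketch, not Synopsys' algorithm: a placement loop steered only by
# feedback from a cost function that stands in for DRC checks and timing.
import random

GRID = 20
CELLS = ["c0", "c1", "c2", "c3", "c4", "c5"]          # hypothetical cells
NETS = [("c0", "c1"), ("c1", "c2"), ("c2", "c3"),     # hypothetical nets
        ("c3", "c4"), ("c4", "c5"), ("c5", "c0")]

def cost(placement):
    # Total Manhattan wirelength of every two-pin net ...
    wirelength = sum(abs(placement[a][0] - placement[b][0]) +
                     abs(placement[a][1] - placement[b][1]) for a, b in NETS)
    # ... plus a heavy penalty for overlapping cells (our stand-in "DRC").
    overlaps = len(CELLS) - len(set(placement.values()))
    return wirelength + 100 * overlaps

def random_placement():
    return {c: (random.randrange(GRID), random.randrange(GRID)) for c in CELLS}

def improve(placement, iterations=5000):
    best = dict(placement)
    best_cost = cost(best)
    for _ in range(iterations):
        trial = dict(best)
        cell = random.choice(CELLS)
        trial[cell] = (random.randrange(GRID), random.randrange(GRID))
        trial_cost = cost(trial)
        # Accept improvements; occasionally accept a worse move to escape
        # local minima. This feedback signal is all that drives the search.
        if trial_cost < best_cost or random.random() < 0.01:
            best, best_cost = trial, trial_cost
    return best

if __name__ == "__main__":
    start = random_placement()
    final = improve(start)
    print("cost before:", cost(start), "after:", cost(final))
```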
Re: Autolayout ? (Score:2)
Re: Autolayout ? (Score:4, Funny)
And here I was reading the headlines and looking for evidence of SkyNet.
Don’t be silly, this is the software skynet used on itself to exponentially gain power.
Re: (Score:1)
And here I was reading the headlines and looking for evidence of SkyNet.
Don’t be silly, this is the software skynet used on itself to exponentially gain power.
I don't know, but it seems ground work for Zero One -- "the machine city (Animatrix)"
Dont Call it AI (Score:1, Flamebait)
As someone said in another article - don't call it AI, we are not there yet
Re: Dont Call it AI (Score:2, Interesting)
Let that corner of slashdot rant on about how it's not "real AI". Meanwhile I'm doing things which weren't possible 5 years ago, and were thought (5 years ago) to be impossible "without real AI".
Ignoring the essentialist garbage, we can instead observe that whenever we get serious new capabilities in computer technology, it's usually chip designers who are first to adopt it.
Re: Dont Call it AI (Score:5, Insightful)
Let that corner of slashdot rant on about how it's not "real AI". Meanwhile I'm doing things which weren't possible 5 years ago, and were thought (5 years ago) to be impossible "without real AI".
Ignoring the essentialist garbage, we can instead observe that whenever we get serious new capabilities in computer technology, it's usually chip designers who are first to adopt it.
That, however, is what has been happening in AI since the 1960s. The first chess computers were amazing because never before had anything other than a human been able to play chess. Now you look at the games they play and you can clearly see that they weren't "intelligent" but they were doing new stuff that five years before had been impossible. The same with expert systems doing diagnosis of patients, until we realised that there are edge cases that really matter and don't work so well. The same when the first computer won at the game of go and so on and on.
The funniest thing about this XKCD explaining why some computer things are hard [xkcd.com] is that it's now obsolete (look at the alt-text). You can now get machine vision object identification as a library and have been able to for some years.
"AI" is always one of the cutting edge areas of computer science research because the people trying to do "AI" are looking for the things that people can do but computers can't and trying to work out why and see if they can build a system which can bridge the gap. That doesn't mean that the things that they produce are actually "artificial intelligence", something we don't even have a clear definition of. Instead they are normally "failed" experiments on the route to building "general artificial intelligence" and sometimes, maybe, signposts in the right direction.
Re: Dont Call it AI (Score:4, Insightful)
I agree that intelligence is poorly defined, and "AGI" even more. And sure, you can talk about it as a continuous progress - although there are important leaps.
Sometimes you can say, "in a few years, we'll be able to...". If you only assume continuous progress, you'll usually get such predictions right. But if you assume leaps, you'll be dead wrong 9 times out of 10 - look at the alt text of that xkcd comic for an example. Yet leaps do happen. Modern ML has had several leaps.
The "It's not AI" cranks, though, haven't been paying attention. And the whole whine is an excuse for falling behind in their own field.
Re: Dont Call it AI (Score:5, Interesting)
I think it's funny how much people get their knickers in a knot over the intelligence of a machine. It ultimately points to the obscurity of the measure, which is a philosophical question.
If a computer can beat a master-level player at chess or Go, then the computer is showing it's smarter. Such AIs learned this ability similarly to their human counterparts, but under altered conditions. It doesn't make them less intelligent within the domain.
By saying AGI, you try to say human intelligence is better because it's generalized. This is a bias, generalized computing is no better than computers designed for a specific purpose, and it follows that AI designed for a specific purpose is showing a domain-level intelligence. Being generalized means being mediocre at many things, and there is no reason that a combination of current specialized AIs cannot be "managed in this way".
By taking offense at calling it intelligence you have made this a territory that you believe humans have supremacy over. It's a zealous position.
Which arrives at the point. Machines are intelligent now, but they aren't spiritual. They never arrive at a biased position with the zealousness of those who downplay AI, all while mixing words. They do not know their limits, but they also do not hold fervent positions. These are their limits... not ones of intelligence, because the reality is that these systems are getting smarter...
Re: (Score:2)
By saying AGI, you try to say human intelligence is better because it's generalized. This is a bias, generalized computing is no better than computers designed for a specific purpose, and it follows that AI designed for a specific purpose is showing a domain-level intelligence. Being generalized means being mediocre at many things, and there is no reason that a combination of current specialized AIs cannot be "managed in this way".
Generalised intelligence is clearly, almost by definition, better than specific intelligence in some way or other. If you take a chess grand master and tell them "we're playing the game of chess as normal except the knight can move three times in a row" they will instantly understand and be able to adapt to be good players of the new, slightly altered, game of chess. Until you retrain it, your chess computer will continually lose at this new game, possibly even to the level of simply being unable to play
Re: (Score:2)
An expert is someone who learns more and more about less and less, until he knows everything there is to be known about nothing at all.
Re: (Score:2)
Generalised intelligence is clearly, almost by definition, better than specific intelligence in some way or other.
I agree. However your example to make this point is purely a thought experiment. We don't know how well a grand master would play with such an adjustment in rules. It may be the case that junior players would play better than masters given this rule adjustment. We simply do not know. Nor without experimentation, can we adequately say there isn't a modern AI solution that would permit the AI to be retrained for this rule adjustment in say 5 minutes, given appropriate weighting of new input. In the same way t
Re: (Score:2)
We don't know how well a grand master would play with such an adjustment in rules.
The fact that the grand master is able to play under the new rules at all is a little wonder and is the thing that defines general intelligence.
That same person operates a car to get home after the match, types an e-mail on his or her computer, uses a remote control to flip through the channels of their tv and cooks dinner. None of these (and a million other) things can be performed by the chess computer, even conceptually.
Re: (Score:2)
This is a bias, generalized computing is no better than computers designed for a specific purpose
How do you figure the specialized intelligence will be able to abstract a problem and apply the solution in adapted form to a new type of problem?
How do you figure a chess computer would produce the theories of relativity? Quantum mechanics?
In other words, you do not understand what is gained by having a general intelligence.
If a computer can beat a master-level player at chess or Go, then the computer is showing it's smarter.
No, it's not.
Smart, as usually defined by humans, encompasses many more activities than just playing chess. You would not say that a person is smart because they play chess per se. A sm
Re: Dont Call it AI (Score:4, Insightful)
Let that corner of slashdot rant on about how it's not "real AI". Meanwhile I'm doing things which weren't possible 5 years ago, and were thought (5 years ago) to be impossible "without real AI".
That, however, is what has been happening in AI since the 1960s.
That is precisely his point. The field of AI has never been restricted to the pursuit of "general intelligence." It has always been focused on solving problems and completing tasks which are currently believed to require human thought. The solutions are rarely even much of a stepping stone towards general intelligence, because that is rarely the goal. The goal is usually to take a task which required human input and was thought to be too complex for traditional automation, and automate it anyway. Successful results often become the traditional automation methods of the near future.
Re: (Score:2)
That is precisely his point. The field of AI has never been restricted to the pursuit of "general intelligence." It has always been focused on solving problems and completing tasks which are currently believed to require human thought. The solutions are rarely even much of a stepping stone towards general intelligence, because that is rarely the goal. The goal is usually to take a task which required human input and was thought to be too complex for traditional automation, and automate it anyway. Successful results often become the traditional automation methods of the near future.
The goal of "Artificial Intelligence" research is, by definition the creation of intelligence, even if not "general intelligence". There are plenty of things that spin off from that, but at the point that "machine learning" experts stop having any hope of developing something which might in future help build "intelligence" is the point where they have stopped doing AI and are now doing "machine learning" research alone and independent of their original field. There is nothing wrong with that; it just isn'
Re: (Score:2)
I beg to differ.
The chip designers I know are still hanging onto 20-year-old Verilog coding styles, and hardly know about the RTL capabilities of SystemVerilog.
I don't blame them, because both Synopsys and Cadence took years to implement those features after they were standardized.
In this case we are talking about AI driven placement, which is transparent to the peasant physical design engineer. He runs the same script, perhaps adding a new command line flag there, and that's all he needs to know. If h
Re: (Score:2)
I think the problem is that every fuzzy logic algorithm under the sun gets labelled as AI even if it's no more intelligent than your average pocket calculator. And the more impressive examples of AI get drowned in this sea of trash marketing. I've seen some very impressive examples of text answers generated by AI but IDK if those examples were cherry-picked.
AI can do some impressive things, one example here is AI re-skinning GTA5 in realtime with textures learned from real world:
https://youtu.be/22Sojt [youtu.be]
Re: (Score:2)
Yes, it is a tool that exists. It's even quite useful, even if used in place of actually understanding the problem one wants to solve, which is a bad thing.
But *it is not AI*!
It's a very shitty, oversimplified knock-off of natural neural nets. One that's as close to real neural nets as a perfectly spherical horse on a sinusoidal trajectory is to a horse race. (I've simulated proper spiking neural nets with complex synapses in the past. But I'm still nowhere near the features a real brain has.)
And just being
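For anyone curious what "spiking" means in contrast to the tensor-based nets being criticized here, below is a minimal leaky integrate-and-fire neuron. It is a textbook toy, not the poster's simulator, and all constants are illustrative.

```python
# Minimal leaky integrate-and-fire neuron (textbook model): the membrane
# potential leaks toward rest, integrates input current, and emits a
# discrete spike whenever it crosses threshold.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: dv/dt = (v_rest - v + i_in) / tau
        v += dt * (v_rest - v + i_in) / tau
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(t)
            v = v_reset            # hard reset after the spike
    return spikes

if __name__ == "__main__":
    current = [0.0] * 20 + [20.0] * 200   # step input after 20 ms
    print("spike times (ms):", simulate_lif(current))
```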
Re: (Score:2)
Artificial Intelligence means an autonomously learning and acting intelligence. Meaning it can form its own independent opinions.
I don't buy your definition.
I completely denounce your use of the word 'opinion'. An opinion is a complex mental state. Bacteria can act intelligently yet they have no brains to form opinions. Not even a neural net. So using a word like 'opinion' to define intelligence seems like a very poor choice of words.
Moreover, autonomy and action is already part of the definition of intelligence. Those are not the characteristics that separate artificial intelligence from natural intelligence. The only difference is
Re: (Score:1)
Also: No, it's porn producers who first adopt it. ;))
Re: (Score:3)
Well, I call it "Artificial Ignorance", which nicely sums up what it is.
Re: (Score:2)
As someone said in another article - don't call it AI, we are not there yet
"AI" is a completely meaningless term. What it means encompasses such a wide array of possibilities and contradictory default understandings in individuals as to generate unnecessary confusion while at the same time conveying no useful information.
The world is better off not using the term "AI". People should instead reference specific technologies or capabilities. Saying AI instead of GAN or AI instead of AGI or AI instead of decision tree serves no purpose other than to mislead and confuse.
Debugging (Score:5, Insightful)
Good luck debugging it.
Re: (Score:3)
Re: (Score:3)
And, assuming it really does come up with something useful, how do you patent something that was invented by an AI that everybody else in the industry is using too?
The AI is only doing routing so far. Since the tool isn't considered to be a person, you can't steal its work; the work doesn't belong to it. It belongs to whoever used the tool, or to whoever was paying their salary and has an agreement that what they create on the clock becomes the property of the check writer. Since the AI is just a tool, it has no effect whatsoever on who created the work.
Re: (Score:2)
or was paying their salary and has an agreement with them as a result that what they create on the clock becomes the property of the check writer.
Lookit this guy here who can develop stuff off the clock and say they own it. Many employee agreements, legal or not, don’t even have this on the clock stipulation - you hand any and all over to your masters.
Re: (Score:1)
THIS here is exactly what I meant with my above post:
The patent goes to the *programmer* of the artificial neural network!
He programmed it, by adjusting its weights to give it the wanted output for the given input.
How do people's brains lapse so much, that they seriously believe it is a person on its own? You would have to *choose* to not think about it and how it works, to be even able to come to that conclusion, every time the topic is mentioned.
Not that that's impossible. It's just impossible with the cu
Re: (Score:2)
He programmed it, by adjusting its weights to give it the wanted output for the given input.
What if the weights are adjusted not by the programmer but by an evolutionary algorithm and what if the inputs are from the real world and not hand selected (nor totally anticipated) by the programmer?
How do people's brains lapse so much, that they seriously believe it is a person on its own?
Because people project a lot. To some people a pebble moved by turbulent water can seem alive.
Couple that with a general lack of understanding of what intelligence is and people (in my opinion including you) will think a number of false things about intelligence.
Not that that's impossible. It's just impossible with the current tensor-based approaches that dump all of the realistic and essential effects of real neurons for "I don't care for anything, I just want SPEEED!". ... As they say: The computer is the fastest idiot in the world.
I think you're not going far enough in this line
Re: (Score:2)
No problem, it's just the physical layout that's AI-driven.
It's like saying you can't debug your code because the compiler is AI-driven,
and for chip design, if you need to debug actual cells on the synthesized netlist, it is already quite messy even without AI.
Logic gets optimized out where possible, duplicated when you need to drive a large number of inputs, and signals are retimed if you allow your tool to do it,
which brings in whole new logical content not found in your design.
Re: (Score:1)
AI will debug it also. Just don't ask to have the pod bay doors opened.
Chip designers already used "AI" (Score:3)
Except it was classical AI with, IIRC, the occasional genetic algorithm thrown in. This is simply using a neural net, though I'm struggling to see how it'll do any better, as this kind of thing requires absolute 100% cannot-be-wrong precision, not some best effort that works 99.9% of the time but occasionally thinks that add means div if it gets a particular number. I jest, but this is not an area where ANNs should really be used. IMO obv.
Chips making chips? (Score:4, Funny)
Re: (Score:2)
I'm pretty sure you're thinking of Terminator, but I'm not gonna judge your religion.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Try the Orange Catholic Bible.
Not seeing it... (Score:4, Interesting)
Re:Not seeing it... (Score:4, Informative)
It's all a question of complexity. If the current algorithm runs for hours or days and gets you worse results than an AI algorithm that runs much faster, then it is an improvement. You can't "debug" either the traditional or the new output.
What you usually do is take the new netlist and formally compare it to the original logical design written in Verilog (formal equivalence tools).
If you want to be sure, you extract the netlist from the final GDS-II layout.
In both cases, AI or not, you use the same techniques and tools to formally verify your output.
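As a toy illustration of what "formally compare" means here: commercial equivalence checkers run SAT/BDD engines on full netlists, but for a tiny combinational block the same question ("do these two implementations compute the same function?") can be answered by exhaustive comparison. The two functions below are invented examples.

```python
# Illustration only: a brute-force equivalence check between a "golden"
# function and a restructured version, over every input combination.
from itertools import product

def reference(a, b, c):
    # "Golden" logic, as the RTL author wrote it.
    return (a and b) or (a and c)

def optimized(a, b, c):
    # What a synthesis/placement tool might emit after restructuring.
    return a and (b or c)

def equivalent(f, g, n_inputs):
    return all(f(*bits) == g(*bits)
               for bits in product([False, True], repeat=n_inputs))

if __name__ == "__main__":
    print("netlists equivalent:", equivalent(reference, optimized, 3))
```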
Re: (Score:2)
I assume that, by "AI", they mean applying neural networks somewhere and somehow. I'm not seeing it... Chip design and layout is just about the most rule-oriented process imaginable. There's a whole plethora of tools to support the process. What can a neural net contribute? And when (not if, but when) it goes wrong, how the devil are you going to "debug" it?
Explainable AI... LOL.... more likely by (re)training until you start to get useful answers or give up.
I would think of the role of these things as an optimization problem, like classic TSP. The brute force method is infeasible and algorithm development takes world-class talent, so what's left is to throw 8-bit math at the problem and train up something that does a good enough job.
While optimal solutions are hard to find, the solutions you do come up with can relatively easily be checked for both correctness and fitne
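A minimal sketch of that "hard to optimize, cheap to check" point, using TSP as in the comment above. The nearest-neighbour heuristic and the checker are illustrative only, not anyone's production flow.

```python
# Toy TSP: a greedy nearest-neighbour heuristic proposes a tour, and a
# separate, cheap checker verifies validity and reports its length.
import math
import random

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(cities):
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(cities[last], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def check(tour, cities):
    # Correctness: every city visited exactly once. Fitness: total length.
    assert sorted(tour) == list(range(len(cities))), "invalid tour"
    return tour_length(tour, cities)

if __name__ == "__main__":
    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(30)]
    tour = nearest_neighbour(cities)
    print("valid tour, length:", check(tour, cities))
```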
No, they are not (Score:4, Funny)
They are doing some of the optimization using Artificial Ignorance, that is it. And it remains to be seen how well that actually works, because almost universally results from AI optimization are _worse_ than ones done manually. They are cheaper though, but if you have large numbers of the design produced, it pays off to invest a lot more into the design.
Waiting for Watson! (Score:2)
Because Watson could do AI before everyone got hyped up about blockchain technology.
So once Watson picks up this trick, all others will be defeated.
Pseudo-flattening? (Score:2)
Read the blog post not TFA (Score:5, Informative)
Sounds like there is something to it, which you would expect when there are billions of dollars to throw at these problems.
https://blogs.synopsys.com/fro... [synopsys.com]
DSO.ai, which was honored with a 2020 World Electronics Achievement Award for “Innovative Product of the Year,” provides:
A reinforcement-learning engine that can explore trillions of design recipes ...
The ability to simultaneously pursue many complex objectives, formulated in the performance space of chip design (PPA)
Full integration with the Synopsys Fusion Design Platform, which includes RTL design and synthesis, physical implementation, physical verification, signoff, test automation, flow automation, and multi-die system design and integration tools for full-flow quality-of-results (QoR) and time-to-results (TTR)
Cloud readiness for fast deployment in on-premises, public, and hybrid clouds
After adopting DSO.ai in its advanced automotive chip design environment, Renesas has experienced the solution’s ability to autonomously converge to PPA targets and enhance overall design team productivity. In a recent production tapeout, a North American integrated device manufacturer was able to achieve up to 15% better total power, 30% better leakage, and 2 to 5x faster convergence using DSO.ai – all with a single engineer, in just weeks. An Asia-Pacific global electronics powerhouse was able to identify PPA solutions that were simply deemed ‘unattainable’ with traditional techniques, meeting timing constraints weeks ahead of schedule and boosting maximum frequency by hundreds of MHz. (Learn more by watching our SNUG World 2021 executive panel, “How Is AI Changing the Way We Approach Chip Design?” You’ll need to log in to access the session on demand.)
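To make the "reinforcement-learning engine exploring design recipes" idea concrete, here is a deliberately tiny sketch: an epsilon-greedy bandit choosing among a few made-up recipe settings against a made-up PPA score. Every name and number below is hypothetical; the real DSO.ai search space and engine are far larger and more sophisticated.

```python
# Toy sketch only: epsilon-greedy bandit over hypothetical "design
# recipes", learning which one yields the best combined PPA score.
import random

# Hypothetical recipes: (effort level, target utilization)
RECIPES = [("low_effort", 0.60), ("med_effort", 0.70), ("high_effort", 0.80)]

def run_flow(recipe):
    # Stand-in for a full synthesis/place-and-route run that returns a
    # noisy combined power/performance/area score for the recipe.
    effort, util = recipe
    base = {"low_effort": 0.5, "med_effort": 0.7, "high_effort": 0.8}[effort]
    return base + 0.2 * util + random.gauss(0, 0.05)   # higher = better

def epsilon_greedy(trials=200, epsilon=0.1):
    counts = [0] * len(RECIPES)
    values = [0.0] * len(RECIPES)
    for _ in range(trials):
        if random.random() < epsilon:
            i = random.randrange(len(RECIPES))                     # explore
        else:
            i = max(range(len(RECIPES)), key=lambda k: values[k])  # exploit
        reward = run_flow(RECIPES[i])
        counts[i] += 1
        values[i] += (reward - values[i]) / counts[i]   # running mean
    return RECIPES[max(range(len(RECIPES)), key=lambda k: values[k])]

if __name__ == "__main__":
    print("best recipe found:", epsilon_greedy())
```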
To borrow from Archer . . . (Score:2)
Meme Police! (Score:2)
The actual quote from Archer is "Do you want ants? Because that's how you get ants." [youtube.com]
So your post should have been "Do you want Skynet? Because that's how you get Skynet."
And so it begins (Score:1)
Playing Chess (Score:2)
Something similar: the IBM Deep Blue system, which won the chess match against the human champion Garry Kasparov, was rule based. Since then, an AI-based system, which can be trained in a few days by playing against itself, has proved much superior.
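As a toy illustration of the self-play idea (nowhere near AlphaZero's scale, and every parameter below is illustrative): tabular Q-learning teaching itself the game of Nim by playing both sides with a single value table.

```python
# Tiny self-play example: tabular Q-learning on Nim (take 1-3 stones,
# whoever takes the last stone wins). One Q-table plays both sides;
# values are stored from the point of view of the player to move, so the
# opponent's best reply enters as a negated term (negamax style).
import random
from collections import defaultdict

STONES, MOVES = 15, (1, 2, 3)
Q = defaultdict(float)

def legal(state):
    return [m for m in MOVES if m <= state]

def choose(state, epsilon):
    moves = legal(state)
    if random.random() < epsilon:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(state, m)])

def train(episodes=20000, alpha=0.2, epsilon=0.2):
    for _ in range(episodes):
        state = STONES
        while state > 0:
            move = choose(state, epsilon)
            nxt = state - move
            if nxt == 0:
                target = 1.0                                    # winning move
            else:
                target = -max(Q[(nxt, m)] for m in legal(nxt))  # opponent replies
            Q[(state, move)] += alpha * (target - Q[(state, move)])
            state = nxt

if __name__ == "__main__":
    train()
    # With 15 stones the winning strategy is to leave a multiple of 4,
    # so the learned move should be to take 3.
    print("learned move from 15 stones:", choose(15, epsilon=0.0))
```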
Who will be responsible for design failure? (Score:2)
I used to work with circuit design software in the 90s. Fun and challenging. I'd love to have a good look at how this new software works. But there are questions that must be answered:
How will humans be able to verify the quality of a circuit designed by an AI? Is there a way to have the AI explain WHY it used certain design features instead of others? Will the AI be granted patents on its designs? Who will be responsible when flaws are discovered in the new design after it has been deployed to millions of
Not for its software? (Score:3)