


Microsoft's Analog Optical Computer Shows AI Promise (microsoft.com)
Four years ago a small Microsoft Research team started creating an analog optical computer. They used commercially available parts like sensors from smartphone cameras, optical lenses, and micro-LED lights finer than a human hair. "As the light passes through the sensor at different intensities, the analog optical computer can add and multiply numbers," explains a Microsoft blog post.
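As a rough illustration of what "adding and multiplying with light" can mean, here is a minimal sketch (not Microsoft's actual design): a number is encoded as a light intensity, multiplication happens when a beam passes through an attenuator whose transmittance encodes the second operand, and addition happens when several beams land on the same sensor pixel. All function names and the normalization scheme are assumptions for illustration.

import numpy as np

def encode(x, max_val=1.0):
    """Encode a non-negative number as a normalized light intensity."""
    return np.clip(x / max_val, 0.0, 1.0)

def multiply(intensity, transmittance):
    """A modulator pixel scales the intensity (optical multiplication)."""
    return intensity * np.clip(transmittance, 0.0, 1.0)

def accumulate(intensities):
    """A sensor pixel integrates all the light falling on it (optical addition)."""
    return np.sum(intensities)

# Example: a tiny weighted sum, 0.2*0.5 + 0.8*0.25, computed "optically".
inputs = [0.2, 0.8]
weights = [0.5, 0.25]
beams = [multiply(encode(x), w) for x, w in zip(inputs, weights)]
print(accumulate(beams))  # ~0.3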
They envision the technology scaling to a computer that for certain problems is 100X faster and 100X more energy efficient — running AI workloads "with a fraction of the energy needed and at much greater speed than the GPUs running today's large language models." The results are described in a paper published in the scientific journal Nature, according to the blog post:

At the same time, Microsoft is publicly sharing its "optimization solver" algorithm and the "digital twin" it developed so that researchers from other organizations can investigate this new computing paradigm and propose new problems to solve and new ways to solve them. Francesca Parmigiani, a Microsoft principal research manager who leads the team developing the AOC, explained that the digital twin is a computer-based model that mimics how the real analog optical computer [or "AOC"] behaves; it simulates the same inputs, processes and outputs, but in a digital environment — like a software version of the hardware. This allowed the Microsoft researchers and collaborators to solve optimization problems at a scale that would be useful in real situations. This digital twin will also allow other users to experiment with how problems, either in optimization or in AI, would be mapped and run on the analog optical computer hardware.

"To have the kind of success we are dreaming about, we need other researchers to be experimenting and thinking about how this hardware can be used," Parmigiani said.
Hitesh Ballani, who directs research on future AI infrastructure at the Microsoft Research lab in Cambridge, U.K., said he believes the AOC could be a game changer. "We have actually delivered on the hard promise that it can make a big difference in two real-world problems in two domains, banking and healthcare," he said. Further, "we opened up a whole new application domain by showing that exactly the same hardware could serve AI models, too."

In the healthcare example described in the Nature paper, the researchers used the digital twin to reconstruct MRI scans with a good degree of accuracy. The research indicates that the device could theoretically cut the time it takes to do those scans from 30 minutes to five. In the banking example, the AOC succeeded in resolving a complex optimization test case with a high degree of accuracy...
As researchers refine the AOC, adding more and more micro-LEDs, it could eventually have millions or even more than a billion weights. At the same time, it should get smaller and smaller as parts are miniaturized, researchers say.
So, correct me if I'm wrong (Score:2)
But at least at a first read through TFA, it appears the titular optical computer doesn't actually exist yet - to this point, everything is happening on the "digital twin".
Re: (Score:2)
It took me a while to find it, but it looks like they have actually built something -> https://news.microsoft.com/sou... [microsoft.com]
Re:So, correct me if I'm wrong (Score:5, Insightful)
It took me a while to find it, but it looks like they have actually built something -> https://news.microsoft.com/sou... [microsoft.com]
What they've built is an 8-variable optical computer. They're hoping to scale this up soon, but the amount of scaling isn't mentioned.
Of course, this completely misses the key challenge of AI computing. The ALU/compute part is the easy part. It's a small part of the chip and it consumes a small part of the power. The key problem is data movement, particularly how to quickly and efficiently grab billions of variables from memory, send them to billions of compute units, then send those outputs to the next set of billions of compute units, and then back to memory. This is one of the reasons that GPU hardware has done so well in the AI space. Microsoft's optical computer, even if it's wildly successful, only addresses a small part of the challenge.
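A back-of-envelope sketch of the point above: for single-stream LLM inference, moving the weights dominates the arithmetic. The model size and hardware numbers below (7B fp16 weights, ~300 TFLOP/s, ~2 TB/s) are assumptions for illustration, not measurements of any specific chip.

params = 7e9                  # weights touched per generated token
bytes_per_weight = 2          # fp16
flops_per_token = 2 * params  # one multiply + one add per weight

peak_flops = 300e12           # assumed peak fp16 compute, FLOP/s
mem_bandwidth = 2e12          # assumed memory bandwidth, bytes/s

compute_time = flops_per_token / peak_flops
memory_time = (params * bytes_per_weight) / mem_bandwidth

print(f"compute-bound time per token: {compute_time*1e3:.2f} ms")
print(f"memory-bound time per token:  {memory_time*1e3:.2f} ms")
# The memory-bound time comes out far larger: the ALUs idle while weights
# stream in, which is exactly the data-movement bottleneck described above.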
Re: (Score:3)
The webpage this article links to says: "we rely on a physical system to embody the computations and step away from several fundamentally limiting aspects of digital computing -- avoiding the separation of compute from memory..."
I won't try to vouch for this technology or its scalability, but it does sound like they are attacking data movement.
Re: (Score:3, Informative)
"computation and storage happen at the same place in AIM, unlike binary computers, which need memory in one location and compute in another to function."
This is a limitation of the von Neumann architecture, not due to it being "binary" per se, and is known as the von Neumann bottleneck.
Memory+compute chips certainly can be designed so that arithmetic digital logic is physically adjacent to the data it's operating on. The idea is called compute-in-memory (or in-memory computing) and it's an area of active research: link to a paper on the topic [arxiv.org]
Of course, computing in memory requires a completely different way of thinking about software than CPUs or GPUs do.
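A minimal simulation of the compute-in-memory idea, under the usual analog-crossbar framing: the weight matrix is stored as conductances, input voltages drive the rows, and each column current is the corresponding dot product (Ohm's law does the multiplies, Kirchhoff's current law does the sums). Purely illustrative; this is not a model of any specific device.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(0.0, 1.0, size=(4, 3))  # stored in place as conductances G
voltages = rng.uniform(0.0, 1.0, size=4)      # input vector applied as row voltages

# Column current j = sum_i G[i, j] * V[i]: the multiply-accumulate happens
# inside the memory array, so no weight ever travels to a separate ALU.
column_currents = voltages @ weights

# Sanity check against an explicit digital computation of the same sums.
expected = [sum(weights[i, j] * voltages[i] for i in range(4)) for j in range(3)]
assert np.allclose(column_currents, expected)
print(column_currents)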
Re: (Score:3)
Which is... what they're doing.
This is an implementation of something I've been talking about for years. The brain is much more efficient than today's AI chips - not because "wetware" is efficient (it's terribly inefficient, with a huge amount of overhead and numerous chained loss steps), but because analog accumulation is efficient compared to vector math for AI tasks.
Re: (Score:2)
Maybe you could put a few trainable digital layers sprinkled here and there to make it viable for a long time and still come out way ahead on inference costs over a pure digital model. "How to speak English" and "the appearance of objects in the world" and "history" don't change often.
Re: (Score:3)
You want the laws of physics to do the "math" for you. For the input field of a neuron in a DNN, instead of multiplying two vectors... flow paths... resistance... to the flow on that path (in the case of light, optical... transparency, respectively). You then need a physical nonlinear activation function (with a bias) based on how much flow accumulated... leaves that neuron to the next layer... It might require predictive coding networks... if we ended up switching to PCNs, as they have all sorts of great properties (including realtime learning).
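Read literally, the comment above maps onto a few lines of code: each input "flows" to the neuron along a path whose transparency (or conductance) is the weight, the accumulated flow is the pre-activation, and a saturating physical element with a bias decides what leaves for the next layer. This is a sketch of that reading only; the names and the tanh-style saturation are assumptions.

import numpy as np

def physical_neuron(inputs, transparencies, bias):
    # Weighted sum performed by the medium itself: flow times transparency, summed.
    accumulated_flow = np.sum(inputs * transparencies)
    # A saturating detector plays the role of the nonlinear activation with a bias.
    return np.tanh(accumulated_flow + bias)

x = np.array([0.3, 0.9, 0.1])   # incoming flows
w = np.array([0.8, 0.2, 0.5])   # path transparencies standing in for weights
print(physical_neuron(x, w, bias=-0.2))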
Blah blah blah blah.
All I got was: Positronic Brain!
(And possibly: "Resistance is Flow, not Futile".)
Re: (Score:2)
The link directly above says: "computation and storage happen at the same place in AIM, unlike binary computers, which need memory in one location and compute in another to function."
The webpage this article links to says: "we rely on a physical system to embody the computations and step away from several fundamentally limiting aspects of digital computing -- avoiding the separation of compute from memory..."
I won't try to vouch for this technology or its scalability, but it does sound like they are attacking data movement.
This idea of doing compute at storage or memory is an old idea. It works fairly well if the memory stream is serial and the serial streams are independent, and it fails completely for AI because the memory streams are basically everything to everything. The "easy" way to address this problem is a crossbar that allows all the data to flow anywhere, but, of course, that doesn't work because a crossbar of that scale is impractical in size, power, and efficiency. How to design this memory subsystem is the key challenge.
Re: So, correct me if I'm wrong (Score:2)
They are exploring light for movement across *different* GPUs:
https://tech.yahoo.com/ai/arti... [yahoo.com]
I am actually convinced this optical path will turn out right. The status quo is that all the deterministic math you describe happens, only to produce an output vector from which a result is randomly chosen. Why not use the soft stochastic qualities of analog light computing to get the same sculpted random result? Plus optical compute has neat features like free Fourier transforms for convents and vision applications.
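The "free Fourier transforms" point rests on the convolution theorem: a (circular) convolution is just a pointwise product in the Fourier domain, and a lens performs the transform passively. A small sketch with numpy's FFT standing in for the optics; sizes and the random filter are arbitrary illustration choices.

import numpy as np

rng = np.random.default_rng(1)
image = rng.standard_normal((8, 8))
kernel = np.zeros((8, 8))
kernel[:3, :3] = rng.standard_normal((3, 3))   # small filter, zero-padded to the image size

# "Optical" route: transform, multiply pointwise, transform back.
conv_fourier = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

# Direct route: explicit circular convolution, for comparison.
direct = np.zeros_like(image)
for dy in range(8):
    for dx in range(8):
        direct += kernel[dy, dx] * np.roll(np.roll(image, dy, axis=0), dx, axis=1)

assert np.allclose(conv_fourier, direct)  # same result, but the FFT came "for free"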
Re: So, correct me if I'm wrong (Score:2)
CONVNETS I meant.
Re: (Score:2)
Yeah... at least during inference (training this with traditional methods will be hard). Inference is extremely tolerant of noise. They should be able to use vanishingly short timesteps.
Re: So, correct me if I'm wrong (Score:2)
Noise tolerance in general struck me when I was playing with LLMs on my desktop. They would have these models pruned down with massively reduced bit depths, and they were almost as good as the originals. It hints at some kind of vast reducibility to me, but it's not clear in a discrete sense what it would be. But obviously the brain is analogue and noisy, so it seems promising.
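A toy version of that observation: crush a random weight matrix down to a few bits and watch how gracefully a layer's output degrades. This is illustrative only; real LLM quantization is far more careful (per-channel scales, outlier handling, and so on), and the matrix here is random rather than trained.

import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((256, 256)) * 0.05
x = rng.standard_normal(256)

def quantize(w, bits):
    # Uniform symmetric quantization to the given bit depth.
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

y_full = W @ x
for bits in (8, 6, 4, 3):
    y_q = quantize(W, bits) @ x
    rel_err = np.linalg.norm(y_full - y_q) / np.linalg.norm(y_full)
    print(f"{bits}-bit weights -> relative output error {rel_err:.1%}")
# The error grows gradually as precision drops rather than falling off a cliff,
# which is the same property that makes noisy analog accumulation plausible for inference.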
Re:So, correct me if I'm wrong (Score:5, Interesting)
This isn't the first optical computer, nor is it the first one to implement a neural network. There isn't an "ALU" or anything like that.
The easiest way to do it is to take a piece of glass and etch a pattern into it so that when you shine light through it it implements the neural net. For example: https://opg.optica.org/prj/ful... [optica.org]
Those approaches generally have the issue that optics are linear. The real magic of neural networks is that they can solve nonlinear problems, but only if they incorporate nonlinearity. You can do that with special optics, but it's not easy or easily controllable.
You can also make hybrid systems with active optical components. Glancing at Microsoft's paper that seems to be more what they're doing. You use microLEDs to emit light, liquid crystal arrays to manipulate it, and a camera or array of photocells to convert it to electrical signals. You can then use some simple electronics to do things like rectify the signal, then feed that to the next layer of microLEDs.
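A rough simulation of that hybrid loop: emitters produce intensities, a modulator array applies the weights as transmittances, detectors sum each column, and simple electronics apply the nonlinearity before driving the next layer. The function names, the signed-weight trick (splitting each weight into positive and negative passes), and rectification as the activation are all assumptions for the sketch, not details from Microsoft's paper.

import numpy as np

def optical_layer(x, W):
    """One hybrid layer: light intensities are non-negative, so split W into +/- parts."""
    x = np.clip(x, 0.0, None)            # emitter intensities can't go negative
    pos = np.clip(W, 0.0, None)          # transmittances for the positive weights
    neg = np.clip(-W, 0.0, None)         # transmittances for the negative weights
    detected = x @ pos - x @ neg         # electronics subtract the two detector readings
    return np.maximum(detected, 0.0)     # rectification serves as the nonlinearity

rng = np.random.default_rng(3)
W1 = rng.standard_normal((5, 4))
W2 = rng.standard_normal((4, 2))
x = rng.uniform(0.0, 1.0, size=5)

h = optical_layer(x, W1)      # detector output from layer 1 drives layer 2's emitters
print(optical_layer(h, W2))   # output of the second "optical" layer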
Re: So, correct me if I'm wrong (Score:2)
Analog computing again? (Score:3)
Well, we did get the miracle of the opamp out of it once; maybe we'll get something useful out of this as well.
Re: (Score:2)
Optical opamp is the first thing I thought too.
Re: Analog computing again? (Score:2)
Who knows, there's so much fascinating phenomena and tech around modern lasers that it won't surprise me in the least if the first usable "quantum computer" is optical.
Re: (Score:2)
Who knows, there's so much fascinating phenomena and tech around modern lasers that it won't surprise me in the least if the first usable "quantum computer" is optical.
I knew the frik'n shark heads would figure into this somehow.
"No, Mr. Shark. I expect you to inference!"
Re: Analog computing again? (Score:2)
WUT?
Re: (Score:2)
Yeah but it's AI now.
AI! you hear!
AI
AI
AI
AI!!!
You want more AI!!!
Re: (Score:3)
Of course I want more AI. I mean, imagine if I wake up one day, there's no AI and I have to actually think! I might get a headache!
That's Astounding (Score:2)
or at least it used to be...
Oh FFS... Enough with the AI plugs already (Score:3)
What next? An AI-powered Microsoft mouse pad? An AI-powered Microsoft toilet roll holder?
Look, we know you sunk five kajillion dollars in AI and you ain't got nothing to show for it. Quit ramming it down everybody's throats already!
Re: (Score:1)
Tell me about it. AI is the new Crypto. All marketing and no value.
Re: (Score:3)
What next? An AI-powered Microsoft mouse pad? An AI-powered Microsoft toilet roll holder?
Look, we know you sunk five kajillion dollars in AI and you ain't got nothing to show for it.
I have made some funny audio tracks using AI generated celebrity/politician voices. Surely that was worth the trillion zillion dollars?
Awesome they made Orac (Score:2)
The snarkiest computer ever [youtube.com]
Re:Awesome they made Hex (Score:2)
AI: fixing or creating more problems ??? (Score:2)
Re: (Score:2)
Like social media, it's hard to put the cat back into the bag!
I'm still trying to catch up on comprehending the technology of 25 years ago! Self-Driving Virtual Reality Quantum cats that you don't even know if they're alive or dead in that bag that you 3-D printed on the Blockchain IoT Cloud.