End of Moore's Law Forcing Radical Innovation 275
dcblogs writes "The technology industry has been coasting along on steady, predictable performance gains, as laid out by Moore's law. But stability and predictability are also the ingredients of complacency and inertia. At this stage, Moore's Law may be more analogous to golden handcuffs than to innovation. With its end in sight, systems makers and governments are being challenged to come up with new materials and architectures. The European Commission has written of a need for 'radical innovation in many computing technologies.' The U.S. National Science Foundation, in a recent budget request, said technologies such as carbon nanotube digital circuits will likely be needed, or perhaps molecular-based approaches, including biologically inspired systems. The slowdown in Moore's Law has already hit high-performance computing. Marc Snir, director of the Mathematics and Computer Science Division at the Argonne National Laboratory, outlined in a series of slides the problem of going below 7nm on chips, and the lack of alternative technologies."
Rock Star coders! (Score:5, Insightful)
The party's over. Get to work on efficient code. As for the rest of all you mothafucking coding wannabes, suck it! Swallow it. Like it! Whatever, just go away.
Re:Rock Star coders! (Score:5, Insightful)
Efficient code and new ways to solve computing problems using massive multi-core solutions.
However many "problems" with performance today are I/O-based and not calculation based. It's time for the storage systems to catch up in performance with the processors, and they are on the way with SSD disks.
Re:Rock Star coders! (Score:5, Interesting)
I think the next couple of decades will be mostly about efficiency. Between mobile computing and the advantage of ever-more cores, the benefits from lower power consumption (and reduced heat load as a result) will be huge. And unlike element size, we're far from basic physical limits on efficiency.
Re: (Score:2)
Re:Rock Star coders! (Score:4, Interesting)
True but misleading. A smaller element has less surface area to dissipate its heat, so it must either generate less heat or run hotter. And silicon ran into the upper limits of the material years ago, which is why processors have required active cooling for a long time now. But that has practical limits, so the TDP of new flagship processors stays around the same (100-200 W), and after a few generations their tech gets recycled into a power-optimized model.
In other words, you can't reduce element size without also reducing power usage or the damn thing will melt.
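The melt argument can be put in back-of-the-envelope terms (the numbers below are illustrative units, not measurements): dynamic power per transistor goes roughly as C·V²·f, while a 2x linear shrink cuts a transistor's area 4x, so holding per-transistor power constant quadruples power density.

```python
# Illustrative Dennard-scaling arithmetic; all quantities are in arbitrary
# units chosen for this sketch, not real process numbers.

def power_density(p_per_transistor, area):
    """Watts per unit area for one transistor occupying `area`."""
    return p_per_transistor / area

# Baseline transistor.
p, area = 1.0, 1.0
baseline = power_density(p, area)

# Shrink the linear dimension 2x without reducing per-transistor power:
# area falls 4x, so power density quadruples -- the "melt" scenario above.
shrunk = power_density(p, area / 4)
assert shrunk == 4 * baseline

# Classic Dennard scaling also cut voltage and capacitance with size,
# keeping density flat; once voltage scaling stalled, only lowering the
# frequency or the energy per operation was left.
```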
Re: (Score:3)
I think the next couple of decades will be mostly about efficiency. Between mobile computing and the advantage of ever-more cores, the benefits from lower power consumption (and reduced heat load as a result) will be huge. And unlike element size, we're far from basic physical limits on efficiency.
Efficiency in consumer products will have to outweigh greed first.
I never asked for anyone to put a 10-million app capability on my phone, or any of the other 37 now-standard features that suck the life out of my phone battery just by looking at it.
If today's smartphone hardware had to run only the functions of circa 10 years ago, the batteries would likely last for weeks. Our technology even today is far better than we think. The only thing we're better at is greed feeding excess.
Re:Rock Star coders! (Score:4, Insightful)
I never asked for anyone to put a 10-million app capability on my phone
Yet you bought a phone with that capability.
or any of the other 37 now-standard features that suck the life out of my phone battery
You can buy a dumb phone with a battery that lasts a week or more, for a lot less than you paid for your smart phone.
The only thing we're better at is greed feeding excess.
It was silly of you to pay extra for features that you didn't want. It is even sillier to then whine that you were somehow a victim of "greed".
Re: (Score:3)
Re: (Score:2)
You still do not get it. There will be no further computing power revolution.
Re:Rock Star coders! (Score:5, Insightful)
3D chips, memristors, photonics, spintronics, QC (Score:5, Informative)
I see many emerging technologies that promise further great progress in computing. Here are some of them. I wish some industry people here could post updates about their path to market. They may not literally prolong Moore's Law in terms of transistor counts, but they promise great performance gains, which is what really matters.
3D chips. As materials science and manufacturing precision advances, we will soon have multi-layered (starting at a few layers that Samsung already has, but up to 1000s) or even fully 3D chips with efficient heat dissipation. This would put the components closer together and streamline the close-range interconnects. Also, this increases "computation per rack unit volume", simplifying some space-related aspects of scaling.
Memristors. HP is ready to produce the first memristor chips but is delaying them for business reasons (how sad is that!). Others are also preparing products. Memristor technology enables a new approach to computing, combining memory and computation in one place. They are also quite fast (competitive with current RAM) and energy-efficient, which means easier cooling and a possible 3D layout.
Photonics. Optical buses are finding their way into computers, and network hardware manufacturers are looking for ways to perform some basic switching directly with light. Some day these two trends may converge to produce an optical computer chip that would be free from the limitations of electrical resistance/heat and EM interference, and could thus operate at a higher clock speed. Would be more energy efficient, too.
Spintronics. Probably further in the future, but a potentially very high-density and low-power technology actively developed by IBM, Hynix and a bunch of others. This one would push our computation density and power efficiency limits to another level, as it allows performing some computation using magnetic fields, without electrons actually moving as electrical current (excuse my layman's understanding).
Quantum computing. This could qualitatively speed up whole classes of tasks, potentially bringing AI and simulation applications to new levels of performance. The only commercial offering so far is D-Wave, and it's not a classical QC, but so many labs are working on this that results are bound to come soon.
Re: 3D chips, memristors, photonics, spintronics, (Score:5, Insightful)
You may see them, but no actual expert in the field does.
- 3D chips are decades old and have never materialized. They do not really solve the interconnect problem either and come with a host of other unsolved problems.
- Memristors do not enable any new approach to computing, as there are neither many problems that would benefit from this approach, nor tools. The whole idea is nonsense at this time. Maybe they will have some future as storage, but not anytime soon.
- Photonics is a dead-end. Copper is far too good and far too cheap in comparison.
- Spintronics is old and has no real potential for ever working at this time.
- Quantum computing is basically a scam perpetrated by some part of the academic community to get funding. It is not even clear whether it is possible for any meaningful size of problem.
So, no. There really is nothing here.
Re: 3D chips, memristors, photonics, spintronics, (Score:4, Insightful)
- 3D chips are decades old and have never materialized.
24-layer flash chips are currently produced [arstechnica.com] by Samsung. IBM works on 3D chip cooling. [ibm.com] Just because it "never materialized" before, doesn't mean it won't happen now.
- Memristors do not enable any new approach to computing, as there are neither many problems that would benefit from this approach, nor tools. The whole idea is nonsense at this time. Maybe they will have some future as storage, but not anytime soon.
Memristors are great for neural network (NN) modelling. MoNETA [bu.edu] is one of the first big neural modelling projects to use memristors for that. I do not consider NNs a magic solution to everything, but you must admit they have plenty of applications in computationally expensive tasks.
And while HP reconsidered its previous plans [wired.com] to offer memristor-based memory by 2014, they still want to ship it by 2018. [theregister.co.uk]
- Photonics is a dead-end. Copper is far too good and far too cheap in comparison.
Maybe fully photonic-based CPUs are way off, but at least for specialized use there are already photonic integrated circuits [wikipedia.org] with hundreds of functions on a chip.
- Spintronics is old and has no real potential for ever working at this time.
MRAM [wikipedia.org] uses electron spin to store data and is coming to market. Application of spintronics for general computing may be a bit further off in the future, but "no potential" is an overstatement.
- Quantum computing is basically a scam perpetrated by some part of the academic community to get funding. It is not even clear whether it is possible for any meaningful size of problem.
NASA, Google [livescience.com] and NSA [bbc.co.uk], among others, think otherwise.
So, no. There really is nothing here.
I respectfully disagree. We definitely have something.
Re: (Score:3)
I respectfully disagree. We definitely have something.
That there's research into exotic alternatives is fine, but just because they've researched flying cars and fusion reactors for 50 years doesn't mean it will ever materialize or be usable outside a very narrow niche. If we hit the limits of copper, there's no telling if any of these will materialize or just continue to be interesting but overall uneconomical and impractical to use in consumer products. Like for example supersonic flight, it exists but all commercial passengers go on subsonic flights since th
Re: 3D chips, memristors, photonics, spintronics, (Score:4, Interesting)
It's true that we may not see another 90s-style MHz race on our desktops. But there is an ongoing need for faster, bigger, better supercomputers and datacenters, and there is technology that can help there. I did quote some examples where this technology is touching the market already. And once it is adopted and refined by government agencies and big data companies, it will also trickle down into the consumer market.
I/O will get much faster. Storage will get much bigger. Computing cores may still become faster or more energy-efficient. New specialized co-processors may become common, for example for NN or QC. Then some of them may get integrated, as happened with FPUs and GPUs. So computing will most likely improve in different ways than before, but it is still going to develop fast and remain exciting.
And some technology may stay out of the consumer market, like your supersonic flight example, but it will still benefit society.
Re: 3D chips, memristors, photonics, spintronics, (Score:4, Funny)
In conclusion, you're right. There's no chance of any revolutionary computing technology coming forward, and there's no chance that humans will ever fly.
Re: 3D chips, memristors, photonics, spintronics, (Score:4, Informative)
I looked up some companies by name (too bad you posted as AC and didn't mention them), and here is what I found:
Intel reveals a neuromorphic chip design based on memristors and spintronics [technologyreview.com]
HP and Hynix postpone memristor-based memory to avoid cannibalizing their flash business [xbitlabs.com]
This pearl deserves to be quoted:
"In terms of commercialization, we will have something technologically viable by the end of next year. Our partner, Hynix, is a major producer of flash memory, and memristors will cannibalize its existing business by replacing some flash memory with a different technology. So the way we time the introduction of memristors turns out to be important," said Stan Williams, Hewlett-Packard senior fellow and director of the company's cognitive systems laboratory, during a conversation at the Kavli Foundation.
SanDisk and Toshiba are testing a ReRAM (memristor memory) chip [theinquirer.net]
HP working with AMD, Intel, ARM and others to release memristor-based "nanostores" [thinlinedata.com].
A working memristor has already been proven in the lab by HP, and they are now working with AMD, Intel, ARM and others to release what they call "nanostores". A chip that combines the memristor and the logic of the CPU could end up replacing all current microprocessor and memory architectures.
A startup named "Crossbar" will try to beat HP to market with memristor-based ReRAM. [crossbar-inc.com]
Re: (Score:2)
Indeed. Just my point. And that evolution is going slower and slower.
Re: (Score:3)
Amen. The most used CPU architectures in the world today are directly descended from microcontroller architectures designed in the late 1960s and early 1970s, based on the work of a handful of designers. None of those designers could have planned for or envisaged their chips as being the widely used CPUs of today.
Re:Rock Star coders! (Score:5, Interesting)
There was an article not too long ago (can't remember where) that mentioned that a lot of the performance improvement over the years came from better algorithms rather than faster chips (e.g. one can double the processor speed but that pales with changing an O(n**2) algorithm to O(n*log(n)) one).
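A sketch of the parent's point, with abstract step counts standing in for running time (the unit constants are an illustrative assumption, not a benchmark):

```python
import math

def steps_quadratic(n):
    """Abstract step count for an O(n**2) algorithm (unit constant)."""
    return n * n

def steps_nlogn(n):
    """Abstract step count for an O(n*log(n)) algorithm (unit constant)."""
    return n * math.log2(n)

n = 10**6
# Doubling the processor is a one-time 2x win; switching algorithms wins
# a factor of n / log2(n), which keeps growing with n.
hardware_speedup = 2
algorithm_speedup = steps_quadratic(n) / steps_nlogn(n)
assert algorithm_speedup > 25_000 * hardware_speedup  # ~50,000x at n = 10**6
```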
SSDs based on flash aren't the ultimate answer. Ones that use either magneto-resistive memory or ferroelectric memory show more long-term promise (e.g. MRAM can switch as fast as L2 cache--faster than DRAM but with the same cell size). With near-unlimited memory at that speed, a number of multistep operations can be converted to a single table lookup. This is done a lot in custom logic, where the logic is replaced with a fast SRAM LUT.
Storage systems (e.g. NAS/SAN) can be parallelized but the limiting factor is still memory bus bandwidth [even with many parallel memory buses].
Multicore chips that use N-way mesh topologies might also help. Data is communicated via a data channel that doesn't need to dump to an intermediate shared buffer.
Or hybrid cells that have a CPU but also have programmable custom logic attached directly. That is, part of the algorithm gets compiled to RTL that can then be loaded into the custom logic just as fast as a task switch (e.g. on every OS reschedule). This is why realtime video encoders use FPGAs. They can encode video at 30-120 fps in real time, but a multicore software solution might be 100x slower.
Re: (Score:3, Insightful)
(e.g. one can double the processor speed but that pales with changing an O(n**2) algorithm to O(n*log(n)) one).
In some cases. There are also a lot of cases where overly complex functions are used to manage lists that usually contain three or four items and never reach ten.
Analytical optimizing is great when it can be applied, but just because one has more than one data item to work on doesn't automatically mean that an O(n*log(n)) solution will beat an O(n**2) one. The O(n**2) solutions are often faster per iteration, so it is a good idea to consider how many items one will usually work with.
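A toy illustration of that constant-factor point, counting comparisons (both sorts are textbook versions written for this sketch, not from any particular library):

```python
def insertion_sort(a):
    """Return (sorted copy, comparison count). O(n**2) worst case."""
    a, cmps = list(a), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            cmps += 1
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a, cmps

def merge_sort(a):
    """Return (sorted copy, comparison count). O(n*log(n))."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort(a[:mid])
    right, cr = merge_sort(a[mid:])
    merged, cmps, i, j = [], cl + cr, 0, 0
    while i < len(left) and j < len(right):
        cmps += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, cmps

# On a tiny, already-sorted list the "worse" algorithm does less work:
_, small_ins = insertion_sort([1, 2, 3, 4])
_, small_mrg = merge_sort([1, 2, 3, 4])
assert small_ins < small_mrg        # insertion sort wins at n = 4

# At larger n on reversed input the asymptotics dominate:
data = list(range(200, 0, -1))
_, big_ins = insertion_sort(data)
_, big_mrg = merge_sort(data)
assert big_ins > 10 * big_mrg       # merge sort wins decisively
```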
Re:Rock Star coders! (Score:4, Insightful)
Meh, that's a copout. IO has always been a bottleneck. Why do you think that Knuth spends so much time optimizing sorting algorithms for tapes? It's not a new issue, solve it by changing your algorithm (aka calculation).
The current generation of programmers are so used to doing cookie cutter work, gluing together lower level libraries that they do not understand in essentially trivial ways, that when they are faced with an actual mismatch between the problem and the assumptions of the lower level code, there is nothing they know how to do. Here's a hint: throw away the lower level crutches, and design a scalable solution from scratch. Most problems can be solved in many ways, and the solution that uses your favourite framework is probably never the most efficient.
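For the tape-era flavor of "change your algorithm": a minimal external merge sort sketch, assuming one integer per line and an arbitrarily chosen chunk size; it keeps only one chunk in memory at a time and streams the final k-way merge.

```python
import heapq
import os
import tempfile

def external_sort(infile, outfile, chunk_size=1000):
    """Sort a file of one integer per line using bounded memory:
    sort fixed-size chunks into temporary "run" files, then do a
    streaming k-way merge -- the classic answer to an I/O bottleneck."""
    runs = []
    with open(infile) as f:
        while True:
            chunk = [int(line) for _, line in zip(range(chunk_size), f)]
            if not chunk:
                break
            chunk.sort()  # only chunk_size integers ever live in RAM
            tmp = tempfile.NamedTemporaryFile("w", delete=False,
                                              suffix=".run")
            tmp.write("\n".join(map(str, chunk)) + "\n")
            tmp.close()
            runs.append(tmp.name)
    run_files = [open(name) for name in runs]
    try:
        with open(outfile, "w") as out:
            streams = [(int(line) for line in f) for f in run_files]
            for value in heapq.merge(*streams):  # lazy k-way merge
                out.write(f"{value}\n")
    finally:
        for f in run_files:
            f.close()
        for name in runs:
            os.unlink(name)
```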
Moore's Law isnt a law you know (Score:5, Insightful)
It's more of a prediction that has mostly stayed on target because of its challenging nature.
Re: (Score:3, Funny)
now now don't you go spreading propaganda that laws aint laws... next there will be idiots coming out of the woodwork claiming that einstein may have got it wrong and that it's ok to take a dump whilst being cavity searched by the police
Moore's "law" & AI (Score:3, Interesting)
In my mind it was an interesting statistical coincidence, *when it was first discussed*.
Then the hype took over, and we know what happens when tech and hype meet up...
Out-of-touch CEOs get harebrained ideas from non-tech marketing people about what makes a product sell, then the marketing people dictate to the product managers what benchmarks they have to hit... then the new product is developed, and any regular /. reader knows the rest.
It's bunk. We need to dispel these kinds of errors in language instead o
Re: (Score:2)
We can now do stupid things more quickly and in vaster quantity?
Blind ants, now need to search more branches (Score:4, Insightful)
Now the blind ants (researchers) will need to explore more of the tree (the computing problem space)... there are many fruits out there yet to discover; this is just the end of the very easy fruit. I happen to believe that FPGAs can be made much more powerful because of some premature optimization. Time will tell if I'm right or wrong.
Re: Blind ants, now need to search more branches (Score:2)
So true. I also happen to believe that adding an FPGA coprocessor to general purpose CPUs, that applications could reconfigure on the fly to perform certain tasks, could lead to massive increases in performance.
Re: Blind ants, now need to search more branches (Score:5, Interesting)
As somebody who has watched what has been going on in that particular area for more than 2 decades, I do not expect anything to come out of it. FPGAs are suitable for doing very simple things reasonably fast, but so are graphics cards, and with a much better interface. But as soon as communication between computing elements or large memory is required, both FPGAs and graphics cards become abysmally slow in comparison to modern CPUs. That is not going to change, as it is an effect of the architecture. There will not be any "massive" performance increase anywhere now.
Re: Blind ants, now need to search more branches (Score:4, Interesting)
Programming FPGAs is far more complex than programming GPGPUs and you would need a huge FPGA to match the compute performance available on $500 GPUs today. FPGAs are nice for arbitrary logic such as switch fabric in large routers or massively pipelined computations in software-defined radios but for general-purpose computations, GPGPU is a much cheaper and simpler option that is already available on many modern CPUs and SoCs.
Re: Blind ants, now need to search more branches (Score:5, Informative)
Then a GPU will typically beat an FPGA solution. There's a pretty large problem space for which GPUs suck. If you have memory access that is predictable but doesn't fit the stride models that a GPU is designed for then an FPGA with a well-designed memory interface and a tenth of the arithmetic performance of the GPU will easily run faster. If you have something where you have a long sequence of operations that map well to a dataflow processor, then an FPGA-based implementation can also be faster, especially if you have a lot of branching.
Neither is a panacea, but saying a GPU is always faster and cheaper than an FPGA makes as much sense as saying that a GPU is always faster and cheaper than a general-purpose CPU.
Re: (Score:2)
FPGAs are relatively expensive compared to graphics chips, actually most chips. It's still not clear what FPGAs can accomplish in a general computing platform that will be of value, considering the other lower-cost options available.
The other half of the problem here is that, in comparison to GPU programming interfaces such as OpenCL and Cuda, there is relatively little effort in bringing FPGA development beyond ASIC-Lite. The tool chains and development processes (what FPGA vendors like to call "design fl
Re: (Score:3)
All of this points out what I'm saying... they've optimized for small(ish) systems that have to run very quickly, with a heavy emphasis on "routing fabric" internally. This makes them hard to program, as they are heterogeneous as all get out.
Imagine a grid of logic cells, a nice big, homogeneous grid, that was symmetric. You could route programs in it almost instantly; there'd be no need for custom tools to program it.
The problem IS the routing grid... it's a premature optimization. And for big scale stuff
Re: (Score:2)
The Zynq 7100 is 4 grand at digikey.
Re:Blind ants, now need to search more branches (Score:4, Funny)
just need to shoot more advanced alien spaceships down near roswell
Ends of Moore's Law in software ? (Score:5, Insightful)
The really sad thing regarding this "Moore's Law" business is that, while the hardware has kept getting faster and more power efficient, the software that runs on it has kept becoming more and more bloated.
Back in the days before the 8088, we already had music notation software running on the Radio Shack TRS-80 Model III.
Back then, due to the constraints of the hardware, programmers had to use every trick in the book (and off it) to make their programs run.
Nowadays, even the most basic "Hello World" program comes up in megabyte range.
Sigh !
Re:Ends of Moore's Law in software ? (Score:5, Insightful)
Re: Ends of Moore's Law in software ? (Score:2)
The sad part is not that it's easier to code, but that many people have grown complacent with their code executing at abysmal performance rates. And I also blame compilers/interpreters that don't mind bloating things up.
Re: (Score:2)
1. Is hello world easier to code now? When I had an 8-bit machine, the program was
(type)
print "Hello world" (hit enter)
That didn't seem hard. Now, what steps do you have to go through on a modern PC you've just bought? Not as easy, is it.
2. Agree with the other poster: why are compilers throwing 99.999% redundant code into the software? Pathetic.
Re: (Score:2)
Where it becomes more important is in very large scale systems.. supporting a few dozen,
Re: (Score:3)
Where it becomes more important is in very large scale systems.. supporting a few dozen, or even a few hundred users at a time is pretty easy to do with even modest hardware today.
Yeah, you say "support users", but supporting users *well* can easily be computationally expensive enough to warrant diligent programming. Think automated knowledge systems, for example.
Re: Ends of Moore's Law in software ? (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Apparently you've been under a rock for the last 5 years and completely missed the move to virtualization. Can you guess what the main business driver is? More efficient use of computing resources, power, and data centre infrastructure resulting from consolidation.
Re: (Score:2)
Back in the days of pre-8088 we already had music notation softwares running on Radio Shack TRS-80 model III.
In 4-bit color on a 640x480 screen, with ugly fonts (if you could even get more than one!) and lousy, cheap sounding midi playback. Seriously, TRS-80 music notation software was severely limited compared to what we have today.
Re: (Score:3)
Nowadays, even the most basic "Hello World" program comes up in megabyte range.
The most basic "Hello World" program doesn't have a GUI (if it has a GUI, you can make it more basic by just printing with printf), so let's see:
I'm not sure what "others" is, but I suspect there's a bug there (I'll take a look). 4K text, 4K data (that's the pag
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
This guy [timelessname.com] managed to get it into 145 bytes (142 on his website, but he printed "Hi World" instead of "Hello world") with no external dependencies.
The smallest ELF executable I've seen is this 45 byte example [muppetlabs.com]. It doesn't print anything and it violates the ELF standard, but Linux (or at least his version) is still willing to execute it.
That said, there isn't much point in optimizing away libc except as an academic exercise. Yes, it's a few megabytes in size, but it's shared across every running userspace pr
Re: (Score:2)
What do you suggest we do with all the computing power we've gained then? It seems perfectly reasonable to use it to make software development easier and faster, and make more beautiful and more usable user interfaces.
Re: Ends of Moore's Law in software ? (Score:2)
Re: (Score:3)
There have been several occasions where I've seen a team "solve" a problem by throwing another couple of gigabytes at a Java VM and adding a task to reboot the system every couple of days. I've lost count of the times where simply optimiz
Re: (Score:3)
But that's exactly the point... For decades, Moore's Law has made it seem like a crime to optimize software. No matter how inefficient the software, hardware was cheaper than developer time, and the next generation of hardware would be fast enough to run the bloated, inefficient crap.
The end of Moore's Law... if it actually happens this time, unlike the last 1,000 times it was predicted... will mean good people who
Re:Ends of Moore's Law in software ? (Score:5, Insightful)
You can still write software that efficient today. The down side is that you can only write software that efficient if you're willing to have it be about that complex too. Do you want your notes application to just store data directly on a single disk from a single manufacturer, or would you rather have an OS that abstracts the details of the device and provides a filesystem? Do you want the notes software to just dump the contents of memory, or do you want it to store things in a file format that is amenable to other programs reading it? Do you want it to just handle plain text for lyrics, or would you like it to handle formatting? What about unicode? Do you want it to be able to render the text in a nice clean antialiased way with proportional spacing, or are you happy with fixed-width bitmap fonts (which may or may not look terrible, depending on your display resolution)? The same applies to the notes themselves. Do you want it to be able to produce PostScript for high-quality printing, or are you happy for it to just dump the low-quality screen version as a bitmap? Do you want it to do wavetable-based MIDI synthesis or are you happy with just beeps?
The reason modern software is bigger is that it does a hell of a lot more. If you spent as much effort on every line of code in a program with all of the features that modern users expect as you did in something where you could put the printouts of the entire codebase on your office wall, you'd never be finished and you'd never find customers willing to pay the amount it would cost.
Re:Ends of Moore's Law in software ? (Score:5, Insightful)
Would you rather that your CPU and memory were always underutilized by software, going to waste?
yes more efficient and fast code would be much better
Re: (Score:3)
Re: (Score:3)
lets look at that, will more computers make a mcaffy or norten security suits run appreciably faster? No because they are badly written programs that are resource hogs the eat processing power and can pull a workstation to a crawl. A faster more effencent and less resource hogging code would be better in this case then throwing hardawre at it. For many problems throwing hardware at a problem is just like throw money at a problem you are ignoring the root issue and will eventually run out of resources to thr
Re:Ends of Moore's Law in software ? (Score:5, Funny)
I like this guy. He doesn't stop for punctuation.
Re:Ends of Moore's Law in software ? (Score:5, Insightful)
Would you rather that your CPU and memory were always underutilized by software, going to waste?
yes more efficient and fast code would be much better
Then you should be using a 20-year-old computer, with its lean software and scarce resources. Why buy more powerful hardware if you have no use for its capabilities? The rest of us will keep using hardware that allows possibilities that were unheard of a few years ago.
To quote an old adage: "What Moore's law giveth, Gates taketh away."
I would prefer to use lean software on powerful hardware, so as to actually gain the advantages of said hardware, rather than let bad code and bloat roll back the advantages new hardware has given.
Re: (Score:2)
No, I would use it to do whatever I am trying to do more quickly. Have you ever had to wait for a program to compile, a video to transcode, an archive to compress, or something as simple as a document to render? For example, there was a bug a while back in Open/LibreOffice that caused .rtf documents to take a long time to render. Worst part was that the render time grew exponentially with the length of the document; my first encounter with it was when I opened an ebook saved in that format, and it took over ten m
Re:Ends of Moore's Law in software ? (Score:4, Insightful)
Pointless cycles because of poor code and compiler optimizations is hardly what I would call "utilization".
Re:Ends of Moore's Law in software ? (Score:4, Interesting)
Would you rather that your CPU and memory were always underutilized by software, going to waste?
Of course, because then we would either save in power consumption or alternatively do more interesting stuff with the extra free resources that we get.
Re: (Score:2)
If you use utf8, then unicode is as efficient as ascii...for everything you can do with ascii. Nobody should be required to use utf32 except when it's the most convenient choice. (And there's never a good argument for using utf16, unless you're on a computer with 16 bit words.)
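The ASCII-compatibility claim is easy to verify (the strings below are arbitrary examples):

```python
# ASCII text costs exactly one byte per character in UTF-8 -- the two
# encodings are byte-identical on the ASCII range.
ascii_text = "plain old ASCII"
assert ascii_text.encode("utf-8") == ascii_text.encode("ascii")
assert len(ascii_text.encode("utf-8")) == len(ascii_text)

# Non-ASCII code points cost more bytes, which is where the trade-off lives.
assert len("é".encode("utf-8")) == 2      # U+00E9: two bytes in UTF-8
assert len("é".encode("utf-16-le")) == 2  # two bytes in UTF-16 as well...
assert len("a".encode("utf-16-le")) == 2  # ...but so is plain ASCII there
```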
Re: (Score:2)
Re: (Score:2)
The text contains 100 characters.
How much memory should I allocate for UTF8, without wasting memory?
Re: (Score:2)
Where did the text come from? You had to read it from somewhere, so you know what it contains already.
Re: (Score:2)
The text contains 100 characters.
How much memory should I allocate for UTF8, without wasting memory?
Beginner's question. The text contains 100 bytes, how much memory should you allocate? You rarely care how many characters there are.
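A minimal sketch of the bytes-versus-characters point (the sample string is my own):

```python
# You allocate for bytes, not characters, because UTF-8 characters vary
# in width; the byte length is what the buffer needs to hold.
text = "naïve café"            # 10 characters, but not 10 bytes
encoded = text.encode("utf-8")
assert len(text) == 10          # code points
assert len(encoded) == 12       # ï and é take two bytes each

# Allocate len(encoded) bytes; the character count never enters into it.
buffer = bytearray(len(encoded))
buffer[:] = encoded
assert bytes(buffer).decode("utf-8") == text
```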
Re: (Score:2)
If you use utf8, then unicode is as efficient as ascii.
For storage.
But when you actually come to display stuff, rendering engines that can handle massive character sets, variable-byte-count encodings of code points, converting mixed-directionality text from logical order to physical order, combining elements in various ways to apply arbitrary diacritics to a character, and so on don't come free. Having those things in your text rendering engine comes at a price even if you don't actually use them.
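One concrete taste of that complexity, using combining marks (a sketch of the issue, not of any rendering engine):

```python
# One on-screen "character" can be several code points: an accented letter
# may be stored precomposed or as a base letter plus a combining mark, and
# a renderer has to treat both the same.
import unicodedata

composed = "é"            # U+00E9, one code point
decomposed = "e\u0301"    # 'e' + combining acute accent, two code points

assert composed != decomposed                  # different sequences...
assert len(composed) == 1 and len(decomposed) == 2
# ...that normalize to the same text and render identically on screen.
assert unicodedata.normalize("NFC", decomposed) == composed
```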
Re: Ends of Moore's Law in software ? (Score:4, Insightful)
How about spending 20x the man-hours for a 10,000% performance gain? That is what I've recently experienced myself, in reverse: an embedded device interface getting rewritten to require 20x fewer man-hours to maintain... at a 100x performance hit. Suffice it to say it went from quite snappy to completely useless, but apparently it's my fault for not upgrading the hardware.
And best of all... (Score:5, Insightful)
We might even stop writing everything in Javascript?
Re:And best of all... (Score:5, Funny)
We might even stop writing everything in Javascript?
Indeed. JavaScript is the assembly language of the future, and we need to stop coding in it. There already are many nicer languages which are then compiled into Javascript, ready for execution in any computing environment.
Re: (Score:3, Insightful)
We might even stop writing everything in Javascript?
Indeed. JavaScript is the assembly language of the future, and we need to stop coding in it. There already are many nicer languages which are then compiled into Javascript, ready for execution in any computing environment.
You were modded insightful rather than funny. I weep for the future.
Re: (Score:2)
Mmm. Skynet. (Score:3)
..took us in directions we hadn't considered.
I forget the exact quote, but what a time to be alive. My first computer program was written on a VIC-20, and watching the industry grow has been incredible. I am not worried about the demise of traditional lithographic techniques. I'm actually expecting the next generation to provide a leap in speed, now that there's a strong incentive to look at different technologies.
Here's to yet another generation of cheap CPU.
Imagine that... (Score:2)
feature bottleneck (Score:2)
I agree in spirit, but you perpetuate a false dichotomy based on a misunderstanding of *why* software bloat happens, in the broad industry-wide context.
Just look at Windows. M$ bottlenecked features willfully because it was part of their business plan.
Coders, the people who actually write the software, have always been up to the efficiency challenge. The problem is the biggest money wasn't pay
Software improvements matter more than hardware (Score:4, Interesting)
This is OK. For many purposes, software improvements, in the form of new algorithms that are faster and use less memory, have done more for heavy-duty computation than hardware improvements have. Between 1988 and 2003, linear programming on a standard benchmark improved by a factor of about 40 million. Of that, a factor of about 40,000 came from software and only about 1,000 from hardware (these numbers are partly ill-defined because there's some interaction between how one optimizes software for hardware and vice versa). See this report http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf [whitehouse.gov]. Similar remarks apply to integer factorization and a variety of other important problems.
The other important issue is that improvements in algorithms provide ever-growing returns because they can improve the asymptotics, whereas any hardware improvement is a one-time constant factor. And for many practical algorithms, asymptotic improvements are still occurring. Just a few days ago a much more efficient algorithm for approximating max cut on undirected graphs was published. See http://arxiv.org/abs/1304.2338 [arxiv.org].
If all forms of hardware improvement stopped today, there would still be massive improvement in the next few years on what we can do with computers simply from the algorithms and software improvements.
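The asymptotics-vs-constant-factor argument can be made concrete with a back-of-the-envelope Python sketch (the operation counts below are idealized models, not measurements of any real program):

```python
import math

def naive_ops(n):
    # Idealized cost of an O(n^2) algorithm
    return n * n

def better_ops(n):
    # Idealized cost of an O(n log n) replacement
    return n * math.log2(n)

hardware_speedup = 1000  # a generous one-time constant factor

for n in (10**3, 10**6, 10**9):
    algo_speedup = naive_ops(n) / better_ops(n)
    print(n, round(algo_speedup), algo_speedup > hardware_speedup)
```

The algorithmic speedup is n / log2(n), so it grows without bound: at n = 1000 the hardware factor still wins, but by n = 10^9 the better algorithm is ahead by more than four orders of magnitude, and no fixed hardware improvement can catch it.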
Re:Software improvements matter LESS than hardware (Score:2)
Misleading.
Yes, I've gotten 100-fold improvements on a single image processing algorithm. It was pretty easy, too.
However, that only speeds up that one algorithm; 10x faster hardware speeds up everything 10x.
The use of interpreted languages and bloated code has more than offset the gains from better algorithms.
The net result is that the overall 'performance' increase has been mostly due to hardware, not software.
Re: (Score:3, Interesting)
Between 1988 and 2003, linear programming on a standard benchmark improved by a factor of about 40 million. Out of that improvement, about 40,000 was from improvements in software and only about 1000 in hardware improvements (these numbers are partially not well-defined because there's some interaction between how one optimizes software for hardware and the reverse).
I downloaded the report at the link that you have so generously provided - http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf - but I found the figures somewhat misleading.
In the field of numerical algorithms, however, the improvement can be quantified. Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later, in 2003, this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.
Professor Grötschel's citation was in regard to numerical algorithms, and no doubt there have been some great improvements achieved through new algorithms. But that is just one tiny aspect of the whole spectrum of the programming scene.
Out of the tiny segment of the numerical crunching, blo
Implications (Score:4, Interesting)
Some implications:
Re: (Score:2)
Some implications:
Looks like you had a parallelism problem right there!
Re: (Score:2)
What part of "we can't squeeze any more transistors onto a chip" do you not understand?
If you drop the requirement to have them all work together on a single problem in a coordinated fashion, you can squeeze more on. For example, you can use larger chips (duh!) and have a way to cope with some of the cores being non-functional due to manufacturing flaws (as the number of defects per wafer is approximately constant). But that's a very different way of working.
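A hedged sketch of that yield argument in Python: the Poisson yield model (probability of zero defects in an area is exp(-D*A)) is standard in the literature, but the defect density and core count below are made-up illustrative numbers, not figures for any real process.

```python
import math

def zero_defect_prob(defect_density, area):
    # Poisson model: P(no defects in a region of given area) = exp(-D * A)
    return math.exp(-defect_density * area)

cores = 64
core_area_cm2 = 0.05    # hypothetical per-core area
defects_per_cm2 = 0.4   # hypothetical defect density

per_core = zero_defect_prob(defects_per_cm2, core_area_cm2)
all_cores_good = per_core ** cores   # chip only ships if every core works
expected_good = cores * per_core     # chip ships with bad cores disabled

print(round(per_core, 3))        # ~0.98 per-core yield
print(round(all_cores_good, 3))  # ~0.278: most chips would be discarded
print(round(expected_good, 1))   # ~62.7 usable cores per chip on average
```

With these numbers, demanding a perfect chip throws away nearly three quarters of the wafer, while tolerating dead cores salvages almost everything, which is exactly why defect tolerance lets you use larger dies.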
dumbest thing I've read all day (Score:2)
Seriously? It's like, people wake up and say, "it would be such a blessing if I could never get a faster computer." Does that make sense at all?
Re: (Score:2)
Tell that to the XP holdouts.
No newer technology means no change, and sticking with the best ever made solely because it is familiar.
Re: (Score:2)
With Moore's law, we're talking about faster processors. No changes necessary, other than your motherboard. I've never met anyone who was in love with their motherboard.
Re: (Score:2)
The intent behind that sentence seems fairly clear, that the end of predictable speed increases may lead to greater focus on whole other avenues of development and other unpredictable and exciting ideas popping up.
Re: (Score:2)
The intent behind that sentence seems fairly clear, that the end of predictable speed increases may lead to greater focus on whole other avenues of development and other unpredictable and exciting ideas popping up.
Yes, why don't we go back to the abacus, and see what new ideas come up!
Seriously, do you remember when microcomputers came out? Academics complained that they were setting the computer world back three decades. That's basically how you sound.
Re: (Score:2)
Yes, why don't we go back to the abacus, and see what new ideas come up!.
Logical fallacy: nobody is suggesting going back. If you want to take the abacus as an example, then what you are saying is: let's just keep adding beads and rows to our abacuses, and come up with ever more ingenious bead-sliding mechanisms to allow faster and faster movement of beads.
Now we're approaching the diminishing-returns limit of that road, and we need to start inventing something else to make calculations faster, and perhaps open entirely new ways of doing calculations, analogous to what you can do with a slide rule.
Feature size vs "Feature size" (Score:4, Interesting)
The defining characteristic of the 7nm node is that it's the one after the 10nm node. I can't remember the last time I worked in a process where there was a notable dimension that matched the node name, either drawn or effective.
Marc Snir gets bogged down in an analysis of gate length reduction, which is quite beside the point. If it gets harder to shrink the gate than to do something else, then something else will be done. I've worked on processes with the same gate length as the "previous" process, and I've probably even worked on a process that had a larger gate than the previous one. The device density still increased, since gate length is not the only dimension.
Governments? Nonsense! (Score:2)
If, just if, anything can be done, governments will not play any significant role in the process. They do not even seem to understand the problem; how could they ever be part of the solution? And that is just it: it is quite possible that there is no solution, or that it may take decades or centuries for that one smart person to be in the right place at the right time. Other than blanket research funding, governments cannot do anything to help. Instead, scientific funding today is only given to concret
Slides were kind of cool (Score:2)
well then... (Score:2)
What's old is new again. (Score:2)
Perhaps writing efficient code will come back into style.
The Way Ahead (Score:2)
Time to revisit architecture... (Score:3)
The refinement of process has postponed this for a long while, but the time has come to explore new architectures and technologies. The Mill architecture [ootbcomp.com] is one such example, and aims to bridge the enormous chasm of inefficiency between general purpose CPUs and DSPs. Conservatively, they are expecting a tenfold improvement in performance/W/$ on general purpose code, but the architecture is also well suited to wide MIMD and SIMD.
Another area ripe for innovation is memory technologies, which have suffered a similar stagnation limited to refinement of an ancient technology. The density of both cache and main memory can be significantly improved on the same process with Thyristor-RAM or Z-RAM. Considering the potential benefits and huge markets, it is vexing that more resources aren't expended toward commercializing better technologies. Some of the newer technologies also scale down better.
Something to replace the garbage that is NAND flash would also be welcome, yet sadly there appears to be no hurry there either. One thing is certain: there is a desperate need to find a way to commercialize better technologies rather than perpetually refining inferior ones. Though examples abound, perhaps none is more urgent than the Liquid fluoride thorium reactor [wikipedia.org]. Molten salt reactors could rapidly replace fossil fuels with clean and abundant energy while minimizing environmental impact, and affordable energy is the basis for all prosperity.
Re:Anonymous Coward's Law (Score:5, Funny)
Moore's yawn ... er, law. It has ended, again and again. It must be the conjoined twin of Voyager, which has left the solar system 78 times in the past 14 years.
Wake me up when some real news gets in.
Re: (Score:2)
I thought they finally killed that off last month. Or the other 7K times it was declared dead.
Re: (Score:3)
Re: (Score:2)