Japan Eyes World's Fastest-Known Supercomputer, To Spend Over $150M On It (reuters.com)

Japan plans to build the world's fastest-known supercomputer in a bid to arm the country's manufacturers with a platform for research that could help them develop and improve driverless cars, robotics and medical diagnostics. From a Reuters report: The Ministry of Economy, Trade and Industry will spend 19.5 billion yen ($173 million) on the previously unreported project, a budget breakdown shows, as part of a government policy to get back Japan's mojo in the world of technology. The country has lost its edge in many electronic fields amid intensifying competition from South Korea and China, home to the world's current best-performing machine. In a move that is expected to vault Japan to the top of the supercomputing heap, its engineers will be tasked with building a machine that can make 130 quadrillion calculations per second -- or 130 petaflops in scientific parlance -- as early as next year, sources involved in the project told Reuters. At that speed, Japan's computer would be ahead of China's Sunway TaihuLight, which is capable of 93 petaflops. "As far as we know, there is nothing out there that is as fast," said Satoshi Sekiguchi, a director general at Japan's National Institute of Advanced Industrial Science and Technology, where the computer will be built.
  • by Anonymous Coward

    Does it meet the minimum system requirements for VR?

  • So what do you do with all the old supercomputers when they're too big/power hungry vs performance? Helluva paperweight.

    • by nairnr ( 314138 )

      You can upgrade them. It isn't like these are monolithic computers: you can upgrade the network capabilities or swap in newer processors.

    • Like all things, there is a crossover point where improved energy efficiency pays back the cost of an upgrade if performance needs are flat. When performance needs exceed the incremental upgrade options, you either settle for longer processing times, simplify the workload, or pay for a faster system.
    • So what do you do with all the old supercomputers when they're too big/power hungry vs performance? Helluva paperweight.

      If they're anything like me, they'll put it on a shelf in their basement.

      25 years later, they'll try turning it on again just for fun. One of four things might happen, with roughly equal probability:

      • 1. It works fine
      • 2. It boots, but acts really flaky with strange characters on the display
      • 3. It's a brick
      • 4. Some component goes up in a puff of smoke; goto 3
  • by Shane_Optima ( 4414539 ) on Friday November 25, 2016 @04:25PM (#53361227) Journal
    It's strange how little people talk about the fact that we're approaching the end of this little plateau in the first silicon revolution. I'm sure this supercomputer is fast and all, but compared to what's economically feasible once the barriers are removed?

    The second silicon revolution will be marked by price freefall as soon as enough patents expire and enough high-output[1] factories can come on-line. Eventually (at most, perhaps a few decades from now), a major world government will realize that if they buy their own factories, keep 'em cranking out single-board machines and flash memory at full speed 24/7, and if need be use some superfluous power plant that was about to be decommissioned... they could build a supercomputer the likes of which the world has never seen. Maybe someone more knowledgeable than myself could do some back-of-the-napkin estimates here about what should be possible?
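    As a crude first pass, assume a hypothetical $35 single-board machine sustaining about 5 gigaflops (both numbers are assumptions, roughly 2016-era hobbyist-board territory) and spend the whole $173M budget on nothing but boards:

```python
# Back-of-the-napkin sketch: what does a $173M budget buy in hypothetical
# commodity single-board machines? Both per-board numbers are assumptions.
budget_usd = 173e6
board_cost_usd = 35.0   # assumed price per single-board machine
board_gflops = 5.0      # assumed sustained GFLOPS per board

boards = budget_usd / board_cost_usd
total_petaflops = boards * board_gflops / 1e6   # 1 petaflop = 1e6 gigaflops
print(f"~{boards / 1e6:.1f} million boards, ~{total_petaflops:.0f} petaflops")
# -> ~4.9 million boards, ~25 petaflops (interconnect, power, floor space not included)
```

    Today's commodity boards obviously don't get you to 130 petaflops on that budget; the interesting part is what the same arithmetic looks like once the price floors give way.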

    As an interesting aside, the power supply, display and input devices will end up becoming the most expensive parts of most consumer electronics, but I think the more interesting question is what the hell are they going to do with all of that computing power once the price floors give way? Protein folding, cryptography... and general AI.


    1. The "high output" bit being the kicker. I know very little of the details of chip lithography, so maybe there are hangups here I'm unaware of.
    • The problem is that microchip foundries and dies are massive investments. Still, for a major world government, sinking $10 billion into a foundry wouldn't be an issue, especially since silicon looks to be bottoming out.
      • The problem is that microchip foundries and dies are massive investments. Still, for a major world government, sinking $10 billion into a foundry wouldn't be an issue, especially since silicon looks to be bottoming out.

        Yes, but it's a problem that will *eventually* go away. The per-unit costs are just way too small, and the potential upside is way too large for it to not happen in our lifetimes[1]. A $10 billion cost is on the order of the LHC, but unlike the LHC such a project would offer massive ongoing practical benefits in addition to theoretical research potential.


        1. [waiting for the "I'm 73, you insensitive clod!" remark]

    • I had similar ideas the other day. The real problem here is that you eventually hit a power wall as you continue to deploy new stuff en masse. Nevertheless, at least certain types of computing could constitute flexible demand, so you might want to power them with any extra generation you have at the moment.
      • I had similar ideas the other day. The real problem here is that you eventually hit a power wall as you continue to deploy new stuff en masse.

        That's why I explicitly mentioned superfluous power plants. There was just a story the other day about coal plants closing. Well... keep one online. How many flops can a single, large coal plant going full-tilt give you? I bet it's a lot. I bet the bottleneck, from a super-project perspective (not necessarily home user), isn't going to be the electricity.
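        For a rough sense of scale, assume a ~1 GW plant and something like the ~6 gigaflops per watt the most efficient 2016-era systems manage (TaihuLight gets about 93 petaflops out of roughly 15 megawatts):

```python
# Hypothetical: how many flops could a single large coal plant power?
# Assumes ~1 GW of electrical output and ~6 GFLOPS per watt
# (Sunway TaihuLight: ~93 PF on ~15.4 MW, i.e. roughly 6 GFLOPS/W).
plant_watts = 1e9
flops_per_watt = 6e9

total_flops = plant_watts * flops_per_watt
print(f"~{total_flops / 1e18:.0f} exaflops")   # -> ~6 exaflops, before cooling/facility overhead
```

        Even if cooling and facility overhead eat half of that, it's still dozens of times anything on today's TOP500 list.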

        • I thought you meant the manufacturing part. I had the operations more in mind. What I thought of was something similar to DESERTEC, only without long-distance power transmission and with on-site computation instead. Just screw a computing module on the back of each solar panel and then fiber it up. You could power stuff with coal from a central location but that way, you'd never get rid of the dirty stuff.
      • I had similar ideas the other day. The real problem here is that you eventually hit a power wall as you continue to deploy new stuff en masse. Nevertheless, at least certain types of computing could constitute flexible demand, so you might want to power them with any extra generation you have at the moment.

        It's also worth noting this doesn't prevent an explosion of dirt-cheap flash memory from happening at some point, which has its own set of interesting implications. Particularly if high-quality video sensors also fall in price...

    • You don't achieve AI by just throwing processor power at it. Cost isn't a concern here. If it cost $400 billion to achieve AI, someone would spend that.
      • Given that AI, particularly the sort of radically game-changing stuff we're likely to care about, is almost certainly going to be formed through self-modification and testing, it's hard to see how massively increased CPU power isn't going to make this easier. Also, I mentioned flash memory for a reason; storage should fall in price along with processing power. Being able to predict what humans do or say next (in text, audio or video) seems crucial here. This isn't about Deep Blue so much as Watson. Neither ...
  • Probably just want to explore all of No Man's Sky [wikipedia.org] ...

  • But can it play Doom?
    ---
    Asking the important questions since 2016
  • WTF is ÃZNational?

  • That's hardly pocket change, but it seems cheap for what would be the world's fastest-known supercomputer.

    For comparison, Tianhe-2 [techtimes.com] (in number 2 spot) cost about $390M to build, and Sunway TaihuLight [wikipedia.org], the current number 1, went live in June of this year at a cost of $273M.
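    Put another way, here is the rough cost per petaflop implied by those figures (Tianhe-2's ~34-petaflop Linpack number isn't in the summary above, so treat it as a ballpark):

```python
# Rough price/performance comparison from the (approximate) figures above.
systems = {
    "Japan (planned)":   {"cost_musd": 173, "petaflops": 130},
    "Sunway TaihuLight": {"cost_musd": 273, "petaflops": 93},
    "Tianhe-2":          {"cost_musd": 390, "petaflops": 34},  # ~34 PF Linpack, ballpark
}

for name, s in systems.items():
    print(f"{name}: ~${s['cost_musd'] / s['petaflops']:.1f}M per petaflop")
# Japan (planned): ~$1.3M per petaflop
# Sunway TaihuLight: ~$2.9M per petaflop
# Tianhe-2: ~$11.5M per petaflop
```

    On those numbers, the planned Japanese machine would come in at well under half the cost per petaflop of TaihuLight, so "cheap" seems fair.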

    • CPUs are about to get a whole lot cheaper next year when AMD launches their Zen CPUs. Assuming AMD isn't just lying about price/performance (possible), for the first time in 10 years they're going to be competitive with Intel, even at clocks per watt. That'll have a huge impact on the cost of CPUs. Leaks already indicate they'll be priced at about half of an equivalent Intel i7.
  • Paying nine figures and hoping for 2019 or so. With multiple countries chasing the exaflop, good ideas may come out of it.

"And remember: Evil will always prevail, because Good is dumb." -- Spaceballs

Working...