MrSeb writes "If you've gone shopping for a power supply any time over the last few years, you've probably noticed the explosive proliferation of various 80 Plus ratings. As initially conceived, an 80 Plus certification was a way for PSU manufacturers to validate that their power supply units were at least 80% efficient at 20%, 50%, and 100% of rated load. In the pre-80 Plus days, PSU prices normally clustered around a given wattage output. The advent of the various 80 Plus levels has created a second variable that can have a significant impact on unit price. This leads us to three important questions: How much power can you save by moving to a higher-efficiency supply, what's the premium for doing so, and how long does it take to make back your initial investment?"
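The payback question above is simple arithmetic. Here is a rough sketch using entirely hypothetical figures for load, electricity price, and unit premium (none of these numbers come from the article):

```python
# Hypothetical payback estimate for a higher-efficiency PSU.
# Every number below is an illustrative assumption, not a measured value.

def annual_cost(load_w, efficiency, hours_per_day=8, price_per_kwh=0.12):
    """Yearly electricity cost of the power drawn at the wall for a given DC load."""
    wall_w = load_w / efficiency              # the PSU draws more than it delivers
    kwh_per_year = wall_w * hours_per_day * 365 / 1000
    return kwh_per_year * price_per_kwh

load = 300                                    # assumed average DC load in watts
base = annual_cost(load, 0.80)                # plain 80 Plus unit
gold = annual_cost(load, 0.90)                # 80 Plus Gold unit
premium = 40.0                                # assumed extra cost of the Gold unit

savings = base - gold
print(f"Annual savings: ${savings:.2f}")      # -> Annual savings: $14.60
print(f"Payback: {premium / savings:.1f} years")
```

Under these assumptions the break-even point lands somewhere under three years; a lighter load or cheaper electricity stretches it out considerably, which is exactly the trade-off the article explores.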
An anonymous reader writes "Shocking Kickstarter news this morning: not only did I actually receive my Brydge, but a Kickstarter software project shipped on time! Connectify Dispatch, the load balancing software for Windows, was released today as well. Perhaps the Kickstarter model of funding technology is not nearly as doomed as some naysayers here would have it. Why are so many here hostile to crowdsourcing? Shouldn't we be glad to have Venture Capitalists cut out of the loop so that companies actually listen to us?"
First time accepted submitter wisewellies writes "Ben clearly has way too much spare time on his hands, but he decided to see just how well an antiquated ZX Spectrum would hold up to modern EMC requirements. His blog is a good read if you're looking for something to do while pretending to work! From the blog: 'This year is the 30th anniversary of one of my favourite inventions of all time, the Sinclair ZX Spectrum. A few weeks ago, I finally bought one: a non-working one on eBay that I nursed back to health. Fortunately there was very little wrong with it. Unfortunately it's a 16K model, and a fairly early one at that, which won't run much software in its native state. This probably accounts for its unusually pristine condition. We took half an hour in the chamber to perform an approximate series of EN55022 measurements, to check its radiated emissions against today's standard. The question is, what have we learned as an industry since 1982?'"
Bruce66423 writes "Eric Schmidt said that a £2.5 billion tax avoidance 'is called capitalism' and seems totally unrepentant. He added, 'I am very proud of the structure that we set up. We did it based on the incentives that the governments offered us to operate.' One must admit to being impressed by his honesty." Schmidt also says that if you want a job in the future you'll have to learn to "outrace the robots," and that Google Fiber is the most interesting project they have going.
angry tapir writes "As part of a $1 billion upgrade of its city campus, the University of Technology, Sydney is installing an underground automated storage and retrieval system (ASRS) for its library collection. The ASRS is in response to the need to house a growing collection and free up physical space for the new 'library of the future', slated to open in 2015 or 2016, so that people can be at the center of the library rather than the books. The ASRS, which will connect to the new library, consists of six 15-meter-high robotic cranes that operate bins filled with books. When an item is being stored or retrieved, the bins will move up and down aisles as well as to and from the library. Items will be stored in bins based on their spine heights. About 900,000 items will be stored underground, starting with 60 per cent of the library's collection and rising to 80 per cent. About 250,000 items purchased in the last 10 years will be on open shelves in the library. As items age, they will be relegated to the underground storage facility. The University of Chicago has invested in a similar system."
crookedvulture writes "AMD is bundling a stack of the latest games with graphics cards like its Radeon HD 7950. One might expect the Radeon to perform well in those games, and it does. Sort of. The Radeon posts high FPS numbers, the metric commonly used to measure graphics performance. However, it doesn't feel quite as smooth as the competing Nvidia solution, which actually scores lower on the FPS scale. This comparison of the Radeon HD 7950 and GeForce 660 Ti takes a closer look at individual frame latencies to explain why. Turns out the Radeon suffers from frequent, measurable latency spikes that noticeably disrupt the smoothness of animation without lowering the FPS average substantially. This trait spans multiple games, cards, and operating systems, and it's 'raised some alarms' internally at AMD. Looks like Radeons may have problems with smooth frame delivery in new games despite boasting competitive FPS averages."
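The core point of the review — that an FPS average can hide stuttering — is easy to demonstrate with made-up numbers. The two frame-time traces below are purely illustrative, not measurements from either card:

```python
# Illustrative only: two invented frame-time traces (milliseconds per frame)
# with the same average FPS but very different smoothness.
steady = [17.0] * 100                        # perfectly even pacing
spiky  = [12.0] * 90 + [62.0] * 10           # same 17 ms mean, with big spikes

def avg_fps(frame_times_ms):
    """FPS average: total frames over total time."""
    return 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

def percentile(frame_times_ms, p):
    """Crude p-th percentile frame time (nearest-rank)."""
    s = sorted(frame_times_ms)
    return s[min(len(s) - 1, int(len(s) * p / 100))]

for name, trace in [("steady", steady), ("spiky", spiky)]:
    print(f"{name}: {avg_fps(trace):.1f} FPS average, "
          f"99th-percentile frame time {percentile(trace, 99):.0f} ms")
```

Both traces report an identical ~59 FPS average, but the spiky one spends a tenth of its frames above 60 ms — exactly the kind of latency spike a per-frame analysis catches and an FPS counter does not.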
skade88 writes "LG has released an ultra wide monitor. It really is wide (WxHxD: 699.7 X 387 X 208.5 mm) — take a look at the thing! It looks like it would be good for movies shot in larger aspect ratios such as 2.20 for 70mm film or 2.39 for modern cinemascope films. But OS GUI designs need to catch up to the ever horizontally expanding waistline of our monitors."
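The claim that ultrawide panels suit scope-ratio films comes down to how much of the panel survives letterboxing. A quick sketch, assuming a hypothetical 2560x1080 (21:9) ultrawide against a common 1920x1080 (16:9) panel:

```python
# Rough letterboxing comparison. The 2560x1080 resolution is an assumption
# for illustration, not a spec quoted from LG.
def used_area(panel_w, panel_h, content_ratio):
    """Fraction of the panel showing content once letter-/pillarboxed."""
    if panel_w / panel_h > content_ratio:    # panel wider than content: pillarbox
        w, h = panel_h * content_ratio, panel_h
    else:                                    # panel narrower: letterbox
        w, h = panel_w, panel_w / content_ratio
    return (w * h) / (panel_w * panel_h)

for ratio in (2.20, 2.39):
    uw  = used_area(2560, 1080, ratio)
    std = used_area(1920, 1080, ratio)
    print(f"{ratio}:1 content fills {uw:.0%} of 21:9 vs {std:.0%} of 16:9")
```

For 2.39:1 material the ultrawide is almost entirely filled (about 99% of its pixels), while a 16:9 panel wastes roughly a quarter of its area on black bars — which is the summary's point in numbers.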
MojoKid writes "Intel has been promising it for months, and now the company has officially announced the Intel Atom S1200 SoC. The ultra low power chip is designed for the datacenter and provides a high-density solution designed to lower TCO and improve scalability. The 64-bit, dual-core (four total threads with Hyper-Threading technology) Atom S1200 underpins the third generation of Intel's commercial microservers and features a mere 6W TDP that allows a density of over 1,000 nodes per rack. The chip also includes ECC and supports Intel Virtualization technology. Intel saw a need for a processor that can handle many simultaneous lightweight workloads, such as dedicated web hosting for sites that individually have minimal requirements, basic L2 switching, and low-end storage needs. Intel did not divulge pricing, but regardless, this device will provide direct competition for AMD's SeaMicro server platform." Amazing that it supports ECC, since Intel seems committed to making you pay through the nose for stuff like that.
An anonymous reader writes "Depending on where you are in the world, blank media may have a secondary tax applied to it. It seems ludicrous that such a tax even be considered, let alone be imposed, and yet an Austrian rights group called IG Autoren isn't happy with such a tax covering just physical media; it wants cloud storage included, too. At the moment, consumers in Austria only pay this tax on blank CDs and DVDs. IG Autoren wants to expand that to include the same range of media as Germany, but also feels that services like Dropbox, SkyDrive, Google Drive etc. all fall under the blank media banner because they offer storage, and therefore should carry the tax — a tax consumers would have to pay on top of the existing price of each service."
gbrumfiel writes "Those hoping to laser their way out of the energy crisis will have to wait a little longer. The U.S. government has unveiled its new plan for laser fusion, and it's not going to happen anytime soon. It all comes down to problems at the National Ignition Facility (NIF), the world's most powerful laser at Lawrence Livermore Lab in California. For the past six years researchers at NIF have been trying to use the laser to spark a fusion reaction in a tiny pellet of hydrogen fuel. Like all fusion, it's tougher than it looks, and their campaign came up short. That left Congress a little bit miffed, so they asked for a new plan. The new plan calls for a more methodical study of fusion, along with a broader approach to achieving it with the NIF. In three years or so, they should know whether the NIF will ever work."
Today we're doing a live interview from 18:30 GMT until 20:30 GMT with long-time contributor Luke Leighton of Rhombus Tech. An advocate of Free Software, he's been round the loop that many are now also exploring: looking for mass-volume factories in China and ARM processor manufacturers that are truly friendly toward Free Software (clue: there aren't any). He's currently working on the first card for the EOMA-68 modular computer card specification based around the Allwinner A10, helping the KDE Plasma Active Team with their upcoming Vivaldi Tablet, and even working to build devices around a new embedded processor with the goal of gaining the FSF's Hardware Endorsement. Ask him anything. (It's no secret that he's a Slashdot reader, so expect answers from lkcl.)
angry tapir writes "Researchers in the U.S. have developed integrated circuits that can stick to the skin like a child's tattoo and in some cases dissolve in water when they're no longer needed. The 'bio chips' can be worn comfortably on the body to help diagnose and treat illnesses. The circuits are so thin that when they're peeled away from the body they hang like a sliver of dead skin, with a tangle of fine wires visible under a microscope. Similar circuits could one day be wrapped around the heart like 'an electronic pericardium' to correct irregularities such as arrhythmia."
MrSeb writes with news on the happenings with next generation fabrication processes. From the article: "... Intel's 22nm SoC unveil is important for a host of reasons. As process nodes shrink and more components move on-die, the characteristics of each new node have become particularly important. 22nm isn't a new node for Intel; it debuted the technology last year with Ivy Bridge, but SoCs are more complex than CPU designs and create their own set of challenges. Like its 22nm Ivy Bridge CPUs, the upcoming 22nm SoCs rely on Intel's Tri-Gate implementation of FinFET technology. According to Intel engineer Mark Bohr, the 3D transistor structure is the principal reason why the company's 22nm technology is as strong as it is. Earlier this year, we brought you news that Nvidia was deeply concerned about manufacturing economics and the relative strength of TSMC's sub-28nm planar roadmap. Morris Chang, TSMC's CEO, has since admitted that such concerns are valid, given that performance and power are only expected to improve by 20-25% as compared to 28nm. The challenge for both TSMC and GlobalFoundries is going to be how to match the performance of Intel's 22nm technology with their own 28nm products. 20nm looks like it won't be able to do so, which is why both companies are emphasizing their plans to move to 16nm/14nm ahead of schedule. There's some variation on which node comes next; both GlobalFoundries and Intel are talking up 14nm; TSMC is implying a quick jump to 16nm. Will it work? Unknown. TSMC and GlobalFoundries both have excellent engineers, but FinFET is a difficult technology to deploy. Ramping it up more quickly than expected while simultaneously bringing up a new process may be more difficult than either company anticipates."
Nerval's Lobster writes "Game developer David Bolton writes: 'For my development of Web games, I've hit a point where I need a Virtual Private Server. (For more on this see My Search for Game Hosting Begins.) I initially chose a Windows VPS because I know Windows best. A VPS is just an Internet-connected computer. "Virtual" means it may not be an actual physical computer, but a virtualized host, one of many, each running as if it were a real computer. Recently, though, I've run into a dead end, as it turns out that Couchbase doesn't support PHP on Windows. So I switched to a Linux VPS running Ubuntu Server 12.04 LTS. Since my main desktop PC runs Windows 7, the options to access the VPS are initially quite limited, and there's no remote desktop with a Linux server. My VPS is specified as 2GB of RAM, 2 CPUs and 80GB of disk storage. The main problem with a VPS is that you have to self-manage it. It's maybe 90% set up for you, but you need the remaining 10%. You may have to install some software, edit a config file or two and occasionally bounce (stop then restart) daemons (Linux services) after editing their config files.'"
Hugh Pickens writes "AP reports that if disaster strikes a US nuclear power plant, the utility industry wants the ability to fly in heavy-duty equipment from regional hubs to stricken reactors to avert a meltdown, providing another layer of defense in case a Fukushima-style disaster destroys a nuclear plant's multiple backup systems. 'It became very clear in Japan that utilities became quickly overwhelmed,' says Joe Pollock, vice president for nuclear operations at the Nuclear Energy Institute, an industry lobbying group that is spearheading the effort. US nuclear plants already have backup safety systems and are supposed to withstand the worst possible disasters in their regions, including hurricanes, tornadoes, floods and earthquakes. But planners can be wrong. The industry plan, called FLEX, is the nuclear industry's method for meeting new US Nuclear Regulatory Commission rules that will force 65 plants in the US to get extra emergency equipment on site and store it protectively. The FLEX program is supposed to help nuclear plants handle the biggest disasters. Under the plan, plant operators can summon help from the regional centers in Memphis and Phoenix. In addition to having several duplicate sets of plant emergency gear, industry officials say the centers will likely have heavier equipment that could include an emergency generator large enough to power a plant's emergency cooling systems, equipment to treat cooling water and extra radiation protection gear for workers. Federal regulators must still decide whether to approve the plans submitted by individual plants. 'They need to show us not just that they have the pump, but that they've done all the appropriate designing and engineering so that they have a hookup for that pump,' says NRC spokesman Scott Burnell. 'They're not going to be trying to figure out, "Where are we going to plug this thing in?"'"