angry tapir writes "As part of a $1 billion upgrade of its city campus, the University of Technology, Sydney is installing an underground automated storage and retrieval system (ASRS) for its library collection. The ASRS is in response to the need to house a growing collection and free up physical space for the new 'library of the future', which is to open in 2015 or 2016, so that people can be at the center of the library rather than the books. The ASRS, which will connect to the new library, consists of six 15-meter high robotic cranes that handle bins filled with books. When an item is being stored or retrieved, the bins will move up and down aisles as well as to and from the library. Items will be stored in bins based on their spine heights. About 900,000 items will be stored underground, starting with 60 per cent of the library's collection and rising to 80 per cent. About 250,000 items purchased in the last 10 years will be on open shelves in the library. As items age, they will be relegated to the underground storage facility. The University of Chicago has invested in a similar system."
crookedvulture writes "AMD is bundling a stack of the latest games with graphics cards like its Radeon HD 7950. One might expect the Radeon to perform well in those games, and it does. Sort of. The Radeon posts high FPS numbers, the metric commonly used to measure graphics performance. However, it doesn't feel quite as smooth as the competing Nvidia solution, which actually scores lower on the FPS scale. This comparison of the Radeon HD 7950 and GeForce GTX 660 Ti takes a closer look at individual frame latencies to explain why. Turns out the Radeon suffers from frequent, measurable latency spikes that noticeably disrupt the smoothness of animation without lowering the FPS average substantially. This trait spans multiple games, cards, and operating systems, and it's 'raised some alarms' internally at AMD. Looks like Radeons may have problems with smooth frame delivery in new games despite boasting competitive FPS averages."
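The gap between a healthy FPS average and choppy animation falls out of simple arithmetic: a handful of long frames barely move the mean but dominate the worst-case frame time, which is what your eyes notice. A minimal sketch with hypothetical frame times (made-up numbers for illustration, not measurements from the article):

```python
# Two hypothetical ~1-second samples of per-frame render times (ms).
# Both land near 60 FPS on average, but one has latency spikes.
smooth = [17.0] * 59                  # steady ~17 ms frames
spiky = [14.0] * 55 + [50.0] * 4      # mostly fast, plus four 50 ms hitches

def avg_fps(frame_times_ms):
    """Average FPS: total frames divided by total render time."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def worst_percentile(frame_times_ms, pct=0.99):
    """Near-worst-case frame time: the 99th-percentile latency."""
    ordered = sorted(frame_times_ms)
    return ordered[min(len(ordered) - 1, int(pct * len(ordered)))]

print(avg_fps(smooth), worst_percentile(smooth))  # ~58.8 FPS, 17 ms
print(avg_fps(spiky), worst_percentile(spiky))    # ~60.8 FPS, 50 ms
```

The spiky sample actually posts the *higher* FPS average, yet its worst frames take nearly three times as long — which is why frame-latency percentiles, not FPS, expose the stutter the article describes.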
skade88 writes "LG has released an ultra wide monitor. It really is wide (W x H x D: 699.7 x 387 x 208.5 mm) — take a look at the thing! It looks like it would be good for movies shot in larger aspect ratios such as 2.20 for 70mm film or 2.39 for modern cinemascope films. But OS GUI designs need to catch up to the ever horizontally expanding waistline of our monitors."
MojoKid writes "Intel has been promising it for months, and now the company has officially announced the Intel Atom S1200 SoC. The ultra low power chip is designed for the datacenter and provides a high-density solution designed to lower TCO and improve scalability. The 64-bit, dual-core (four total threads with Hyper-Threading technology) Atom S1200 underpins the third generation of Intel's commercial microservers and features a mere 6W TDP that allows a density of over 1,000 nodes per rack. The chip also includes ECC and supports Intel Virtualization technology. Intel saw a need for a processor that can handle many simultaneous lightweight workloads, such as dedicated web hosting for sites that individually have minimal requirements, basic L2 switching, and low-end storage needs. Intel did not divulge pricing, but regardless, this device will provide direct competition for AMD's SeaMicro server platform." Amazing that it supports ECC, since Intel seems committed to making you pay through the nose for stuff like that.
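The density claim is easy to sanity-check with back-of-the-envelope arithmetic. The per-node overhead figure below is an assumption for illustration (RAM, NIC, storage, VRM losses), not a number from Intel's announcement:

```python
# Rough rack power budget for the claimed 1,000-node density.
tdp_w = 6              # Atom S1200 CPU TDP, per the announcement
nodes_per_rack = 1000  # claimed density
overhead_w = 14        # assumed non-CPU power per node (illustrative)

cpu_power_kw = tdp_w * nodes_per_rack / 1000
node_power_kw = (tdp_w + overhead_w) * nodes_per_rack / 1000

print(cpu_power_kw)   # 6.0 kW of CPU TDP per rack
print(node_power_kw)  # 20.0 kW per rack with the assumed overhead
```

Even with generous per-node overhead, the whole rack lands in the ballpark of a conventional high-density rack's power envelope, which is the point of trading single-thread performance for many low-wattage nodes.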
An anonymous reader writes "Depending on where you are in the world, blank media may have a secondary tax applied to it. It seems ludicrous that such a tax even be considered, let alone be imposed, and yet an Austrian rights group called IG Autoren isn't happy with such a tax covering just physical media; it wants cloud storage included, too. At the moment, consumers in Austria only pay this tax on blank CDs and DVDs. IG Autoren wants to expand that to include the same range of media as Germany, but also feels that services like Dropbox, SkyDrive, Google Drive etc. all fall under the blank media banner because they offer storage, and therefore should carry the tax — a tax consumers would have to pay on top of the existing price of each service."
gbrumfiel writes "Those hoping to laser their way out of the energy crisis will have to wait a little longer. The U.S. government has unveiled its new plan for laser fusion, and it's not going to happen anytime soon. It all comes down to problems at the National Ignition Facility (NIF), the world's most powerful laser at Lawrence Livermore Lab in California. For the past six years researchers at NIF have been trying to use the laser to spark a fusion reaction in a tiny pellet of hydrogen fuel. Like all fusion, it's tougher than it looks, and their campaign came up short. That left Congress a little bit miffed, so they asked for a new plan. The new plan calls for a more methodical study of fusion, along with a broader approach to achieving it with the NIF. In three years or so, they should know whether the NIF will ever work."
Today we're doing a live interview from 18:30 GMT until 20:30 GMT with long-time contributor Luke Leighton of Rhombus Tech. An advocate of Free Software, he's been round the loop that many are now also exploring: looking for mass-volume factories in China and ARM processor manufacturers that are truly friendly toward Free Software (clue: there aren't any). He's currently working on the first card for the EOMA-68 modular computer card specification based around the Allwinner A10, helping the KDE Plasma Active team with their upcoming Vivaldi tablet, and even working to build devices around a new embedded processor with the goal of gaining the FSF's hardware endorsement. Ask him anything. (It's no secret that he's a Slashdot reader, so expect answers from lkcl.)
angry tapir writes "Researchers in the U.S. have developed integrated circuits that can stick to the skin like a child's tattoo and in some cases dissolve in water when they're no longer needed. The 'bio chips' can be worn comfortably on the body to help diagnose and treat illnesses. The circuits are so thin that when they're peeled away from the body they hang like a sliver of dead skin, with a tangle of fine wires visible under a microscope. Similar circuits could one day be wrapped around the heart like 'an electronic pericardium' to correct irregularities such as arrhythmia."
MrSeb writes with news on the happenings with next generation fabrication processes. From the article: "... Intel's 22nm SoC unveil is important for a host of reasons. As process nodes shrink and more components move on-die, the characteristics of each new node have become particularly important. 22nm isn't a new node for Intel; it debuted the technology last year with Ivy Bridge, but SoCs are more complex than CPU designs and create their own set of challenges. Like its 22nm Ivy Bridge CPUs, the upcoming 22nm SoCs rely on Intel's Tri-Gate implementation of FinFET technology. According to Intel engineer Mark Bohr, the 3D transistor structure is the principal reason why the company's 22nm technology is as strong as it is. Earlier this year, we brought you news that Nvidia was deeply concerned about manufacturing economics and the relative strength of TSMC's sub-28nm planar roadmap. Morris Chang, TSMC's CEO, has since admitted that such concerns are valid, given that performance and power are only expected to improve by 20-25% as compared to 28nm. The challenge for both TSMC and GlobalFoundries is going to be how to match the performance of Intel's 22nm technology with their own 28nm products. 20nm looks like it won't be able to do so, which is why both companies are emphasizing their plans to move to 16nm/14nm ahead of schedule. There's some variation on which node comes next; both GlobalFoundries and Intel are talking up 14nm; TSMC is implying a quick jump to 16nm. Will it work? Unknown. TSMC and GlobalFoundries both have excellent engineers, but FinFET is a difficult technology to deploy. Ramping it up more quickly than expected while simultaneously bringing up a new process may be more difficult than either company anticipates."
Nerval's Lobster writes "Game developer David Bolton writes: 'For my development of Web games, I've hit a point where I need a Virtual Private Server. (For more on this see My Search for Game Hosting Begins.) I initially chose a Windows VPS because I know Windows best. A VPS is just an Internet-connected computer. "Virtual" means it may not be an actual physical computer, but a virtualized host, one of many, each running as if it were a real computer. Recently, though, I've run into a dead end, as it turns out that Couchbase doesn't support PHP on Windows. So I switched to a Linux VPS running Ubuntu Server 12.04 LTS. Since my main desktop PC runs Windows 7, the options to access the VPS are initially quite limited, and there's no remote desktop with a Linux server. My VPS is specified as 2 GB of RAM, 2 CPUs and 80 GB of disk storage. The main problem with a VPS is that you have to self-manage it. It's maybe 90% set up for you, but you need to handle the remaining 10%. You may have to install some software, edit a config file or two and occasionally bounce (stop then restart) daemons (Linux services), after editing their config files.'"
Hugh Pickens writes "AP reports that if disaster strikes a US nuclear power plant, the utility industry wants the ability to fly in heavy-duty equipment from regional hubs to stricken reactors to avert a meltdown, providing another layer of defense in case a Fukushima-style disaster destroys a nuclear plant's multiple backup systems. 'It became very clear in Japan that utilities became quickly overwhelmed,' says Joe Pollock, vice president for nuclear operations at the Nuclear Energy Institute, an industry lobbying group that is spearheading the effort. US nuclear plants already have backup safety systems and are supposed to withstand the worst possible disasters in their regions, including hurricanes, tornadoes, floods and earthquakes. But planners can be wrong. The industry plan, called FLEX, is the nuclear industry's method for meeting new US Nuclear Regulatory Commission rules that will force 65 plants in the US to get extra emergency equipment on site and store it protectively. The FLEX program is supposed to help nuclear plants handle the biggest disasters. Under the plan, plant operators can summon help from the regional centers in Memphis and Phoenix. In addition to having several duplicate sets of plant emergency gear, industry officials say the centers will likely have heavier equipment that could include an emergency generator large enough to power a plant's emergency cooling systems, equipment to treat cooling water and extra radiation protection gear for workers. Federal regulators must still decide whether to approve the plans submitted by individual plants. 'They need to show us not just that they have the pump, but that they've done all the appropriate designing and engineering so that they have a hookup for that pump,' says NRC spokesman Scott Burnell. 'They're not going to be trying to figure out, "Where are we going to plug this thing in?"'"
An anonymous reader writes "After more than a decade of research, and a proof of concept in 2010, IBM Research has finally cracked silicon nanophotonics (or CMOS-integrated nanophotonics, CINP, to give its full name). IBM has become the first company to integrate electrical and optical components on the same chip, using a standard 90nm semiconductor process. These integrated, monolithic chips will allow for cheap chip-to-chip and computer-to-computer interconnects that are thousands of times faster than current state-of-the-art copper and optical networks. Where current interconnects are generally measured in gigabits per second, IBM's new chip is already capable of shuttling data around at terabits per second, and should scale to peta- and exabit speeds."
dcblogs writes "Apple's planned investment of $100 million next year in a U.S. manufacturing facility is relatively small, but still important. A 2009 Apple video of its unibody manufacturing process has glimpses of highly automated robotic systems shaping the metal. In it, Jonathan Ive, Apple's senior vice president of design, described it. 'Machining enables a level of precision that is just completely unheard of in this industry,' he said. Apple has had three years to improve its manufacturing technology, and will likely rely heavily on automation to hold down labor costs, say analysts and manufacturers. Larry Sweet, the CTO of Symbotic, which makes autonomous mobile robots for use in warehouse distribution, described a possible scenario for Apple's U.S. factory. First, a robot loads the aluminum block into the robo-machine, which has a range of tools for cutting and drilling shapes to produce the complex chassis as a single precision part. A robot then unloads the chassis and sends it down a production line where a series of small, high-precision, high-speed robots insert parts, secured with snap fits, adhesive bonds, solder, or a few fasteners, such as screws. At the end, layers, such as the display and glass, are added on top and sealed in another automated operation. Finally, the product is packaged and packed into cases for shipping, again with robots. "One of the potentially significant things about the Apple announcement is it could send a message to American companies — you can do this — you can make this work here," said Robert Atkinson, president of The Information Technology & Innovation Foundation."
coop0030 writes "Ladyada and pt had an old NeXT keyboard with a strong desire to get it running on a modern computer. These keyboards are durable, super clicky, and very satisfying to use! However, they are very old designs, specifically made for NeXT hardware: pre PS/2 and definitely pre-USB. That means you can't just plug the keyboard into a PS/2 port (even though it looks similar). There are no existing adapters for sale, and no code out there for getting these working, so we spent a few days and with a little research we got it working perfectly using an Arduino Micro as the go-between."
jcreus writes "After struggling for some years with Nvidia cards (the laptop from which I am writing this has two graphics cards, an Intel one and an Nvidia one, and is a holy mess [I still haven't been able to use the Nvidia card]) and, encouraged by Torvalds' middle finger speech, I've decided to ditch Nvidia for something better. I am expecting to buy another laptop and, this time, I'd like to get it right from the start. It would be interesting if it had decent graphics support and, in general, were Linux friendly. While I know Dell has released an Ubuntu laptop, it's way off-budget. My plan is to install Ubuntu, Kubuntu (or even Debian), with dual boot unfortunately required." So: what's the state of the art for out-of-the-box support?