MrSeb writes "Engineers at NC State University (NCSU) have discovered a way of boosting the throughput of busy WiFi networks by up to 700%. Perhaps most importantly, the breakthrough is purely software-based, meaning it could be rolled out to existing WiFi networks relatively easily — instantly improving the throughput and latency of the network. As wireless networking becomes ever more prevalent, you may have noticed that your home network is much faster than the WiFi network at the airport or a busy conference center. The primary reason for this is that a WiFi access point, along with every device connected to it, operates on the same wireless channel. This single-channel problem is also compounded by the fact that it isn't just one-way; the access point also needs to send data back to every connected device. To solve this problem, NC State University has devised a scheme called WiFox. In essence, WiFox is some software that runs on a WiFi access point (i.e. it's part of the firmware) and keeps track of the congestion level. If WiFox detects a backlog of data due to congestion, it kicks in and enables high-priority mode. In this mode, the access point gains complete control of the wireless network channel, allowing it to clear its backlog of data. Then, with the backlog clear, the network returns to normal. We don't have the exact details of the WiFox scheme/protocol (it's being presented at the ACM CoNEXT conference in December), but apparently it increased the throughput of a 45-device WiFi network by 700%, and reduced latency by 30-40%."
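The exact WiFox algorithm hasn't been published yet, but the behavior described — monitor the downlink queue, seize the channel when a backlog builds, release it once the backlog drains — can be sketched as a toy model. Everything here (the queue-depth trigger, the drain-everything policy, the class and method names) is an assumption for illustration, not the actual WiFox scheme:

```python
from collections import deque

class AccessPoint:
    """Toy model of WiFox-style congestion handling. The real protocol
    is unpublished; the threshold and drain policy here are assumptions."""

    def __init__(self, backlog_threshold=10):
        self.backlog = deque()
        self.backlog_threshold = backlog_threshold
        self.high_priority = False

    def enqueue(self, frame):
        """Queue a downlink frame; flag congestion when the queue backs up."""
        self.backlog.append(frame)
        if len(self.backlog) >= self.backlog_threshold:
            self.high_priority = True

    def transmit(self):
        """Send one frame per transmit opportunity in normal mode; in
        high-priority mode, hold the channel and drain the whole backlog,
        then return to normal operation."""
        sent = []
        if self.high_priority:
            while self.backlog:
                sent.append(self.backlog.popleft())
            self.high_priority = False  # backlog cleared: back to normal
        elif self.backlog:
            sent.append(self.backlog.popleft())
        return sent
```

In normal mode the access point contends for the channel like any other station; the (assumed) trick is that it only claims priority while a measurable backlog exists, so clients aren't starved once the queue drains.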
sfcrazy writes "A very serious argument erupted on the Linux kernel mailing list when Andy Grover, a Red Hat SCSI target engineer, requested that Nicholas A. Bellinger, the Linux SCSI target maintainer, provide proof of non-infringement of the GPL. Nick is a developer at Rising Tide Systems (RTS), a Red Hat competitor and maker of advanced SCSI storage systems. RTS recently produced groundbreaking technology involving advanced SCSI commands that will give it a lead in producing SCSI storage systems. Now RTS is blocking Red Hat from getting access to that code, claiming it's proprietary. What's uncertain is whether RTS' code is covered by the GPL — if it is, then Red Hat has every right to access it, and withholding it is a serious GPL violation."
MrSeb writes "Alternative memory standards have been kicking around for decades as researchers have struggled to find the hypothetical holy grail — a non-volatile, low-latency, low-cost product that could scale from hard drives to conventional RAM. NAND flash has become the high-speed, non-volatile darling of the storage industry, but if you follow the evolution of the standard, you'll know that NAND is far from perfect. The limited number of read/write cycles and poor data retention when the drive isn't kept powered are both significant problems as process nodes continue shrinking. Thus far, this holy grail remains elusive, but a practical MRAM (Magnetoresistive Random Access Memory) solution took a step towards fruition this week. Everspin has announced that it's shipping the first 64Mb ST-MRAM in a DDR3-compatible module. These modules transfer data at DDR3-1600 clock rates, but access latencies are much lower than those of NAND flash, promising an overall 500x performance increase over conventional NAND."
Penurious Penguin writes "Via LXer, an article from PCWorld describes the A13-OLinuXino, produced by OLIMEX. Similar to, but distinct from, the Raspberry Pi, the Linux-powered OLinuXino is touted as 'fully open,' with all CAD files and source code freely available for both personal and commercial reuse. Its specs include an Allwinner A13 Cortex-A8 1GHz processor, a 3D Mali-400 GPU, and 512MB RAM, all packed into a nano-ITX form factor and fit for operation in industrial environments between -25C and 85C. The device comes with Android 4.0, but is capable of running other Linux distros, e.g., Arch Linux ARM."
holy_calamity writes "France now has a dedicated cellular data network just for Internet-of-Things devices, and the company that built it is rolling out the technology elsewhere, says MIT Technology Review. SigFox's network is slower than a conventional cellular data network, but it is built using technology able to make much longer-range links and operate on unlicensed spectrum. Those features are intended to make the service cheap enough that connecting low-cost sensors on energy infrastructure and in many other places makes economic sense — something not possible on a network shared with smartphones and other consumer devices."
Zothecula writes "Researchers at the CNRS-AIST Joint Robotics Laboratory (a collaboration between France's Centre National de la Recherche Scientifique and Japan's National Institute of Advanced Industrial Science and Technology) are developing software that allows a person to drive a robot with their thoughts alone. The technology could one day give a paralyzed patient greater autonomy through a robotic agent or avatar."
kkleiner writes "Foxconn, the Chinese electronics manufacturer that builds numerous mobile devices and gaming consoles, previously said the company would be aiming to replace 1 million Foxconn workers with robots within 3 years. It appears as if Foxconn has set the ball in motion. Since the announcement, a first batch of 10,000 robots — aptly named Foxbots — appears to have made its way into at least one factory, and by the end of 2012, another 20,000 will be installed."
Hugh Pickens writes "The NY Times reports that according to a report by the International Energy Agency, the U.S. will overtake Saudi Arabia as the world's leading oil producer by about 2017, will become a net oil exporter by 2030, and will become 'all but self-sufficient' in meeting its energy needs in about two decades — a 'dramatic reversal of the trend' seen in most developed countries. 'The foundations of the global energy systems are shifting,' says Fatih Birol, chief economist at the Paris-based organization, which produces the annual World Energy Outlook. There are several components of the sudden shift in the world's energy supply, but the prime mover is a resurgence of oil and gas production in the United States, particularly the unlocking of new reserves of oil and gas found in shale rock. The widespread adoption of techniques like hydraulic fracturing and horizontal drilling has made those reserves much more accessible, and in the case of natural gas, has resulted in a vast glut that has sent prices plunging. The agency's report was generally 'good news' for the United States, says Michael A. Levi, senior fellow for energy and environment at the Council on Foreign Relations, because it highlights the nation's new sources of energy, but Levi cautions that being self-sufficient does not mean that the country will be insulated from seesawing energy prices, since those prices are set by global markets. The message is more sobering for the planet in terms of climate change. Although natural gas is frequently promoted for being relatively low in carbon emissions compared to oil or coal, the new global energy market could make it harder to prevent dangerous levels of warming (PDF). 'The report confirms that, given the current policies, we will blow past every safe target for emissions,' says Levi.
'This should put to rest the idea that the boom in natural gas will save us from that.'" The folks over at The Oil Drum aren't quite so optimistic: shale reserves may have an abysmal EROI. And, of course, Global Warming is a liberal myth.
MojoKid writes "Nvidia is taking the wraps off a new GPU targeted at HPC and, as expected, it's a monster. The Nvidia K20, based on the GK110 GPU, weighs in at 7.1B transistors, double the previous-gen GK104's 3.54B. The GK110 is capable of pairing double-precision operations with other instructions (Fermi and GK104 couldn't), and the number of registers each thread can access has been quadrupled, from 63 to 255. Threads within a warp are now capable of sharing data. K20 also supports a greater number of atomic operations and brings new features to the table, including Dynamic Parallelism. Meanwhile, AMD has announced a new FirePro graphics card at SC12 today, and it's aimed at server workloads and data center deployment. Rumors of a dual-GPU Radeon 7990 have floated around since before the HD 7000 series debuted, but this is the first time we've seen such a card in the wild. On paper, AMD's new FirePro S10000 is a serious beast. Single- and double-precision rates of 5.9 TFLOPS and 1.48 TFLOPS respectively are higher than anything from Intel or Nvidia, as is the card's memory bandwidth. The flip side to these figures, however, is the eye-popping power draw. At 375W, the S10000 needs a pair of eight-pin PSU connectors. The S10000 is aimed at the virtualization market, with its dual GPUs on a single card offering a good way to improve GPU virtualization density inside a single server." My entire computer uses less power than one of these cards.
cstacy writes "The Inamori Foundation has awarded the Kyoto Prize to graphics pioneer Ivan Sutherland, for developing Sketchpad in 1963. The award recognizes significant technical, scientific and artistic contributions to the 'betterment of mankind,' and honors Sutherland for nearly 50 years of demonstrating that computer graphics could be used 'for both technical and artistic purposes.'"
Lasrick writes "Blake Clayton has an excellent piece on the cyber threat to the global oil supply. His description of the August attack on Saudi Aramco, which rendered thirty thousand of its computers useless, helps make his point. From the article: 'The future of energy insecurity has arrived. In August, a devastating cyber attack rocked one of the world’s most powerful oil companies, Saudi Aramco, Riyadh’s state-owned giant, rendering thirty thousand of its computers useless. This was no garden-variety breach. In the eyes of U.S. defense secretary Leon Panetta, it was “probably the most destructive attack that the private sector has seen to date.”'"
KermMartian writes "It has been nearly two decades since Texas Instruments released the TI-82 graphing calculator, and as the TI-83, TI-83+, and TI-84+ were created in the intervening years, these 6MHz machines have only become more absurdly retro, complete with 96x64-pixel monochrome LCDs and a $120 price tag. However, a student member of a popular graphing-calculator hacking site has leaked pictures and details about a new color-screen TI-84+ calculator, verified to be coming soon from Texas Instruments. Given the lukewarm reception to TI's Nspire line, it seems to be an attempt to compete with Casio's popular color-screen Prizm calculator. Imagine the graphs (and games!) on this new 320x240 canvas."
First time accepted submitter GinaSmith888 writes "This is a deep dive into the BP protocol Vint Cerf developed that is the heart of NASA's Delay-Tolerant Networking, better known as DTN. From the article: 'The big difference between BP and IP is that, while IP assumes a more or less smooth pathway for packets going from start to end point, BP allows for disconnections, glitches and other problems you see commonly in deep space, Younes said. Basically, a BP network — the one that will make the Interplanetary Internet possible — moves data packets in bursts from node to node, so that it can check when the next node is available or up.'"
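The store-and-forward behavior the quote describes can be sketched in a few lines of Python. This is a toy illustration only — the class and method names are hypothetical, and the real Bundle Protocol (RFC 5050) adds custody transfer, fragmentation, and bundle expiry on top of this basic idea:

```python
class DTNNode:
    """Toy Bundle-Protocol-style relay: data moves in bursts from node
    to node, and bundles are held in storage whenever the next hop is
    unreachable, instead of being dropped as IP would do."""

    def __init__(self, name):
        self.name = name
        self.stored = []  # custody storage; survives link outages

    def accept(self, bundle):
        self.stored.append(bundle)

    def forward(self, next_node, link_up):
        """Push the whole stored burst onward if a contact window is
        open; otherwise keep custody and report nothing sent."""
        if not link_up:
            return 0
        burst, self.stored = self.stored, []
        for bundle in burst:
            next_node.accept(bundle)
        return len(burst)
```

The contrast with IP is the `link_up=False` branch: an IP router with no route discards the packet, while a DTN node simply waits for the next contact.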
holy_calamity writes "PCs will inevitably shift over to ARM-based chips because efficiency now matters more than gains in raw performance, the CEO of chip designer ARM tells MIT Technology Review. He also says the growing number of ARM-based chip suppliers is good for innovation (and for prices) because it spurs a competitive environment. 'There’s been a lot more innovation in the world of mobile phones over the last 15-20 years than there has been in the world of PCs.'"
says Wikipedia. More often than not, in studio recordings reverb is added digitally; virtually every FOSS or proprietary sound-editing program has a built-in reverb utility. But what if you're the sort of purist who prefers the analog sound of vinyl records to the digital sound of MP3s or CDs? What if you're the kind of musician who records at the original Sun Studio in Memphis to get that original rock and roll sound? That may be overly picky for most musicians, but there are some who would rather sound like Johnny Cash than Flavor Flav, and they're the ones who are going to insist on real analog reverb instead of twiddling a setting in Audacity. There are many types of analog reverb, of course. One of the purest types, preferred by many audio purists, is the adjustable plate reverb, and Jim Cunningham's Ecoplate is considered by many to be the best plate reverb ever -- which brings us to Mike Storey, who wanted an Ecoplate-type plate reverb so badly that he spent eight months building one. He'll run your audio files through it for a (highly negotiable) fee, and maybe give you a bit of advice if you want to build your own -- although his biggest piece of advice (at the end of the video) is to think long and hard before you become a home-brew reverberator, with or without advice and components from Jim Cunningham.
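For contrast with the analog plate: that "setting in Audacity" typically boils down to filter networks in the style of Schroeder's classic design — parallel feedback comb filters feeding allpass filters. A single feedback comb, the simplest building block, already produces the train of decaying echoes that reverb is made of. This is an illustrative sketch, not any particular editor's implementation:

```python
def comb_reverb(samples, delay, feedback=0.5):
    """Feedback comb filter: each output sample mixes in a decayed
    copy of the output from `delay` samples earlier, producing an
    exponentially decaying echo train."""
    out = []
    for i, x in enumerate(samples):
        echo = out[i - delay] if i >= delay else 0.0
        out.append(x + feedback * echo)
    return out
```

Feeding it a single impulse shows the effect directly: echoes at multiples of `delay`, each `feedback` times quieter than the last. Real digital reverbs sum several such combs with mutually prime delays so the echoes blur into a smooth tail.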
cylonlover writes "Our ears work by converting the vibrations of the eardrum into electrochemical signals that can be interpreted by the brain. The current for those signals is supplied by an ion-filled chamber deep within the inner ear – it's essentially a natural battery. Scientists are now looking at using that battery to power devices that could be implanted in the ear, without affecting the recipient's hearing. The 'battery chamber' is located in the cochlea. It is internally divided by a membrane, some of the cells of which are designed to pump ions. The arrangement of those specialized cells, combined with an imbalance of potassium and sodium ions on opposite sides of the membrane, is what creates the electrical voltage. A team of scientists from MIT, the Massachusetts Eye and Ear Infirmary, and the Harvard-MIT Division of Health Sciences and Technology have recently succeeded in running an ultra-low-power radio-transmitting chip using power from these battery chambers – in guinea pigs' ears."
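The voltage such an ion imbalance can produce is given by the Nernst equation, E = (RT/zF)·ln([out]/[in]). The sketch below computes it for a monovalent ion like potassium, using illustrative (not measured) concentrations; the actual endocochlear potential, on the order of +80 mV, also depends on the active ion pumping the summary describes:

```python
import math

def nernst_potential(c_out, c_in, z=1, temp_k=310.0):
    """Nernst equation: equilibrium voltage (in volts) produced by an
    ion concentration imbalance across a membrane, at body temperature
    by default. z is the ion's charge number (+1 for K+ or Na+)."""
    R = 8.314    # gas constant, J/(mol*K)
    F = 96485.0  # Faraday constant, C/mol
    return (R * temp_k) / (z * F) * math.log(c_out / c_in)

# Illustrative 30:1 potassium gradient (concentrations in mM are
# assumed round numbers, not cochlear measurements):
voltage = nernst_potential(150.0, 5.0)  # roughly +0.09 V
```

A 30-fold concentration ratio yields only about 90 mV — which is why the chip harvesting this source has to be ultra-low-power and accumulate charge before transmitting.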
crookedvulture writes "Last October, Thailand was hit by massive flooding that put much of the world's hard drive industry under water. Production slowed to a crawl as drive makers and their suppliers mopped up the damage, and prices predictably skyrocketed. One year later, production has rebounded, with the industry expected to ship more drives in 2012 than it did in 2011. For the most part, though, hard drive prices haven't returned to pre-flood levels. Although 2.5" notebook drives are a little cheaper now than before the flood, the average price of 3.5" desktop drives is up 35% from a year ago. Prices have certainly fallen dramatically from their post-flood peaks, but the rate of decline has slowed substantially in recent months, suggesting that higher prices are the new norm for desktop drives."
Nerval's Lobster writes "Facebook's engineers face a considerable challenge when it comes to managing the tidal wave of data flowing through the company's infrastructure. Its data warehouse, which handles over half a petabyte of information each day, has expanded some 2500x in the past four years — and that growth isn't going to end anytime soon. Until early 2011, those engineers relied on a MapReduce implementation from Apache Hadoop as the foundation of Facebook's data infrastructure. Still, despite Hadoop MapReduce's ability to handle large datasets, Facebook's scheduling framework (in which a large number of task trackers handle duties assigned by a single job tracker) began to reach its limits. So Facebook's engineers went to the whiteboard and designed a new scheduling framework named Corona." Facebook is continuing development on Corona, but they've also open-sourced the version they currently use.
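The bottleneck described above — one central job tracker handing work to a fixed pool of task trackers — can be caricatured in a few lines. This is a deliberately simplified toy with hypothetical names (no heartbeats, data locality, or fault tolerance), not Hadoop's or Corona's actual code:

```python
class JobTracker:
    """Toy single-scheduler model: every task in the cluster must pass
    through this one object, which is exactly the scaling limit that
    motivated a redesign like Corona."""

    def __init__(self, num_trackers):
        self.free = list(range(num_trackers))  # idle task-tracker ids
        self.assigned = {}                     # task -> tracker id

    def submit(self, tasks):
        """Assign each task to a free task tracker; return the tasks
        left pending once the tracker pool is exhausted."""
        pending = []
        for task in tasks:
            if self.free:
                self.assigned[task] = self.free.pop()
            else:
                pending.append(task)
        return pending
```

With every assignment funneled through one scheduler, both the tracker pool and the scheduler's own throughput cap the cluster — which is why separating resource management from per-job scheduling pays off at Facebook's scale.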
MojoKid writes "Intel has unveiled details of their new Itanium 9500 family, codenamed Poulson, and the new CPU appears to be the most significant refresh Intel has ever done to the Itanium architecture. Moving from 65nm to 32nm technology substantially reduces power consumption and increases clock speeds, but Intel has also overhauled virtually every aspect of the CPU. Poulson can issue 11 instructions per cycle compared to the previous generation Itanium's six. It adds execution units and re-balances those units to favor server workloads over HPC and workstation capabilities. Its multi-threading capabilities have been overhauled, and it uses faster QPI links between CPUs. The L3 cache design has also changed. Previous Itanium 9300 processors had a dedicated L3 cache for each core. Poulson, in contrast, has a unified L3 that's attached to all its cores by a common ring bus. All told, the new architecture is claimed to offer more than twice the performance of the previous generation Itanium."
An anonymous reader writes "Der Spiegel reports that Germany has exported more electricity this year than ever before, despite beginning to phase out nuclear power. In the first three quarters of 2012, Germany sent 12.3 terawatt hours of electricity across its borders. The country's rapid expansion into renewable energy is credited with the growth. However, the boost doesn't come without a price. The German government's investments into its new energy policy will end up costing hundreds of billions of dollars over the next two decades, and it still relies on imports for its natural gas needs. It also remains to be seen whether winter will bring power shortages. Is Germany a good example of forward-looking energy policy?"