Phase Change Memory vs. Storage As We Know It 130
storagedude writes "Access to data isn't keeping pace with advances in CPU and memory, creating an I/O bottleneck that threatens to make data storage irrelevant. The author sees phase change memory as a technology that could unseat storage networks. From the article: 'While years away, PCM has the potential to move data storage and storage networks from the center of data centers to the periphery. I/O would only have to be conducted at the start and end of the day, with data parked in memory while applications are running. In short, disk becomes the new tape.'"
We've heard this forever... (Score:2)
... the death of X tech. It will eventually die, but only once the groundwork has been laid to migrate to a better system.
Re: (Score:2, Funny)
Re: (Score:1)
Advances in storage not keeping up with advances in CPU/RAM doesn't make it irrelevant. It puts it squarely on the critical path.
Re: (Score:3, Insightful)
Despite what the article writer thinks, if PCM is that great, the storage manufacturers will just create storage devices that use PCM technology. The other option is to go out of business.
I see lots of "normal" people using external storage drives. These people are far less likely to open up their computer and swap chips on their motherboard.
Transferring 1TB from my house to my office by hand is faster and more reliable than using my crappy ISP. If the writer thinks storage IO speeds are b
We're almost there already (Score:2)
When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory? As long as you don't suffer a system crash, you can unload it back to disk when you're done.
Re: (Score:2)
When you can pick up 4GB of RAM memory for a song
A song costs 99 cents on iTunes. 4 GB of DDR/DDR2/DDR3 RAM costs far more, and it might not even fit in some older or mobile motherboards.
why not load the whole OS into memory?
Puppy Linux does, and Windows Vista almost does (see SuperFetch).
As long as you don't suffer a system crash
Power failure happens.
Re: (Score:3, Interesting)
Power failure happens.
That's what journaling is for.
Load the system image into RAM at boot from the "image source".
Journal changes to user datafiles.
When a certain number of transactions have occurred, commit them back to the main disk.
If the system crashes... load the "boot volume" back up, replay the journal.
No need to journal changes to the "system files" file system (that isn't supposed to change anyways). If a system update is to be applied, the signed update package gets loaded int
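The load/journal/commit/replay loop described above can be sketched in miniature. This is a toy illustration, not anyone's actual design: all names are hypothetical, and a plain dict stands in for both the RAM image and the main disk.

```python
class JournaledStore:
    """Toy sketch of the journal-then-commit scheme described above.

    The "RAM image" is a dict; the journal is an append-only list of
    (key, value) records; after COMMIT_EVERY records the journal is
    flushed to the backing store and cleared.
    """
    COMMIT_EVERY = 3  # "a certain number" of transactions

    def __init__(self, backing):
        self.backing = backing          # stands in for the main disk
        self.image = dict(backing)      # loaded into RAM at "boot"
        self.journal = []               # stands in for the journal device

    def write(self, key, value):
        self.image[key] = value
        self.journal.append((key, value))
        if len(self.journal) >= self.COMMIT_EVERY:
            self.commit()

    def commit(self):
        # Flush journaled transactions back to the main disk.
        for key, value in self.journal:
            self.backing[key] = value
        self.journal.clear()

    def recover_after_crash(self):
        # Reload the boot volume, then replay any uncommitted journal.
        self.image = dict(self.backing)
        for key, value in self.journal:
            self.image[key] = value
```

The interesting tuning knob is `COMMIT_EVERY`: larger values mean fewer disk spin-ups but a larger window of journal to replay after a crash.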
Re: (Score:3, Informative)
Interestingly, this closely resembles the discussion of the system image used in Xerox PARC Smalltalk....
--dave
Re: (Score:2)
No, that's what a UPS is for. :)
Re: (Score:2)
Re: (Score:2)
who buys desktops these days? most people seem to be getting laptops, even though they run them off the mains most of the time. Best thing: they have a built-in UPS ;)
Why desktop PCs still exist (Score:2)
who buys desktops these days?
People who want a full-size keyboard, video, and mouse, and who don't want to pay for a duplicate mini-keyboard, mini-monitor, and mini-mouse built into a laptop. They either A. drive to work and thus never have enough time as a passenger on mass transit to make using the computer away from the docking station worth it, or B. have a smartphone, a handheld gaming device, an e-book reader, or even a paperback book to pass the time.
Or people who use video games, certain kinds of CAD software, or other softw
Re: (Score:2)
I do plan to buy another desktop soon. And a tablet PC too.
One doesn't negate the other, unless all you do is read email and Facebook.
Re: (Score:1)
Even UPSes have fuses that can blow / breakers that can trip. A UPS can overload.
Someone can accidentally hit the EPO, or power-off switch on the UPS.
The UPS battery may be too low to permit a graceful shutdown before power expires.
The PC power supply can fail.
Someone could trip over the power cord running to the PC.
Even with a solid UPS, it doesn't require much imagination at all to recognize how likely a power failure or 'hard down' is to occur eventually.
Losing all your data/changes in such
Re: (Score:2)
Journal changes to user datafiles.
When a certain number of transactions have occurred, commit them back to the main disk.
What is "a certain number" that won't require the disk to be spun up all the time committing transactions?
the RAM image will be restored to the same state it was in as of the unexpected shutdown/crash.
It will be restored to the same state: a crashed state.
Re: (Score:1)
What is "a certain number" that won't require the disk to be spun up all the time committing transactions?
Why spun up? use a write-optimized SSD for the journal, and compact flash for the rarely-changing system boot image.
"A certain number", the exact choice is a design/engineering concern, but probably fairly small values should be used, to avoid data loss.
It will be restored to the same state: a crashed state.
Well, of course, the filesystem would be in the same state as at the time of the crash
Re: (Score:2)
Why spun up?
Because you still need to spin up the drive to read in data files that the user is working on if either A. the user hasn't opened them since the computer last came out of sleep (compulsory miss) or B. the files collectively are too big to fit in RAM (capacity miss).
use a write-optimized SSD for the journal
That's actually a good idea because a journal can be stored as a ring buffer, and a ring buffer is the theoretical best case for SSD wear and write speed. But one problem with journaling writes to data is that the user expects shutdown to be fast
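As a rough illustration of why a ring buffer is the best case for SSD wear: appends cycle through a fixed set of slots in order, so every slot is rewritten equally often. A toy sketch (names hypothetical):

```python
class RingJournal:
    """Fixed-size ring of journal slots, as a wear-leveling sketch.

    Each append writes the next slot in order and wraps around, so over
    time every slot receives the same number of writes -- the even-wear
    property that makes a ring layout friendly to SSDs.
    """
    def __init__(self, slots):
        self.slots = [None] * slots
        self.head = 0
        self.writes = [0] * slots   # per-slot write counter, for illustration

    def append(self, record):
        i = self.head % len(self.slots)
        self.slots[i] = record
        self.writes[i] += 1
        self.head += 1

j = RingJournal(4)
for n in range(12):
    j.append(("txn", n))
# 12 appends over 4 slots: every slot has been written exactly 3 times
```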
Re: (Score:1)
the files collectively are too big to fit in RAM (capacity miss).
I think the premise is RAM becomes cheap, so you can have enough of it to hold the entire filesystem. E.g. 64 GB or 128 GB of RAM easily meets the needs of most users, with a couple GB to spare for the kernel/app working memory partition.
During the boot process, 2-4 GB (or user's choice) is reserved for OS and application working memory, and the rest is partitioned as the RAMDISK. Probably none of the filesystems currently in exis
Re: (Score:2)
Or even while the system is running.. presumably there should be an option like "Safely remove hard drive"
Provided the system even is running. If the operating system will not boot, the "Safely remove hard drive" option would have to be in BIOS.
I'm thinking more along the lines of UNIX systems such as Linux, BSD, MacOS, that don't get updates every Tuesday, though.
You're right: Ubuntu gets updates more often than Windows XP does, but granted, fewer of them require a full reboot.
Or the kernel started doing stray writes to the RAMDISK region, e.g. buffer overflow. Or a buggy driver hit the wrong memory area with a DMA.
Bingo. But with a file system on a separate device, file system corruption doesn't seem quite as likely as it would be with a RAM disk.
A hardware IOMMU should be used to split the memory regions
Good luck getting Microsoft operating systems to support any MMU functionality beyond what the operating systems currentl
Re: (Score:2)
>> A song costs 99 cents on iTunes. 4 GB of DDR/DDR2/DDR3 RAM costs far more, and it might not even fit in some older or mobile motherboards.
4GB worth of music on iTunes is going to cost a hell of a lot more than 4GB of system memory. So memory these days can be had for about 40-80 songs....
Re: (Score:2)
Files that haven't been opened yet (Score:2)
Superfetch? You're kidding, right? Real VMs were doing this long before MS figured it out.
NT has always had a disk cache. SuperFetch of Windows 6.x just extends it to files that haven't been opened yet, as in Lord Byron II's suggestion of loading more of the operating system into RAM at startup.
Re: (Score:2)
Superfetch? You're kidding, right? Real VMs were doing this long before MS figured it out. Unused RAM has always been used as disk cache in proper VMs. Only MS was stupid enough to need an *executable* (smartdrv.exe) to accomplish this most fundamental of tasks.
Are you a traveller from the past ? Smartdrv hasn't been relevant to most people for nearly *twenty years*.
Re: (Score:2, Informative)
You may be able to "load the whole OS into memory", but that's missing the point, which is the data people work with once the OS is up and running. If that 4GB was enough to store all the data for the entirety of any conceivable session, on servers as well as desktops, why would anyone ever buy a hard drive larger than that? Hard drives would probably already be obsolete. I bet you own at least one hard drive larger than 4GB - and as the type of person who comments on slashdot, I bet more than 4GB of tha
Re: (Score:2)
> load the whole OS into memory
Replace with "load the whole OS into memory, plus the most frequently used disk content".
Linux and most OSes already do this for you. Look at the output of free on an 8 GB machine: programs only use 969 MB (0.96 GB) of RAM. Linux has swapped 273 MB of program memory to disk because it is seldom used (memory leaks?).
Linux uses 6.9 GB for buffers/cache, which is more than the whole OS loaded into memory. It caches disk content in RAM, so in the end there is only 45 MB not
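A hedged reconstruction of the kind of free output being described. The numbers below are illustrative, chosen only to match the figures quoted above (969 MB used by programs, 273 MB swapped, ~45 MB truly free), with a small parser pulling out the apps-vs-cache split:

```python
# Illustrative `free -m` style output; the figures are invented to
# match the comment above, not taken from a real machine.
sample = """\
             total       used       free     shared    buffers     cached
Mem:          8002       7957         45          0        900       6088
-/+ buffers/cache:        969       7033
Swap:         2047        273       1774
"""

def parse_free(text):
    mem = text.splitlines()[1].split()   # the "Mem:" row
    total, used, free = int(mem[1]), int(mem[2]), int(mem[3])
    buffers, cached = int(mem[5]), int(mem[6])
    return {
        "total": total,
        "apps": used - buffers - cached,  # memory programs actually use
        "cache": buffers + cached,        # reclaimable disk cache
        "free": free,
    }

stats = parse_free(sample)
# Most of the 8 GB is disk cache, not application memory.
```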
Re: (Score:1)
a) The bottleneck is pricing: I don't see 64 GB memory modules on the cheap, or supported by any motherboards yet.
b) The initial load of data (whether prefetch or whatever) that I want to work with is still constrained by whatever it's stored on.
I'd love to have a few terabytes of RAM. That would work for me... and that's where we're heading. How the OS manages the various levels of RAM (as cache, storage, or whatever) is up for debate; I'm sure we'll see some interesting mechanisms.
(like how ZFS can have
Re:We're almost there already (Score:5, Interesting)
When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory?
For what it's worth, you can do this with most Linux distros if you know what you're doing. Linux is pretty well designed to run from a ramdisk - you can set it up to copy the system files into RAM on boot and continue from there all in RAM. I've been doing this on my Debian (stable) boxes since I realized I couldn't afford a decent SSD and wanted a super-responsive system. Firefox (well, Iceweasel) starts cold in about two seconds on an eeepc when set up this way, and it starts cold virtually instantly on my C2D box. In fact, everything seems instant on my C2D box. It's really snazzy.
As long as you don't suffer a system crash, you can unload it back to disk when you're done.
Depending on what you're doing, even that may not be an issue. If you're doing massive database stuff, then yes. However, if your disk I/O isn't all that heavy, you can set up a daemon to automatically mirror changes made in the RAMdisk to the "hard" copy. From your POV everything is instant, but any crash will only result in the loss of data from however far behind the harddrive copy is lagging. Personally, what little I do need saved is simply text files - my notes in class, my homework, etc. - and so I can just write to a partition on the harddrive that isn't loaded into RAM. It doesn't suffer at all from the harddrive I/O - I can't really type faster than a harddrive can write.
tl;dr: It's perfectly feasible for (some) people to do as you've described, and it works quite nicely. It's not really necessary to wait for this perpetually will-be-released-in-5-to-10-years technology, it's available today.
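A minimal sketch of that mirroring-daemon idea, assuming a simple mtime comparison is a good enough change detector (the function name and policy are hypothetical; a real daemon would run this periodically or from inotify events):

```python
import os
import shutil

def mirror(ram_dir, disk_dir):
    """One lazy-mirror pass: copy any file in the RAM-backed tree that is
    newer than (or missing from) the on-disk copy."""
    for root, _dirs, files in os.walk(ram_dir):
        rel = os.path.relpath(root, ram_dir)
        dest_root = os.path.join(disk_dir, rel)
        os.makedirs(dest_root, exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(dest_root, name)
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)):
                shutil.copy2(src, dst)  # copy2 preserves the mtime
```

After a crash, you lose at most the changes made since the last pass, exactly the lag described above.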
Re: (Score:3, Insightful)
Re: (Score:2)
Depending on what you're doing, even that may not be an issue. If you're doing massive database stuff, then yes. However, if your disk I/O isn't all that heavy, you can set up a daemon to automatically mirror changes made in the RAMdisk to the "hard" copy. From your POV everything is instant, but any crash will only result in the loss of data from however far behind the harddrive copy is lagging. Personally, what little I do need saved is simply text files - my notes in class, my homework, etc. - and so I can just write to a partition on the harddrive that isn't loaded into RAM. It doesn't suffer at all from the harddrive I/O - I can't really type faster than a harddrive can write.
tl;dr: It's perfectly feasible for (some) people to do as you've described, and it works quite nicely. It's not really necessary to wait for this perpetually will-be-released-in-5-to-10-years technology, it's available today.
Not just "available": that's pretty much how all current operating systems work today. Software operates on a copy in memory (whether reading or writing), and the OS writes back any changes at its leisure. It's just a matter of available RAM vs. required RAM; only when you run out of RAM does the disk become a bottleneck. I don't think data read from disk to memory is ever discarded even if unused for a long time, unless you run out of RAM (why would it be, that's just unnecessary extra work for the OS
Re: (Score:2)
When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory?
On any remotely modern OS, the whole OS is *already* "loaded into memory" if you have enough of it. It's called a disk cache.
Why the vapourware tag? (Score:5, Insightful)
How soon we forget. The article is speculative, sure, but the hardware is not only real, it's in mass production by Samsung: http://hardware.slashdot.org/article.pl?sid=09/09/28/1959212 [slashdot.org]
Just looking at the numbers, the article is a bit overblown. Phase change memory will first be a good replacement for flash memory, not DRAM. It's still considerably slower than DRAM. But it eliminates the erasable-by-page-only problem that has plagued SSDs, especially Intel SSDs, and the article does mention SSDs as a bright spot in the storage landscape. PCM should make serious inroads into SSDs very quickly because manufacturers can eliminate a whole blob of difficult code. With Samsung's manufacturing muscle behind it, prices per megabyte should be reasonable right out of the gate and as Samsung gets better at it, prices should plummet even faster than flash memory did.
The I/O path between storage and the CPU will get an upgrade, and it could very well be driven by PCM. Flash memory SSDs are already very fast and PCM is claimed to be 4X faster. That saturates the existing I/O paths (barring 16-lane PCIe cards sitting next to the video card in an identical slot). Magnetic hard drives haven't come anywhere close to saturation. Development concentrated for a decade (or two?) on increasing capacity, for which we are thankful, but the successes in capacity development have outrun improvements in I/O speed. In turn, that meant that video cards were the driver behind I/O development, not storage. Now that there's a storage tech in the same throughput class as a video card, I expect there to be a great deal of I/O standards development to deal with it.
But hard drives == tape? Not for a long long time. The development concentration on increasing capacity will pay off for many years to come. PCM arrays with capacities matching modern hard drives (2 TB in a 3.5" half-height case. Unreal!) are undoubtedly a long way off.
Hopefully there are no lurking patent trolls under the PCM bridge...
Re: (Score:2, Insightful)
The only thing plaguing Intel SSDs is price. And I don't think that particular aspect makes Intel real sad.
Re: (Score:2)
Re: (Score:2)
In reality, they would make buttloads more money on a $2 margin at $100 than a $20 margin at $1000, because far more than ten times as many people would buy at $100 as at $1000. With the quality and usefulness of the Intel product, you could easily put the purchase rate at 1/10th the price anywhere from 100 to 1000 times higher.
To make the most money in this situation, you basically want the lowest price that still gives you a profit and you can still keep up with demand. That's a little simplistic, but that's
Re: (Score:3, Interesting)
It probably got tagged vaporware because where the fuck is my system with MRAM for main memory? MRAM is a shipping product, too, but it was "supposed" to be in consumer devices before now, as main System RAM.
Re: (Score:3, Insightful)
Kevin has this right, what an obtuse article.
Henry Newman is talking about PC storage not enterprise storage. He discusses all disk IO performance in MBs/sec, meaning sequential. When in reality, very little (disk level) IO for the enterprise is sequential. The numbers here are flawed as is the characterization of storage.
Storage is where we keep our data. Keeping data is a central requirement of information technology. It will never be a peripheral feature.
Presently the real IO bottleneck is the spinn
disk becomes the new tape (Score:2)
> disk becomes the new tape
Well they got this right even if it was not to be accomplished with the mentioned technology.
I think that in the medium/long time range this will undoubtedly come true.
I mean, would any /. reader bet on the chances of hard drives coming on par with today's memory access speeds, even with zillions of years of technological advancement?
The 70's called. They want their I/O methods back. (Score:5, Informative)
Ya, we had that back in the stone-age and Multics would have been poster-child for this type of thinking, but it was a *bitch* and made portability problematic. I think VMS has some of this type of capability with their Files 11 [wikipedia.org] support - any VMS people care to comment. Unix (and most current OS) sees everything as a stream of bytes, in most cases, and this is much simpler.
An OS cannot be everything to all people all the time...
fadvise (Score:2, Informative)
fadvise and FADV_SEQUENTIAL [die.net] exist in POSIX. Not sure how well different OSes like Linux or BSD use the hints -- I know that some of it has been broken because of bad past implementations.
Re:The 70's called. They want their I/O methods ba (Score:5, Informative)
From TFA:
Ya, we had that back in the stone-age and Multics would have been poster-child for this type of thinking, but it was a *bitch* and made portability problematic.
No, Multics would have been the poster child for "there's no I/O, there's just paging" - file system I/O was done in Multics by mapping the file into your address space and referring to it as if it were memory. ("Multi-segment files" were just directories with a bunch of real files in them, each no larger than the maximum size of a segment. I/O was done through read/write calls, but those were implemented by mapping the file, or the segments of a multi-segment file, into the address space and copying to/from the mapped segment.)
I think VMS has some of this type of capability with their Files 11 [wikipedia.org] support - any VMS people care to comment. Unix (and most current OS) sees everything as a stream of bytes, in most cases, and this is much simpler.
"Seeing everything as a stream of bytes" is orthogonal to "a hint that the file will be read sequentially". See, for example, fadvise() in Linux [die.net], or some of the FILE_FLAG_ options in CreateFile() in Windows [microsoft.com] (Windows being another OS that shows a file as a seekable stream of bytes).
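For what it's worth, the sequential-read hint is exposed on POSIX systems even from Python, as os.posix_fadvise. A small sketch (the function name is made up; the hint is purely advisory, so the call changes readahead behaviour, never file contents):

```python
import os

def read_sequentially(path, chunk=64 * 1024):
    """Stream a file after hinting the kernel we'll read it front to back."""
    fd = os.open(path, os.O_RDONLY)
    try:
        if hasattr(os, "posix_fadvise"):  # absent on some platforms
            # offset=0, length=0 means "the whole file"
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
        total = 0
        while True:
            buf = os.read(fd, chunk)
            if not buf:
                break
            total += len(buf)
        return total
    finally:
        os.close(fd)
```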
Re: (Score:2)
VMS allows a process to map a file into its address space and use the paging mechanism to do the disk i/o. It is kind of like a private page file. You get fast random access. You get persistence if you close the file/map properly when you are finished with it.
It has been a long time since I looked at this stuff, but I think you could share the mapped file with other local processes. You had to roll your own atomic access with mutexes.
All true of modern UN*Xes and Windows as well.
The difference is that in Multics, that was the core disk file I/O mechanism, atop which read/write-style access was built in ring-4 code. In UN*X, Windows, and, I suspect, VMS, you still have {read()/write(), ReadFile()/WriteFile(), QIO-based reads and writes}. The two mechanisms might, or might not, share their {buffer,page} caches, if the second mechanism has a buffer cache (which it does, by default, on UN*X and Windows, although some UN*Xes and Windows a
Re: (Score:1)
We have it today. Tfa's on crack.
It's called madvise [die.net]
In Linux there is also fadvise() [die.net]
Of course... reading from a file (from an app point of view) is really nothing more than accessing data in a mapped memory area. Oh.. I suppose unless you actually use the POSIX mmap call to map the file into memory for reading, y
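The "file access is just memory access" model described in this thread survives today as mmap. A hedged Python sketch (the function name is made up; the madvise method needs Python 3.8+ on a POSIX platform, so it is guarded):

```python
import mmap

def sum_bytes_mapped(path):
    """Map a file and index it like memory; no read() calls in the hot loop.

    madvise(MADV_SEQUENTIAL) is the mmap-side analogue of the fadvise
    hint discussed above.
    """
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        try:
            if hasattr(mm, "madvise") and hasattr(mmap, "MADV_SEQUENTIAL"):
                mm.madvise(mmap.MADV_SEQUENTIAL)
            return sum(mm[i] for i in range(len(mm)))  # plain indexing
        finally:
            mm.close()
```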
Numonyx will probably make it happen (Score:5, Informative)
Numonyx announced some good advances in PCM a few months back:
http://www.pcper.com/comments.php?nid=7930 [pcper.com]
Allyn Malventano
Storage Editor, PC Perspective
Re: (Score:1)
This "author' is pretty much irrelevant (Score:5, Insightful)
I was tempted to stop reading right there, but I kept reading. While his point about POSIX improvements is not bad, the rest of the article is ridiculous. It essentially amounts to: Imagine if we had pretty much exactly what we have today, but we used different words to describe the components of the system! We already have slower external storage (Networked drives / SANs, local hard disk), and incremental means of making data available locally more quickly by degrees (Local Memory, L2 Cache, L1 Cache, etc.) We already get that at the expense of its ability to be accessed by other CPUs a further distance away. It turns out I probably should have stopped reading when I first got the feeling I should when reading the first sentence in the article: "Data storage has become the weak link in enterprise applications, and without a concerted effort on the part of storage vendors, the technology is in danger of becoming irrelevant." I can't wait to answer with that one next time and watch jaws drop:
...
Boss: Where and how are we storing our database, how do we ensure database availability, and how are we handling backups?
me: You're behind the times Boss. That is now irrelevant!
Yeah. That's the ticket
What to do with solid-state memory? (Score:2)
The real question is whether we need something other than read/write/seek to deal with the various forms of solid-state memory. The usual options are 1) treat it as disk, reading and writing in big blocks, and 2) treat it as another layer of RAM cache, in main memory space. Flash and its kin, though, have much faster "seek times" than hard drives, so the penalty for reading smaller blocks is much lower. Flash also has the property that writing is slower than reading, while for disk the two are about the s
Boon for Linux, Bust for Windows. (Score:2)
Windows is more closely tied to the whole "separate levels of RAM memory and hard disk memory" model than Linux is. I could really see Linux getting more traction if all systems went to PCM tomorrow.
Re: (Score:2)
Dude, the virtual memory architectures of the Linux and Windows kernels (and almost all other modern operating systems) are essentially the same. Everything runs on x86, and so everything utilizes the memory management features of x86 chips.
Forgetting the lessons of SANs? (Score:5, Interesting)
Maybe these guys ought to ask someone who was around in the days BEFORE there were SANs. Managing storage back then absolutely sucked. Every server had its own internal storage with its own RAID controller OR had to be within 9m (the max distance of LVD SCSI) of a storage array.
There was no standardization; every OS had its own volume managers, firmware updates, patches, etc. Plus, compare the number of management points when using a SAN vs. internal storage. An enterprise would have thousands of servers connecting through a handful of SAN switches to a handful of arrays. Server admins have more important things to do than replace dead hard drives.
Want to replace a hot spare on a server? What a pain, as you had to understand the volume manager or unique RAID controller in that specific server. I personally like how my arrays 'call home' and an HDS/EMC engineer shows up with a new drive, replaces the failed one, and walks out the door, without me having to do anything about it.
Two words: Low Utilization. You'd buy an HP server with two 36GB drives and the OS+APP+data would only require 10GB of space. So you'd have this landlocked storage all over the place.
Moving the storage to the edge? Even if you replace spinning platters with solid state, putting all the data on the edge is a 'bad thing.'
"But Google does it!"
Maybe so, but then again they don't run their enterprise based upon Oracle, Exchange, SAP, CIFS/NFS based home directories etc like almost all other enterprises do.
The SAN argument (Score:5, Interesting)
The SAN argument is that your storage is so precious it must not be stranded. If you're paying $50K/TB with drives, controllers, FC switches, service, software, support, installation and all that jazz then that's absolutely true. If you're doing something like OpenFiler [openfiler.com] clusters on BackBlaze 90TB 5U Storage Pods [backblaze.com] for $90/TB and 720 TB/rack you have a different point of view. As for somebody showing up to replace a drive, I think I could ask Jimmy to put his jacket on and shuffle down to the server room to swap out a few failed drives every couple months - that's what hot and cold spares are for and he's just geeking on MyFace anyway. Low utilization? Use as much or as little as you like - at $90/TB we can afford to buy more. We can afford to overbuy our storage. We can afford to mirror our storage and back it up too. In practice the storage costs less than the meeting where we talk about where to put it or the guy that fills it. If you want to pay for the first tier OEM, it's available but costs 10x as much because first tier OEMs also sell SANs.
Openfiler does CIFS/NFS and offers iSCSI shared storage for Oracle, Exchange and SAP. If you need support, they offer it. [openfiler.com] OpenFiler is nowhere near the only option for this. If you want to pay license fees you could also just run Windows Server clustered. There are BSD options and others as well. Solaris and Open Solaris are well spoken of, and ZFS is popular, though there are some tradeoffs there. Nexenta [wikipedia.org] is gaining ground. There's also Lustre [wikipedia.org], which HP uses in its large capacity filers. Since you're building your own solution you can use as much RAM for cache as you like - modern dual socket servers go up to 192GB per node but 48GB is the sweet spot.
Now that we've moved redundancy into the software and performance into the local storage architecture, moving storage to the edge is exactly what we want to do: put it where you need it and if you need a copy for data mining then mirror it to the mining storage cluster. We still need some good dedicated fiber links to do multisite synchronous replication for HA, but that's true of SAN solutions also. We're about 20 years past when we should have ubiquitous metro fiber connections, and that's annoying. Right now without the metro fiber the best solution is to use application redundancy: putting a database cluster member server in the DR site with local shared storage.
Oh, and if you need a lot of IOPS then you choose the right motherboard and splurge on the 6TB of PCIe attached solid state storage [ocztechnology.com] per BackBlaze pod for over a million IOPS over 10Gig E. If you need high IOPS and big storage you can use adaptor brackets [ocztechnology.com] and 2.5" SSDs or mix in an array of the Colossus [newegg.com], though you're reaching for a $6K/TB price point there and cutting density in half; but then the SSD performance SAN has an equal multiple and some serious capacity problems. If you go with the SSD drives you would want to cut down the SAS expanders to five drives per 4x SAS link, because those bad boys can almost saturate a 3Gbps link, while normal consumer SATA drives you can multiply 3:1.
If you're more compute focused then a BackBlaze node with fewer drives and a dual-quad motherboard with 4 GPGPUs is a better answer. At the high end you're paying almost as much for the network switches as you are for the media. If you're into the multipath SAS thing then buy 2x the controllers and buy the right backplanes for that - but
Re: (Score:1)
But you don't buy Backblaze storage pods, right? Backblaze is an online service - they built them for themselves as I understand it.
Yes - there are excellent OSS solutions - if you can keep and maintain an engineering staff who can keep up to speed with things, and build things out, you can absolutely build out lots and lots of storage, and maintain it. Jimmy can swap drives. No problem.
The problem is - as a business grows (that's what they want to do) - this could become unmaintainable. Staffing becomes m
Re: (Score:1)
Oh, yeah. We as a team need highly skilled specialists to assemble this stuff and configure it. Guys that know what attaches to which, and what bandwidths and clock speeds and stuff are. Because that's all really complex and detailed. If we don't handle this ourselves we can shuffle along with much less competent people than can be found at the local voc tech, just by relying on the vendor to steer us right.
For folks who don't like OSS I did mention Windows Server, which has clustering and management just l
Re: (Score:2)
If you're doing something like OpenFiler clusters on BackBlaze 90TB 5U Storage Pods for $90/TB and 720 TB/rack you have a different point of view.
Yes. The point of view that the performance and integrity of the data storage technology is unimportant. I doubt you'll have much luck selling that to most enterprises.
Your first faulty premise is that redundancy can (and/or should) be moved into the application.
Your second faulty premise is that what works for Google works for everyone.
Re: (Score:2)
The OpenFiler argument is that capital costs (buying a storage solution) involve more scrutiny than recurring operating costs (staff labor). This occurs in dysfunctional or under-capitalized organizations. Of course, many people work in such organizations. So many, in fact, that the well managed and/or well capitalized organizations may actually be the exceptions.
Fundamentally, that's because for a lot of organizations it is easier to cut capital costs (by canceling or postponing) than staff costs. The problem with cutting staff? You lose the knowledge that those people have, and the chances are that they will have a lot locked up in their heads that isn't written down, no matter what policies you have in place to mitigate this. Recovering from a round of staff cuts can take years, recovering from delaying the purchase of a piece of kit for a year takes not much mor
Re: (Score:2)
I personally like how my source code doesn't randomly walk out the door, but then that's just me I guess...
Threatens to make data storage irrelevant? Hardly! (Score:2)
It's because data storage will ALWAYS be relevant (talk to any Alzheimers' patient if you don't believe me) that access speeds are a concern.
Re: (Score:2)
Data storage gives a nice place to keep everything in sync. It's NOT just about storing any old data.
Also, it simply doesn't scale. Not with the way that individuals today are consuming gigabytes every day. It only provides a benefit if multiple users are hitting the same data sources - same as any other caching scheme - and then we again run into the problem of keeping all these edge caches in sync. It absolutely doesn't scale, and will generate substantially more network traffic than hitting a cent
This does not kill the SAN (Score:1, Interesting)
I don't think the author knows much about the purpose of a SAN. A SAN is not just a disk array giving you faster access to disks. Local storage that is faster does not help you with concurrent access (clusters), rollback capability (snapshots, mirror copies / point-in-time server recovery), site recovery (off-site mirrors), or the substantial data compression gains from technologies like deduplication.
As for speed, my SAN is giving me write performance in the range of 600 MB/s per client. I access my stora
Why not just normal RAM? (Score:2)
I mean, what's the advantage of phase change memory in this scenario? If you lose power to your CPU or your system crashes, you will have effectively lost your memory contents anyhow. So you might as well open your files with mmap and have lots of RAM. The system will automagically figure out what to swap to disk if RAM isn't enough, and it will regularly back up the contents to disk.
microdisk Radio? (Score:2)
Is anyone working on micromachines (MEMS) that set vast arrays of very tiny storage discs into very tiny radio transmitters, each disc transceiving on its own very narrow frequency band? A 1cm^2 chip, perhaps stacked a dozen (or more) layers thick, delivering a couple hundred million discs per layer, each holding something like 32bits per microdisc and a GB per layer, streaming something like 2-200Tbps per layer, seek time 10ns, consuming a few centiwatts per layer.
Or skip the radio and just max out a multi
Bus speeds (Score:1)
What the author fails to realize is that the limiting factor on a SAN is most often the host itself, not the disk. A single disk may not have the I/O, but an array most certainly does (depending on the array). A standard 33 MHz PCI bus can only transfer 133 MB/s (theoretical max). Even faster buses still do not match the I/O speed or throughput of a SAN.
The limiting factor on a PC is that southbridge chip, not the storage. The vast majority of the systems typically connected simply can not push the I/O fast en
Re:CD-R? (Score:5, Insightful)
Phase change memory is nothing like a CD-R. This stuff has the density of a hard drive, and its speed is very close to DRAM's. It's non-volatile to boot. It's a serious contender to become universal memory.
Imagine how different operating systems and programs would be if we could make RAM non-volatile.
Re: (Score:3, Insightful)
"Imagine how different operating systems and programs would be if we could make RAM non-volatile."
Pretty much like they are now? Does anyone actually cold boot their machines anymore?
Now, if RAM were as cheap as hard disks....
Re: (Score:2)
Most people cold-boot their computers. Why? Drivers don't properly support suspend (e.g., it fails), operating systems and applications leak memory and crash, and generally the experience becomes unpleasant.
Re: (Score:2)
They seriously haven't gotten that figured out yet on the Windows side of the fence?
I assumed Windows, Macs and well configured Linux machines were roughly equal on that score. If it's the case that suspend isn't reliable in Windows, the solution isn't expensive new non-volatile RAM, it's getting your driver and OS manufacturer to not write sucky software.
Re: (Score:1)
Suspend works fine in Windows. XP, Vista, and 7 so far have had no issues on any of my laptops. Haven't had a desktop in quite a few years but when I did I saw no need for suspend anyway since it was always plugged in.
Whoever thinks suspend is that bad on Windows probably hasn't tried it since Windows 95 and is just a whiner.
Re: (Score:2)
Actually, there's still a lot of hardware with godawful drivers around in the Windows domain... finding a driver that won't crash the machine or cause other problems when going into or coming out of standby for every piece of hardware can be a pain in the ass, especially on older machines.
Take my Thinkpad X41 Tablet, for instance: On Windows 7, sometimes when I resume from Standby or Hibernate, I can't turn on or off my WiFi adapter any more... My old desktop system refused to go into standby with a Radeon
Re: (Score:2)
If it's so much faster, then there must be a reason it's not being so widely used in the place of DRAM. Price is probably the biggest one, I think it requires the use of four or six transistors per cell rather than one or none. I recall that it's a high power consumer.
Re: (Score:3, Interesting)
It isn't faster than DRAM, it's faster than Flash and hard disks. It is also much more expensive per MB than either: about 4 times as expensive as DRAM at the moment, and very few people are thinking of replacing their persistent storage with battery-backed DRAM.
You seem to be confusing PCRAM with SRAM. Static RAM uses six transistors, while dynamic RAM uses one transistor and one capacitor. This makes it much faster, because you don't have to wait for refresh cycles, but it is a lot less dense and so
Not really (Score:5, Insightful)
First off most non-volatile RAM isn't nearly as fast as DRAM. So let's assume you mean "what if everything were in DRAM, and that was non-volatile, it would be so much faster". Well, again not really. Faster, but there are far more bottlenecks than just disk I/O. You can go buy ramdisks now, or you could make them in your current RAM, copy the OS there, and run off that after you boot. Go try it. Firefox isn't going to render quicker, your mail isn't going to load any faster, and youtube isn't going to lag any less. If you work with large photos, most software is already going to exhaust your RAM, so (given you have sufficient quantities) you're already not losing anything.
In short, because of modern hard disk and OS caching, the ridiculous quantities of RAM these days, and a current reliance on the network for most tasks, a pure ramdisk system isn't likely to be that much better for most people. If you put a large database or maybe compile there, you would see improvement. But that's not common for most people.
Re:Not really (Score:5, Interesting)
Generally, I disagree with the statement as written. I would say that there are other LIMITS. Not bottlenecks. Although for something like video encoding you could easily turn things around and say 'Look! Your hard-drive is bottlenecked by your encoder!'. Yeah yeah. So I guess I agree more than I want to admit.
Almost by definition, there's always going to be a bottleneck somewhere in your system: the chances of ALL of your PC's components working at *exactly* 100% of their capacity are pretty close to zero. And that's for a particular task. Randomize the task and it all goes to hell. So the question we are discussing is really 'If I remove bottleneck n, how many seconds does it shave off the time to run task x?', averaged over a set of 'common' tasks. But if we made our external drives all as fast as DRAM (or whatever, as above), there would be no other single bottleneck left in the system that you could remove which would give you even a handful of percentage points of improvement. Except maybe un-installing Outlook. Or banning Subversion repositories from your enterprise environment -_-.
For most components in a PC, you have to square the performance to see a significant performance difference, all else being held equal. Tasks that lag noticeably, and that are not dramatically improved by a simple doubling of disk performance (3.5 ms seek, 150 MB/s sustained transfer), are pretty rare. Video encoding, for instance. Certainly getting more common. But with a good video card and a cheap hard drive, you're getting pretty close to, if not exceeding, the maximum write speeds on the drive while doing a CUDA rip.
I think that if Microsoft had released a little monitor that displayed the cumulative time spent blocked on [Disk|CPU|Graphics|Memory|Network] (a column in Task Manager, for instance. Hint, hint) back in Windows 95, spinning disks would be considered quaint anachronisms by now. Look at how much gamers spend on video cards, for almost no benefit.
Minute 2 of the Samsung SSD advert: http://www.youtube.com/watch?v=96dWOEa4Djs is pretty interesting, if you haven't seen it yet.
Re: (Score:2, Informative)
You can go buy ramdisks now, or you could make them in your current RAM, copy the OS there, and run off that after you boot. Go try it. Firefox isn't going to render quicker, ....
I have a virtual ramdisk now in XP, with an ISO that loads onto it at boot. Firefox with all its extensions and plug-ins is on it. I can tell you with certainty it loads much faster, pretty much instantly, maybe a half second. I don't really have any delays in rendering, so I don't know what you are referring to there. Sure, most programs are not going to run much faster (most), but they will load a hell of a lot faster. Very helpful if you close and load different stuff. It is so nice to be able to loa
Re: (Score:3, Informative)
Nonsense.
Certainly "everything" won't be much faster - but we're always after faster storage. I/O is a very common bottleneck. Sticking everything in RAM will make a big difference to a multi-use computer.
It really depends on the use case - given enough RAM, a good caching algorithm, and a simple use case, maybe it won't help once the cache is primed (say, serving static content from a fast webserver). Everything ends up in RAM anyway.
But running a system from a fast SSD, or even from a ramdisk, as you
Re: (Score:1, Troll)
MS would find a way to Bloat it all up, trust me.
Re: (Score:1)
SSDs are coming down every day - this would be the *next* step past SSDs. People want SSDs because it makes things faster -the same thing will (well, could) happen with this technology.
Everyone says the "good enough for Average Joe" line every year, about every new technology... I've tried to use Average Joe's computer often - it usually sucks.
I'd agree - buy enough RAM, have a good caching mechanism, and you shouldn't need all this new quasi-RAM-like stuff - but it's sensible that at some point we can l
Re: (Score:2)
Using caps doesn't really make your argument any stronger.
WHAT do you think would be stronger, and WHY? RAM is effectively non-volatile, so long as you don't turn off the power. I never power down my notebook, so it effectively has non-volatile RAM. Ditto for my desktop at the lab.
So if you really think non-volatile RAM makes everything sooo much faster, use the sleep function on your computer.
Re: (Score:2)
> RAM is effectively non-volatile, so long as you don't turn off the power.
Yah, and my house is effectively indestructible, so long as you don't destroy it!
Re: (Score:3, Interesting)
Non-volatile? Like all the other "non-volatile RAM, instant-on" technologies that have gone before? MRAM, SRAM, Holographic storage... and now phase-change memory.
I've heard this marketing bullshit before. Call me when it's not vapourware.
Re: (Score:2)
At least holo storage made it to something concrete... but InPhase markets it as a replacement for optical storage, which is a high end market that companies shell out the big bucks for, so they can have top reliability in WORM archiving for legal reasons.
What would really be remarkable is one of these technologies making it not just to the boutique high end archiving market, but to something that can replace tape drives, ZIP drives, or USB flash drives. Enterprises would be beating down the door of a comp
Re: (Score:1)
I don't think that much would change: if some piece of memory is accessible like RAM--that is, it can be modified quickly, with just a single CPU instruction--then for most practical purposes it might as well be volatile memory, because a software bug could easily lead to it being completely wiped.
Re: (Score:3, Insightful)
The speed of PCM would need to closely match - or exceed - the speed of DRAM for people to adopt it as a replacement, so I doubt the model would quite be one of non-volatile RAM. I imagine it would be more like having a ridiculously fast SSD.
Given the propensity of programs to corrupt and/or leak memory, I'm not sure I'd want my system memory to be non-volatile. The dividing line between system memory and mass storage allows for robustness against errors which, without the ability to reboot, wipe the slate
Re: (Score:1)
It would be kind of awesome if files were accessed the same way as memory areas though. Kind of like if everything was transparently mmap-ed, but with the ability to grow/shrink the area, and with the option to have changes reflected immediately on the underlying medium.
file *foo = new file("/dev/pcm");
foo->append("Hello world1");
foo[11] = '!';
save foo;
(sorry for replying to my own post, forgive me mods!)
Re: (Score:3, Informative)
IBM AS/400 worked like that - the TIMI virtual machine maps all storage into a flat 128-bit address space.
Re: (Score:2)
That's a malloc-like API to storage. I'm pretty sure there is plenty of CS literature on this subject, but I don't have the time to google it. I'll just jot down a few quick thoughts about it.
If storage is about as fast as RAM you can work on it as if it were RAM, build data structures on it and persist them. The OS will provide ways to share them with other programs. The storage will be a single in-memory object oriented database.
An example: an editor could persist its internal OO data representation of a text fi
Re: (Score:1)
The speed of PCM would need to closely match - or exceed - the speed of DRAM for people to adopt it as a replacement, so I doubt the model would quite be one of non-volatile RAM.
The units they're sampling out now are faster than the main memory on the four year old computer on which I'm typing this response.
The mass popularity of netbooks among both geeks and muggles indicates that fast is no longer the strongest defining feature of a computer.
Re: (Score:2)
It was going to be the non-volatile memory that would replace hard disks! Then it never happened and Flash memory kept getting cheaper and better. Kinda makes you wonder if this wasn't another trick by the Intel Capital folks to pump up the stock. Then again Ovshinsky [wikipedia.org] basically invented CD-RW/DVD-RW amorphous phase change materials and NiMH batteries so everyone figured out it could
Re: (Score:1)
I imagine that is a generous characterization.
There seem to be plenty of not-even-computer-related engineers and students here (and others too!), if someone reads me in the wrong direction.
Re: (Score:2)
Whoa, methinks someone struck a nerve.
Re: (Score:1)
You misread my intent.
Re:Plastique explosives plus hard drive (Score:4, Funny)
Or just toss your HD in a forge furnace. You should get two phase changes.
Re: (Score:2)
If you have a hot enough furnace you may even get THREE phase changes.
If you put it in the LHC you may get FOUR!
Re: (Score:2)
third state change, not third state