Hardware

A Semi-Radical Approach To Avoiding fsck 116

Dru writes: "This is an article about a hardware technology that is largely unknown in the new Unix community. In theory, with this inexpensive hardware, your BSD or Linux box could start doing guaranteed reboots in under 2 minutes (no fsck required) and super fast database writes. It could leapfrog all of the journaling filesystem projects as well. Yes, I wrote the article. The article is long, detailed, and mentions FreeBSD often. However, I do believe it is relevant to any other PC Unix. If enough people learn about it, maybe they will start demanding it from their favorite hardware vendor." With RAM and hard drive space both continuing to decline, I wonder how the speed / use curve for individual PCs' storage (from L1 cache to backups) will evolve. With a similar bent, Arek urges you to "take a look at our company's Solid State Disk Drives." How'dja like 8 or so gigs of DRAM next time you edit a video or burn a CD?
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Thanks. I was about to change the URL to betanews.com to read that when I saw an article with no comments. It was posted at 3:21 and I went, hey... I might have a chance to become like all those other /. lamers and go for a first post!

    Thanks for not trolling me for beating you to the first post, Tronster.


    ----------

  • With RAM and hard drive space both continuing to decline
    Did he mean "RAM and hard drive price both continuing to decline"?
  • I guess it's a neat idea, but it seems like a hack to get around using a JFS. Maybe a journaling FS takes some time to implement, but I run ReiserFS on my laptop just fine with no problems (except that time I forgot to include it in a kernel compile and rebooted... oops...). This is a lot of work to go through just to get around writing some software.

    Also, isn't this going to be a pretty significant load on the PCI bus? What's the latency on a PCI transaction? It seems like you could run into all sorts of trouble there and perhaps end up slowing down your system with all the traffic.

    Or perhaps the PCI bus is loads faster than I imagine?
  • If it's mission critical it should be on a UPS with controlled shutdown. No fsck on reboot.

    If you're talking about the actual in-case PSU going pop, then reboot time is the least of your worries.
  • I must question the sanity of someone who mentions "Fight Club" in a list of otherwise brilliant, distinguished works of art ;)

    And remember, he was only god as long as someone believed.
  • Can someone find a price on the 2 Gig HD from Platypus? The website provided is nonnavigable, and I cannot find a reseller that displays a price. It seems like a cool idea to have a big-ass RAM HDD, but I know it will be expensive.
  • With the current RAM prices, a product like this could become very feasible. Right now, you can get 1GB of PC133 SDRAM for under $350 (based on Pricewatch's best prices.) A single $70 256MB DIMM would be plenty for a device like this, and adding a few more DIMM slots for do-it-yourself upgrades would certainly strengthen its market appeal.

    Just something to think about for those still skeptical...

    - A.P.

    --
    * CmdrTaco is an idiot.

  • by Wakko Warner ( 324 ) on Tuesday December 26, 2000 @11:58PM (#540015) Homepage Journal
    UPS batteries only last a few years. What happens when yours fails a few months or (if you get a defective one) a few years before its expected lifetime is up? Never, ever count on any of your computer equipment, and always have as much protection as you can afford. This is one means toward that end.

    - A.P.

    --
    * CmdrTaco is an idiot.

  • by ericfitz ( 59316 ) on Wednesday December 27, 2000 @12:00AM (#540016)
    A write-caching disk controller combined with a journaling file system would give you the same benefit. You're just reinventing the wheel.

    The only really new thing here seems to be the fact that the "TRAM" is file-system aware, which is just another way of saying that you are investing in hardware which will just tie you to tired old EFS.

    Windows NT has had a journaled file system forever, and the journaling doesn't cause the major performance impact that everyone seems to think it does. Maybe someday Linus will get in the mood and allow a journaling FS into Linux.

    On a side note, what does the OS do in case of some sort of TRAM failure?
  • The website provided is nonnavigable

    Yes, it is a crappy website; after about 15 minutes I discovered you click on an image to get to the 'QikDRIVE' info page here: http://www.platypus.net/pages/products_qikdriv2.html [platypus.net].

    I haven't downloaded the PDF, but it looks like there's no price info. It does offer drivers though, so it's probably not vaporware.


    echo $email | sed s/[A-Z]//g | rot13
  • Am I mistaken, or did this article feel just too warm and fuzzy? I know there is a lot of good technical info in there, but it's all wrapped up in a very strange manner. I don't think you are solving the whole problem; you're overlooking and waving your hands over the rest. I mean, so you put a few sticks of memory and power on a PCI card! You do this because a UPS can die; well, I've got news for you, the PCI card could die too! AND if you are trying to make reboots faster, don't bother; if you are serious you would have a backup system, and the same should be true for a web server dying. The only time I want fast reboots is when a good game of UT croaks on me and I want to get back to fraggin'... this is not technology I would use for mission-critical apps!

    Hmmmmm... the more I think of it, the more it feels like a marketing team thought that a semi-tech article on Slashdot would be just the ticket for killer web site traffic! Maybe it's the lack of caffeine on my part... What do other people think?
  • P.S. Red River Computer Company [redriver.com] sells them for $3,874.00 for the 1GB model and $63,275.00 for the 8GB model.


    echo $email | sed s/[A-Z]//g | rot13
  • It may not be the greatest thing in the history of the universe, but it has its place.

    Yes, serious users should have backup systems and should not be dependent on a single piece of hardware. But not all organizations or individuals can afford the backups. And it's always nice to have one more safety net.

    Just think of it as a (theoretically) cheap way to make data integrity somewhat safer, without sacrificing useful uptime. Could be useful?


    My mom is not a Karma whore!

  • by pjrc ( 134994 ) <paul@pjrc.com> on Wednesday December 27, 2000 @12:23AM (#540021) Homepage Journal
    RAM is volatile.

    Sure, a battery backup sounds like it solves this, but consider that DRAM stores its charge on tiny capacitors, and requires a controller to be performing "refresh" access cycles regularly (typically every 15.6 µs per row). This means that not only must the battery be good, but the controller accessing the DRAM must continue providing the refresh cycles without interruption. That may sound simple, but not all DRAMs are created alike... SDRAM DIMMs have a feature called Serial Presence Detect (SPD): a small non-volatile 2-wire serial EEPROM that holds identifying data about the size and timing parameters of the memory. A typical DRAM controller would be initialized at boot time... a card like this would require a special DRAM controller that only initialized its timings when the DRAM/battery is first installed. Perhaps the controller would be designed to always use relatively slow and conservative timings, so it'd never reinitialize to other settings (that could be wrong) or stop providing the critical refresh at any point.

    The point is that to retain memory, DRAM requires not only power but a properly operating controller to supply the refresh cycles. Magnetic media maintains its contents without either of these conditions. Compared to magnetic media, DRAM is very volatile. "Mission Critical" data, whatever that may be, would exist as tiny charges on very tiny capacitors, which could dissipate in only about 4-8 ms if the DRAM controller doesn't perform perfectly... inside a computer (designed as a reliable server) which has just crashed for some unknown reason!
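    To put rough numbers on that refresh requirement (these are typical datasheet figures, not values from the article): with a worst-case cell retention of 64 ms and 4096 rows per bank, the controller has to touch every row about once every 15.6 µs, forever.

```python
# Back-of-the-envelope refresh math. Values are typical/hypothetical
# datasheet numbers, not taken from the article.
retention_ms = 64.0            # worst-case cell retention time
rows = 4096                    # rows per bank that need refreshing
per_row_us = retention_ms * 1000.0 / rows
print(round(per_row_us, 3))    # 15.625 -> one row refresh every ~15.6 us
```

    Miss that window on even one row and whatever "mission critical" bits lived there are gone.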

  • P.P.S CDW [cdw.com] sells [cdw.com] the 2GB model for $5,768.55 (Special Order)


    echo $email | sed s/[A-Z]//g | rot13
  • by elprez ( 22886 ) on Wednesday December 27, 2000 @12:25AM (#540023) Homepage
    First, it is absolutely critical that the OS creates some log or structure of operations on the TRAM for filesystem operations. Basically, if the OS can mark the beginning and end of an operation and place it in this memory, you can now get a journaled meta data filesystem without a complete re-architecture of a filesystem.

    Basically, if the OS can determine the beg/end of an operation (transaction) and it logs this information, then we have a journaling file system. Any persistent storage will suffice for the journal - 'TRAM' or hard disk or clay bricks. The only difference is the access time.

    In general there is no magical way for the OS to know what data is the beg/end of a transaction. The OS could try to handle meta-data in this fashion: it can log the meta-data changes it would make in atomic transactions and replay uncommitted transactions on a reboot. However, the file system still needs to be aware of this journaling.

    Consider a power failure during a commit to the file system. The file system is in a partially modified state and the transaction has not been retired from the TRAM journal (since it did not complete). When the system boots again, the TRAM journal is replayed and the same operation begins again, except this time on an inconsistent file system. The file system needs to recognize that a partially committed transaction needs to be rolled back.

    The above is based on my (very incomplete) understanding of journaling file systems. However, a TRAM card amounts to a cache for a file system journal, so in no sense is it going to replace or leap-frog journaling file systems.
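    The begin/commit/replay logic described above is simple enough to sketch. This is a toy model with invented names, not the article's actual design: operations logged between BEGIN and COMMIT markers are replayed after a crash, and a transaction missing its COMMIT is rolled back (ignored).

```python
# Toy journal replay: only fully committed transactions survive a crash.

def replay(journal):
    """Return the list of operations safe to re-apply after a crash."""
    applied, pending, in_txn = [], [], False
    for record in journal:
        if record == "BEGIN":
            pending, in_txn = [], True
        elif record == "COMMIT":
            applied.extend(pending)   # transaction completed: keep it
            pending, in_txn = [], False
        elif in_txn:
            pending.append(record)    # buffered until COMMIT is seen
    return applied                    # a trailing open txn is rolled back

# Power fails mid-transaction: the second transaction never commits.
log = ["BEGIN", "alloc inode 7", "COMMIT", "BEGIN", "link inode 7"]
print(replay(log))                    # ['alloc inode 7']
```

    The partially-committed-on-disk case from the previous paragraph is exactly what this toy model glosses over: replay alone isn't enough if the filesystem itself was mid-write.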
  • by l1nuxn3rd ( 247121 ) on Wednesday December 27, 2000 @12:26AM (#540024)
    Here is a link to the Solid State Hard Drive Pricing Page from CDW.
    http://www.cdw.com/shop/search/results.asp?grp=HSO [cdw.com]
    Platypus products are listed as well as some from Quantum and Sandisk.
    You're talking $1,969.40 US for the Platypus QikDRIVE8 512MB, the smallest model I saw.
    CDW is the Authorized reseller I found for the US.
  • couldn't find any...

    and binary ones only for 2.2.17

    this is the problem with closed-source drivers for a fast-changing OS like Linux...

    yuck
  • Gad, if I was just patient I would collate all this info into one post, but no.

    For a list of all Platypus products, and their prices at CDW [cdw.com], check out this search: http://www.cdw.com/shop/search/results.asp?key=platypus [cdw.com]


    echo $email | sed s/[A-Z]//g | rot13
  • I don't see the point of using this as a backup; as others have pointed out, the battery is likely to die like that of a UPS. So its only job would be to back up a UPS, which shouldn't die anyway.

    What seems more likely is to use it as a replacement for hard disks. And that made me remember that it already is used like that, for example in electronic organisers. But you'd want good battery testing and a hot-swappable battery so you can replace the battery while the machine is up. I wouldn't want to rely on a battery to protect my data, as they aren't very failsafe.

    AussiePenguin
    Melbourne, Australia
    ICQ 19255837

  • by Anonymous Coward
    Running ReiserFS on Linux. Boots fast without a problem. Besides, how often do you reboot? One power failure in the last 6 months, and I rebooted once to upgrade the kernel. ReiserFS is so rock solid and so fast that I haven't bothered replacing the bad battery in my old UPS. Don't need it. Try ReiserFS. You'll be impressed.
  • by X ( 1235 ) <x@xman.org> on Wednesday December 27, 2000 @12:42AM (#540029) Homepage Journal
    Most RAID controllers will give you a battery backed-up write-back RAM cache. Depending on how you configure it, it will say that a write is committed as soon as it's in RAM. This accomplishes the same net effect without requiring all this modification of the OS.

    Of course, lots of people don't like to configure their RAID controllers this way, because there is no redundancy for data in RAM, not to mention that the risk of failure is still higher than with a hard disk.

    I hate to say it, but that article seems like it was written by someone who has not been out in the real world.
  • RAM is VERY cheap now, too.

    But of course they want you to buy their RAM, at $7,779.60 for 1GB!

    I would really like to know if you can just slap any old SDRAM in them or what. And also whether there are any alternatives to the actual board on the market, as $1,969.40 for the board and 512MB of RAM is a bit steep even to get started (assuming you could add SDRAM at market price).

    BTW, I am getting these prices from http://www.cdw.com [cdw.com]. I don't suppose it gets much better, but does anyone have other info?


    echo $email | sed s/[A-Z]//g | rot13
  • That's not too bad. I've just sort of finished a bunch of e-mails to my computer-addict uncle about the possibilities of something like a 15-second boot on PCs, if all the important stuff could be stored electronically.
    It seems like there have been a few articles in the past about instant-boot stuff from flash ROM drives. It's totally possible with the technology we've got going right now, with the falling prices and all.
    Something like the instant-on PDAs/CE devices would be nice (without the CE, of course).
  • by zmooc ( 33175 ) <zmooc@[ ]oc.net ['zmo' in gap]> on Wednesday December 27, 2000 @01:00AM (#540032) Homepage
    To the author:

    Why on earth do you want to tell us things like "Unix was designed to be simple. This means, if they found that they could do certain things as libraries in user space, then it didn't belong in the kernel."? It has absolutely nothing to do with TRAM. Actually that's true for nearly everything you say in your article; you use a lot of irrelevant examples and try to mention everything you seem to know about Unix, and then explain the solution in 2 lines?! Why don't you mention the really interesting things, like the fact that such cards most probably fail just as often as UPSes, why this should be on a PCI card and not on the disk (OK, that's because you want to access the memory directly, but please explain this...), or what the consequences are concerning access time?

    Although the idea is good, I think you could have done a much better article; come to the point!

  • wow, if only the 8GB ones didn't cost $26,000.
  • The point is that to retain memory, DRAM requires not only power but a properly operating controller to supply the refresh cycles

    Then use Static RAM with 5ns (or lower) cycle times instead. The idea has lots of problems, but the type of memory required (which isn't specified) isn't one of them.

    Simon

  • Network Appliance has been doing this on their filers for many years now and it works very well, although I would question using DRAM for the purpose. How would one know when their battery has failed? NetApp uses 32MB of NVRAM and a lot of other fairly commonplace technology in an interesting fashion that results in a very, very fault-tolerant piece of hardware. I have a 600GB Filer (limited to 200GB volumes for backup expediency) that I can pull the plug on at any time without damaging the filesystem. As a matter of fact, I have done this on occasion. Boots take a few minutes regardless of how the filer was downed.
  • Can someone find a price on the 2 Gig HD from Platypus?

    No, but I'd bet on it being lots of money. We looked at solid state drives at a previous company. Since it was for mission critical stuff, we were looking at a RAID5 array of them, and it was priced at over £200,000 for a modest sized array -- something that would cost probably a tenth of that with conventional drives.

  • The whole industry works towards less hardware to do the same job all the time. Solving a software problem by adding hardware is not the way to go, especially not when ReiserFS etc. are already present.
  • by dbarclay10 ( 70443 ) on Wednesday December 27, 2000 @02:22AM (#540038)
    You raise an implementation issue.

    The point is that to retain memory, DRAM requires not only power but a properly operating controller to supply the refresh cycles.

    Laptops run off batteries, and their memory seems okay.

    My Pilot runs okay off AAA-type batteries. Memory has been running quite well, thank you :)

    This fellow wasn't talking about slapping some RAM sticks to a breadboard and running current through the wires. Of course you would need a memory controller. Duh. :) The problems you raised were solved many years ago. If they hadn't been, nobody would be using volatile memory(like SDRAM) at all - it'd be too unreliable.

    I almost think you're just looking to spread some FUD.

    "Mission Critical" data, whatever that may be, would be existing at tiny charges on the very tiny capacitors,

    You say this like it's a bad thing! It's relied on every day. Hell, mission-critical data on its way to be written to disk is nothing but A CLUMP OF ELECTRONS MOVING ALONG A WIRE, in a lot of cases on wires many times thinner than a human hair.

    Don't make a mountain out of a molehill. This isn't a bad idea, and just because they'll have to put a DRAM controller on the card doesn't make it one.

    Dave

    Barclay family motto:
    Aut agere aut mori.
    (Either action or death.)
  • He gives the explanation on why they shouldn't be on disks....

    ARRGGHH
  • by A Masquerade ( 23629 ) on Wednesday December 27, 2000 @02:27AM (#540040)
    This has been talked of for quite a time, and is hardly radical. What's more, it is not an alternative to journal-based filesystems; logically it's an adjunct to them.

    First you have your filesystem that buffers transactions in a journal that is streamed to disk. Then, for performance, by avoiding all those extra seeks, you put the filesystem journal on another device - say a small fast dedicated disk. Then you make that device an NVRAM device rather than something based on spinning rust.

    What's more, if you are interested in something like mail systems, where you get a lot of transactions that *must* be committed to stable storage (although a lot of MTAs don't do that in spite of the wording of RFC821), and you use a filesystem like ext3 with a data journalling mode, then putting the journal onto NVRAM makes a huge difference - by the time it comes to the point where data would be committed to the disk from the journal, most of the data (ie e-mail messages) is now unwanted (since the messages have been delivered to final local destination or for onward transmission) and so you don't even need to do the disk ops...

    All of this is pretty much available now in ext3, other than the tools to get the journal onto an NVRAM disk - and that's just detail.

    So, nice idea, needs more flesh, a little more infrastructure needed round it.

    [Those who came to the London UKUUG Linux Conference might well have heard these discussions before going on in various corridors :-) ]
  • Fail-over clustering. Being redundant is good, good, good. If we think a few sticks of RAM are going to solve inherent file system problems, we either don't understand the problem or we don't understand the technology. I see its benefits, but for some reason it feels like it should be part of the hardware architecture rather than a simple PCI card with yet another buggy driver supporting it and making it all work.
  • wow, you've taken what was otherwise an interesting article and stolen its virtue, its purity, its virgin soil, with a stupid first comment that only inspires other idiots to reply to it.

    And don't burn karma, there is plenty of coal to go around for all, and it lights much better.

    (Forgive my sentiment, for it's late, I have had a long day, and you're an asshole)
  • Comment removed based on user account deletion
  • Comment removed based on user account deletion
  • by jmp100 ( 91421 ) on Wednesday December 27, 2000 @03:14AM (#540045)
    You'd have to implement a completely separate bus or you'd risk getting severely bogged down. You'd have to make a dedicated bus that went from the CPU to a dedicated slot and then to the hard-drive controller. Doing this on a PCI bus like the one that exists today would not be particularly efficient. Certain IDE and SCSI drives talk at 100MB/s and up; having disk I/O pass over the PCI bus TWICE (CPU -> TRAM -> HD) isn't the way to go, since your Ethernet, video, and other PCI devices are also competing for time on the bus.

    Of course, implementing a separate bus will take millions in research (after all, it has to be done right), but once everything is decided on, it's probably only $20 or so in extra hardware. In theory, all you'd need is another PCI bridge chip or similar. Ever seen the inside of a NetApp? The motherboard has a CPU, space for RAM, a PCI bridge, and some slots. Nothing else. Extremely simple.
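    The double-hop arithmetic above is easy to check. Assuming plain 32-bit/33 MHz PCI (the textbook numbers for the common case, not a measurement), the peak bandwidth ceiling halves before any other device even contends for the bus:

```python
# Rough ceiling on write bandwidth if each disk write traverses the
# PCI bus twice (CPU -> TRAM, then TRAM -> disk). Figures are the
# standard numbers for 32-bit/33 MHz PCI, not measurements.
pci_peak_mb_s = 33.0 * 32 / 8      # 132 MB/s theoretical peak
effective = pci_peak_mb_s / 2      # every byte crosses the bus twice
print(effective)                   # 66.0 MB/s best case, shared with everything else
```

    And that 66 MB/s is before Ethernet and video traffic take their cut, which is the real argument for a dedicated path.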

  • Windows NT has had a journaled file system forever

    Unless they did some heavy changes in NTFS5, it is still log-based. Think of it as a circular buffer, usually 2-4MB in size, where file system transactions are logged.

    It usually works just as well as a full journalled FS - 2 second "fsck" and a consistent fs. Under heavy disk activity you do however run the risk of exhausting the size of the log, and end up with an inconsistent file system if you crash.

    Other features of NTFS are cool, though. The fs attributes are a superset of posix and vms', so it can emulate both. It also has several metadata files, which provided 3rd parties the hooks needed to add quotas and other features to NT4.
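    The circular-buffer behavior, and the overflow risk under heavy activity mentioned above, is easy to model. This is an illustrative sketch only; the size and record names are made up, not NTFS's actual log format:

```python
from collections import deque

# A fixed-size circular transaction log: once full, the oldest
# records are overwritten -- the "exhausting the size of the log"
# failure mode described above. LOG_RECORDS is an invented size.
LOG_RECORDS = 4
log = deque(maxlen=LOG_RECORDS)

for i in range(6):                 # heavy activity: six records, four slots
    log.append("txn-%d" % i)

print(list(log))                   # ['txn-2', 'txn-3', 'txn-4', 'txn-5']
```

    txn-0 and txn-1 have fallen off the front; if the filesystem state still depended on them at crash time, you're inconsistent.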
  • I meant: why use a PCI-interface instead of implementing this as a sort of cache on the disk by adding begin/end-tags to the protocol the disk uses (IDE/SCSI)? Is that explained in the article? If so; please show me where. Thank you.

    What I was trying to say is that hardly anything was said about TRAM compared to the extensive description of how it currently is done; all sorts of current solutions are covered, but when it comes to TRAM he just says `do it so and so' without mentioning WHY to do it that way or what the alternatives are. This was just an example of that, but I could have chosen a better one...

  • is such an idiotic idea. What he wants is a battery-backed hard drive buffer. Does that take 20 pages to explain? I have the same thing right now; I call it a UPS. It does the same thing without the slowdown of PCI-slot throughput and it doesn't cost a lot. So if you're reading this, I ask the question... why battery-back a buffer module and not the whole system? And this is only for production systems... home users would still need a JFS because they sure as hell don't want to pay for this shit! How many home users back up their systems, period?

    To all JFS developers: 'Nothin' to see here, move along' with development.
  • Aside from some reliability issues of this technology, I wonder if it's worth the trouble.

    Besides, BeOS boots in about 15 seconds into GUI, even if you previously turned off the PC without shutting down. So, journaled filesystems DO have advantages. Linux may never achieve such high speeds in booting up, but still, I predict that a good JFS will benefit it.

  • That's why you hook the UPS up to the serial port on the machine, so when it starts to die a violent death it'll signal the system to do a clean shutdown. Of course, this is assuming you bought the nice, well-engineered UPS as opposed to the dirt cheap ones you get at Best Buy for $40!

    The nice UPS units do a few things to ensure this doesn't happen:

    1. volt-o-meter on the circuit voltage
      Doesn't just wait for the source power to go out, checks the internal power for sudden drops.
    2. more than one battery
      If one dies, the next one struggles forth long enough to send the shutdown signal.
    3. use batteries that don't die
      IIRC, the rechargeables they use in those bad boys don't just die when they have a problem, they slowly fade out. Plenty of time.

    This doesn't take into account the messy situation that occurs when someone yanks the plug. But if this ever happens, it's time to get a new network engineer/sysadmin/intern.

    One last idea: plug the UPS into a UPS into the wall. Mua!

    Alakaboo

  • Puh-lease

    I can't remember the last time I had a DRAM-related hardware failure. This is across approximately 150 computers, from 80386s to the latest and greatest around. Hell, I don't think I've ever had trouble on the old XTs or even a Commodore 64 unless I was specifically screwing with the controller.

    The controller would initialize to the settings in the DRAM's SPD and stay there. What is so difficult about that? In the case of multiple DIMMs, initialize to the slowest/most conservative of the bunch. There's no need to synchronize to the host system if you use some sort of buffering or even extend the PCI access time until the end of the refresh.

    Yeah you could go SRAM but that's VERY expensive. How about PSRAM which is used in the Palm series of handhelds? They're DRAM with SRAM timings and built-in refresh circuitry. Much cheaper but consume more power than SRAM.

    As far as crashing for no known reason, why would you have the OS RELY on the controller? Do a mem check at start and perhaps during idle times/backup times. If the memory goes for a shit, you stop using it. Code your VFS around it being there but with sane timeouts so if it dies you can recover. This kind of design technique has been around for decades.

    Christ, you make it sound like magic how this stuff works. The dynamic memory technology around has been proven over the years. I'd be more inclined to think that the battery or associated switchover circuitry would give you trouble before the DRAM ever would.

  • Then use Static RAM with 5ns (or lower) cycle times instead.

    Yeah, and quintuple your memory costs. Try PSRAM (Pseudo-SRAM) which is really DRAM in SRAM's clothing (pinouts and timings). They contain a DRAM bank and the refresh circuitry needed to keep the DRAM alive but without bothering with an external controller.

    The cost is more expensive than regular DRAM but WAAAAY cheaper than SRAM.

  • Besides, BeOS boots in about 15 seconds into GUI, even if you previously turned off the PC without shutting down. So, journaled filesystems DO have advantages. Linux may never achieve such high speeds in booting up, but still, I predict that a good JFS will benefit it.

    Actually, it does. If you just disable services like Apache, proftpd, etc. I counted this starting at kernel boot until GDM was up and running on an Athlon 700. My P166 could do the same, but without a GUI (it doesn't have X installed so I couldn't test).

  • The text says that there's no production-ready journaled filesystem yet for a free Unix. I completely disagree with that. We use Red Hat Linux with ReiserFS on RAID5 for some of our servers and it's MUCH better than normal ext2: no corrupt filesystems anymore, no 'enter root password for maintenance' when there's an fs problem anymore, and it's much faster. IMHO, ReiserFS is much more production-ready than ext2 or ufs.
  • "
    One last idea: plug the UPS into a UPS into the wall. Mua!
    "

    No! That makes the UPS nearer the computer a critical failure point.
    _Parallelise_ the UPSs, rather than serialising them.

    FatPhil

    -- Real Men Don't Use Porn. -- Morality In Media Billboards
  • Hey, around here free hard disk space is continuing to decline all the time. Waiting for one of those after Christmas deals on a new disk.
  • Isn't NTFS essentially HPFS?
    If I remember Microsoft OS/2 correctly, that is.
    I can't be sure there was theft/borrowing/evolution or whatever, but I do remember seeing a feature comparison of a whole bunch of FSs, and HPFS and NTFS had suspiciously similar features.

    FatPhil


    -- Real Men Don't Use Porn. -- Morality In Media Billboards
  • I agree.
    I could summarise the article in...
    (Use a nonvolatile disk cache)
    ... 5 words.

    OK, it's not quite what he's saying, but for the compactness I think it's pretty close.

    FatPhil
    -- Real Men Don't Use Porn. -- Morality In Media Billboards
  • It seems to me that this idea, like many things, is simply evolutionary as opposed to revolutionary.

    I believe that the original author's article is fundamentally correct but there is a bit too much arm-waving and it blurs the description somewhat.

    If one were to actually perform a more detailed analysis of the proposal, you would probably end up having to produce a journaled or transaction-logged system with all the related overhead. The difference being that the journal or xact-log buffer is a battery-backed XRAM device instead of the hard drive itself.

    This could improve performance of this part of the system significantly.

    After all, I think this is just perhaps another (better?) way of doing the same thing...

    ....Paul
  • This (being effectively part of the disk subsystem) is a comparatively low bandwidth device, there's no need for 5ns SRAM. Drop the cost by going for slower SRAMs.

    FP.
    -- Real Men Don't Use Porn. -- Morality In Media Billboards
  • The only really new thing here seems to be the fact that the "TRAM" is file-system aware,
    I'm still trying to figure out why this is a Good Thing. What is wrong with an entirely OS-independent disk controller card with a battery-backed write cache?

    Whenever your computer told it to write to the disk, the first thing it'd do is write each sector to the cache, and write the drive and LBA sector number to a separate section of memory. The controller could then index this structure by LBA to implement "elevator" writes, vastly improving performance with little risk of data loss.

    With enough memory, it could also be configured to permanently cache the MBR, boot loader, kernel, init scripts, daemons... Make it big enough and the whole swap partition is in there, too. Think how fast something like this could boot.
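    The "elevator" ordering mentioned above boils down to indexing pending writes by LBA and flushing them in ascending order, so the head makes one sweep instead of seeking back and forth. A minimal sketch, with invented sector numbers and payloads:

```python
# Pending cached writes indexed by LBA, flushed in one ascending
# sweep across the disk instead of in arrival order.
pending = {9031: b"d1", 12: b"d2", 4400: b"d3", 13: b"d4"}

flush_order = sorted(pending)     # ascending LBA = one head sweep
print(flush_order)                # [12, 13, 4400, 9031]
```

    A real controller would do a bidirectional scan (servicing new writes on the return pass too), but the win is the same: seek distance proportional to one pass over the platter, not to the number of writes.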

  • I remember back in '86, a manufacturer had an 8-meg RAM drive with a big gel-cell backup battery: essentially a DRAM hard drive like you are describing for a cache. It worked on Apple IIGS-series computers, which usually had less than a meg of main memory but could be expanded to 4MB. I am sure a RAM drive could be built today. I am not talking about the solid-state hard drives some posters have commented on; those use static RAM and communicate through a PCMCIA port, with maximum sustained throughputs of about 1MB/s, which is not very useful if you are working with databases. If anything, having your OS on one of these would give amazing startup times. The Apple IIGS usually booted from an 800KB floppy and took a few minutes; ads for the RAM drive said seconds! With a 1.2MHz processor! Ahhh, the good ol' days.

    But seriously, why the hell would you use this for a cache instead of the main drive? Why go from battery RAM -> hard drive -> tape and not just battery RAM -> tape? Especially if it is battery-backed?
  • Fewer subsystems isn't always the best way to go.
    I _don't_ want onboard video and sound hardware on my motherboard. It's a server in a wardrobe. No need for Matrox this, gfx that, live! the other, 16 bit, 32Meg, MPEG nonsense. Similarly I don't want AGP, PCI _and_ ISA on my Mobo.
    So personally, I wish the whole industry _wasn't_ working towards that goal.

    I agree with your points about doing the software bit right before throwing hardware at a problem, though.

    FatPhil

    -- Real Men Don't Use Porn. -- Morality In Media Billboards
  • Cos it don't work. I said that 3 weeks ago, and they're still pestering me.
    -- Real Men Don't Use Porn. -- Morality In Media Billboards
  • Absolutely correct. This idea is ludicrous. You need hard storage back-up. The parity failure rate of DRAM vs. a hard disk should be enough to mandate this.
  • Yeah, contact sales@ininet.com for pricing information.

    As an aside to the main topic, I've been the primary US beta-tester for these QikDrives under NT/2K/Solaris/Tru64/Linux/FreeBSD. I consult to Platypus, and assist their engineering team. This is good stuff, if your application can utilize it effectively.

    I have an Ultra10 with the 2GB half-card in it now.

    -JPJ
  • What I'd like is an option in (insert OS here) to allow me to say "Now, I'm not going to add any more new hardware, or move anything around, so stop trying to detect EVERYTHING every time I start up, and write some sort of static image, and boot in 3 seconds next time".

    What is Windows (in particular) *doing* during that time? I have a processor that can do hundreds of millions of instructions per second, it can't really be actually processing during that time...
  • CDW has old/high prices... I don't think they've sold a single one. Contact sales@ininet.com for accurate info.

    (sales hat off now)
    -JPJ
  • see the Rio / Vista [umich.edu] work by Pete Chen, Dave Lowell, et al. which won best paper at SOSP several years ago...
  • Well, for the truly "Mission Critical" you get a UPS that's directly wired into the computer room wall outlets, with hot-swap components and dual parallel inverters.

    They make those, you know.

    "All those tubes and wires and careful notes!"

  • TRAM is an interesting idea. Without taking anything away from it, let me mention some work done in the 70s.

    Multics had the same problem after crashes: it took a long time to bring the system back up because the "salvager" had to check every "VTOC entry." (fsck/inode in Unix terms)

    We re-wrote the file system to do the necessary checking on each use of the directory and VTOC data, and eliminated the salvage during boot. Everything still got checked, but only as needed. Boot times went from hours to minutes, and the system was much more solid and reliable.

    See http://www.multicians.org/thvv/marking.html
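    The check-on-use idea above can be shown in miniature. This toy sketch is my own illustration (all class and field names are invented, and a single `checksum_ok` flag stands in for real VTOC validation): entries are verified on first open instead of in one big boot-time pass, so boot cost drops to zero and the checks are amortized over normal use.

```python
# Toy illustration of check-on-use: instead of validating every entry
# at boot (fsck-style), each entry is validated the first time it is
# opened.  All names here are invented for illustration.

class Entry:
    def __init__(self, name, checksum_ok):
        self.name = name
        self.checksum_ok = checksum_ok
        self.validated = False

class LazyFS:
    def __init__(self, entries):
        self.entries = {e.name: e for e in entries}
        self.checks_run = 0

    def open(self, name):
        e = self.entries[name]
        if not e.validated:          # pay the cost only on first use
            self.checks_run += 1
            if not e.checksum_ok:
                raise IOError("repairing %s on the fly" % name)
            e.validated = True
        return e

fs = LazyFS([Entry("a", True), Entry("b", True), Entry("c", False)])
fs.open("a"); fs.open("a")           # second open is free
print(fs.checks_run)                 # -> 1
```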

  • I was at a talk by Stephen Tweedie (one of the developers on Ext3). He was saying that one of the recent things he was working on was storing the journalling data on a separate device from the hard disk where the data is to be stored.
    Initially he has tried storing the jornalling data in RAM to test performance but the plan is to store the journalling data on a NVRAM card that he was waiting to be delivered. This will increase speed of synchronous writes, like with databases and sendmail and give all the benefits of journalling.

    --
    Steven Murdoch.
  • Many RAID cards have onboard RAM. I have an AMI 1500 controller sitting next to me that takes a standard PC SDRAM DIMM as a cache chip. It has a rechargeable Li-Ion battery to keep the info on the chip persistent. It does read-ahead as well as write-back caching.

    As far as I can tell it does do what it is supposed to. In the event of a power failure, it contains the last bits of info that were sent by the OS in RAM, and will reconcile what actually had been written to disk before the machine went down.

    The problem with this, of course, is that the OS (in this case SCO openserver, not my choice, need to support legacy app) has this nasty habit, like all Unices, of caching data in system RAM by default. I've turned down or off as many cache settings as I can in the SCO kernel parameters, but there's still a little bit still going on.

    Performance is a little poorer than "normal" but the crash-reboot-fsck time is better than it was, and I've lost virtually no data with the RAID configuration, which was my goal.
    However:

    "With enough memory, it could also be configured to permanently cache the MBR, boot loader, kernel, init scripts, daemons... Make it big enough and the whole swap partition is in there, too. Think how fast something like this could boot."

    Yes, but doesn't that contradict what you said about an "entirely OS-independent disk controller card"? Seriously. Wouldn't the card have to understand whatever filesystem you were dealing with, and hence the operating system (somewhat)? The only way I can see around it is having the card remember what sectors it commonly read during the first X seconds after the first disk read. I'm not saying it's a bad idea, as I like the thought of having 20 second boots, but how would you do something like that and stay "OS independent"?

    "All those tubes and wires and careful notes!"


  • You should read this Bob The Angry Flower comic:

    http://www.angryflower.com/bobsqu.gif [angryflower.com]

    How about this TRAM stored on the disk drive, and have the OS simply tell it the dependency DAG? It can perform its own write reordering (probably more efficiently since it knows where the disk head really is and all the specifics about its geometry) and then finish off the queue when first getting power after a power loss.
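    A minimal sketch of that suggestion, under the assumption that the OS can describe dependencies as "block B must not hit the platter before block A" (the record layout and function name are mine): a topological sort, here Kahn's algorithm, gives the drive one legal flush order while leaving it free to pick among the ready blocks.

```python
# Sketch of the dependency-DAG idea: the OS hands the drive a DAG
# among pending writes; the drive may reorder freely as long as a
# write's dependencies reach the platter first.  Kahn's algorithm
# yields one legal flush order.  Purely illustrative.

from collections import deque

def flush_order(deps):
    # deps: block -> set of blocks that must be written before it
    indeg = {b: len(d) for b, d in deps.items()}
    children = {b: [] for b in deps}
    for b, d in deps.items():
        for p in d:
            children[p].append(b)
    ready = deque(sorted(b for b, n in indeg.items() if n == 0))
    order = []
    while ready:
        b = ready.popleft()
        order.append(b)
        for c in sorted(children[b]):
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return order

# the inode block must precede the directory entry pointing at it
print(flush_order({"data": set(), "inode": {"data"}, "dirent": {"inode"}}))
```

    After a power loss, the drive would simply resume the remaining queue, since everything before a block in the order is already on disk.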

  • I find it odd that in today's world, I still can't get a default distro of Linux or any free *nix with a journaling filesystem preinstalled as the default, and with a fs driver considered to be STABLE or RELEASE. Yet, Windows NT has had a full journaling filesystem since NT 3.x....

    Since NTFS is journaling, supports reparse points, extended meta data, and more, I look forward to the day when the NTFS fs driver for Linux is stable enough to boot from, then I can have one *stable* filesystem across all my disks.

    I might add that a boot-time chkdsk on a rather large partition (chkdsk on NT == fsck on Linux) takes less than 30 seconds, many times even less. Contrast that to your average ext2 or FAT32 system, which can take many minutes to check.

    -- russ

    -
    The IHA Forums [ihateapple.com]
  • >With enough memory, it could also be configured to permanently cache the MBR, boot loader, kernel, init scripts, daemons... Make it big enough and the whole swap partition is in there, too.

    uuuuh, I'm gonna upgrade the controller with 256 Megs of RAM to hold my Pagefile. Yeah, sure!
    And ooooh, yeah, the pagefile will be stored, even if the system fails! COOL!!!

    Man, everyone here is bothering about boot time.
    I got tears in my eyes when I discovered the "autotune" option in LILO, speeding up the boot by 6 lousy seconds. Later I decided that the boot time is rather irrelevant cause I reboot about once a week.
    Boot time is relevant to MS! Yesterday my Win95 box ...*uuuh, headache*
    I agree that controller-cache would be a good idea for Windoze boxes.

    Once an OS is up and running, it will cache everything, no need for crappy controller-cache.
    Anyway, we were talking about write-cache here, this makes good sense to speed things up!
  • Take a look at the Network Appliance devices. They write first to NVRAM and then say that the write was committed. Then they can write the data to actual disk at leisure. The NetFilers don't require any changes to the kernel or special drivers, so I don't know why the author thinks his idea would. Just implement the NVRAM onto the controller.

    Chris

  • Dru writes: "This is an article about a hardware technology that is largely unknown in the new Unix community.

    Perhaps in the UNREAD (which I guess is fairly large, hurmf) portion of the community. Chapter 8, section 2 of The Design and Implementation of the 4.4BSD Operating System talks about this idea on page 284. It refers to research done by Moran et al., 1990. The references at the back of the chapter refer to "Breaking Through the NFS Performance Barrier," Proceedings of the Spring 1990 European UNIX Users Group Conference, pages 199-206.

    So there you go, there's TWO ways that we could have heard about this. I doubt anyone here got that first hand, but the 4.4BSD book is a fairly common book to have for those who are interested in the innards of an OS.

  • Yes, as a matter of fact, it is.

    First, it would be difficult to make those caches (which in most cases are already there) persistent, which defeats the whole point of using it for fault tolerance.

    This is why you place it in an arbitrary open persistent memory cache.

  • I think part of it is processing the registry. It also has to load all of those .VxDs that control your hardware. Then it has to initialize all of it.
  • I don't know, but I think the author may have been writing to a different audience. It just didn't "sound" right when I read it.... maybe posting it to Slashdot was a bad idea for the poster. The idea of TRAM sounds rather ridiculous honestly, except in something like a mainframe or midrange system where the OS and the hardware are designed by the same people. But even so, it still wouldn't be useful for what the author intends.

    Like many other posters have noted, it seems like the author really doesn't have any real-world experience with this environment, but just thought "Hey, this would be useful if it worked..."
  • Seriously...do they use some special proprietary ram? 8G of cdram in 512 meg chunks is only a couple of K. Hardly justifies another $24k tacked onto it. Is this another example of "charge what the market will bear"? I understand there are development costs and the like, but _geez_ $26k is _a lot_ of money. Don't give me an answer like "they are not intended for home use, so they charge more", because that's a bullshit reason (even though it's done all the damn time).
  • One last idea: plug the UPS into a UPS into the wall. Mua

    That may have been what the manual from the cheap UPS you got at Best Buy for $40 said. If you ever buy a nice UPS, it should clearly state in the manual _NOT_ to do this: it does more harm than good. Kinda like wearing 2 condoms at the same time.

  • Yes, but doesn't that contradict what you said about a "entirely OS-independent disk controller card"?
    No. All the card has to know is that these are the first x sectors that the computer asks for when it boots. It need not (and, I submit, should not) have any knowledge whatsoever of the structure of the filesystem(s) they comprise. All you need is a way to tell the card what you wanted done: Some command that could be sent over the existing bus, but intercepted by the card, to allocate the first N sectors of boot tracks, and hard-map M cylinders for use by a particular partition...

    Both boot and swap caching would be most helpful on a [lap|palm]top machine, where epic uptimes are irrelevant. Normally, low-power RAM that is most efficient for battery-backed use is slower than the RAM we're accustomed to using. But it's still orders of magnitude faster than disk access.

  • Simple, All modern servers have at least 2 power supplies, plug each redundant (usually hot-pluggable) power supply into a different UPS.

    This is how we handle all of our critical servers.

    Now you are protected against Power Failure, UPS Failure, and Power supply failure.

    -- Keith Moore
  • The point of the exercise is to make the system more reliable.
    Exactly how does this require that the TRAM have any knowledge of the OS(es) using it?

    I am quite well aware of the issues involved in journalling transactions to disk. But once the caching controller has staged the impending writes in its battery-backed RAM,

    all the data gets backed up on a persistent medium
    just like everyone wants, because the RAM itself is (relatively) persistent. The only downside at all is failure of the battery or the RAM; if you're doing something that critical, it ought to be replicated to multiple servers anyway.
  • ECC.. Or better yet, for expensive boards, use a form of mirrored RAID (in addition to ECC). We're using incredibly inexpensive memory to work with roughly 32Meg of data (anything more is probably asking for trouble.. we're not building a caching system as was pointed out. In fact the system should stall if the buffer starts filling up). If video cards can handle twice 32Meg pipes, a "TRAM" controller should be able to as well.

    -Michael
  • One minor difference, HPFS was/is fast. NTFS runs slower than FAT32.


    -- Keith Moore
  • In RedHat, disable kudzu. Reenable when you add hardware. Simple.

    -- Keith Moore
  • I was going to bring this up. The device you are thinking of was manufactured by Applied Engineering, and actually let you put 2 ram cards into the IIgs. I was selling "Octo-Ram" ram cards at the time, capable of taking 8 Megs of memory. I populated 2 cards, created an 8MB ram disk (the IIgs could only address 8MB, even though the processor could handle 16MB) and copied 1/4 of my 20MB hard drive into it. Pretty wild what you could do with limited memory and drive space....
  • At face value I think you missed much of the point. The OS can't know the intent of the "application". Just because I've performed a write doesn't mean that only saving that much will have any meaning after a crash. I don't really understand how the author intends to deal with the "begin" and "end" transactions unless they're providing a journalling service (which I'm not familiar enough with). At the very minimum, however, a transaction would be the modification of an inode/directory element and then at least the initially provided data. The fsck is reduced because you'll never have a disk with only part of this info. Beyond that, an explicit fsync might provide enough info to the OS to say that everything buffered up till the fsync should be part of that transaction. Is it "possible" to design in such a way that DB writes can write out everything as one piece, call fsync, and then have the OS guarantee an all-or-nothing write?

    If you mean to say that the write-back cache can be used for application meta-data (namely the transaction support), then that's an interesting (albeit proprietary) avenue to explore. But I can imagine that regular flat SCSI- and IDE-based systems could benefit as well (since there are definitely web servers / DBs out there without RAID). Why not push for a product that works with all drives and fights to become ubiquitous?

    -Michael
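    The fsync-as-barrier idea above can be sketched with ordinary POSIX calls. The calls (`os.write`, `os.fsync`) are standard; the grouping policy, i.e. treating everything up to the fsync as one unit, is the hypothetical part:

```python
# Sketch of fsync as a transaction boundary: buffer a batch of writes,
# then fsync once so everything up to the barrier reaches stable
# storage together.

import os, tempfile

def committed_write(path, records):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    try:
        for rec in records:
            os.write(fd, rec)      # buffered in OS / controller cache
        os.fsync(fd)               # barrier: "everything above is one unit"
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "journal")
committed_write(path, [b"begin\n", b"update row 7\n", b"end\n"])
```

    An all-or-nothing guarantee would then mean: after a crash, the file contains either none or all of the records between two barriers, never a prefix.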
  • But of course they want you to buy their ram, at $7,779.60 for 1GB!

    huh? 256MB RAM costs $180, so 1GB is $720. If you are talking about a single 1GB module, that's not $7k either. Crucial [crucial.com] sells it for $2429.
    ___

  • i don't give a damn about DBs. gimme one of those cards for my linux box and i'll just use it for everything needed to boot and the rest as a swap partition. this would make my linux box boot in like 5 seconds... awesome
  • Not True.

    Data ONTAP is written in house. The only part from BSD is some of the network stack, and that has been changed heavily as well.
  • by ansible ( 9585 ) on Wednesday December 27, 2000 @08:11AM (#540095) Journal

    This is what the Network Appliance boxen do to speed NFS writes.

    All NFS write transactions are committed to NVRAM first, so that they can be acknowledged. Then the writes to disk are sorted and blasted out. Very efficient, very fast.

    It is this NVRAM (as well as using a modified RAID-4 on top of the WAFL filesystem) that makes a NetApp much faster (yet still safer) than most other NFS servers. I've often thought about creating just such an NVRAM board for a PC, so that I could do the same thing with my Linux fileservers.

    Note that the NetApp implementation caches NFS requests, not filesystem-level data. Say I'm changing 1 byte in a block. If I buffer filesystem data, I have to cache the whole block. If I'm buffering the NFS request, it'll be much smaller.

    Buffering (in NVRAM) the log data might work well for something like ext3.
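    A rough size comparison behind the parent's point. The record layout (8-byte offset + 4-byte length + payload) and the 4 KB block size are illustrative assumptions, not NetApp's actual on-NVRAM format:

```python
# Logging the NFS request itself (offset, length, payload) is far
# smaller than caching the whole modified block.  Numbers and the
# record layout are illustrative only.

import struct

BLOCK = 4096

def request_log_size(offset, payload):
    # hypothetical record: 8-byte offset + 4-byte length + payload
    return struct.calcsize(">QI") + len(payload)

one_byte_change = request_log_size(offset=123456, payload=b"x")
print(one_byte_change, "bytes logged vs", BLOCK, "bytes for the block")
```

    A one-byte change costs 13 bytes of NVRAM as a logged request versus 4096 bytes as a cached block, which is why request-level buffering stretches a small NVRAM much further.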

  • Very roughly.

    In series:
    Source failure: protection = 2
    UPSfar failure: protection = 1
    UPSnear failure: protection = 0
    near/far measured from view of the server.

    In parallel:
    Source failure: protection = 2
    UPS failure: protection = 1

    See, there are no protection=0 scenarios except double UPS failure. (and serial is equally bad for that).

    Use parallel UPSs for _hot swappable_ UPS setups!
    It's the only truly redundant way for 24/7 systems.

    FP.


    -- Real Men Don't Use Porn. -- Morality In Media Billboards
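    The protection counts above can be reproduced with a quick enumeration. The component names are made up, and "layers" here just counts the batteries still standing between the server and darkness after a given failure:

```python
# Enumeration backing the series-vs-parallel argument: how many
# battery layers remain after each single-component failure.

def layers_series(failed):
    # power chain: source -> UPSfar -> UPSnear -> server
    if "ups_near" in failed:
        return 0               # nothing left between server and darkness
    if "ups_far" in failed:
        return 1               # only the near battery remains
    return 2

def layers_parallel(failed):
    # each redundant supply has its own UPS; every surviving UPS counts
    return len({"ups_a", "ups_b"} - failed)

print(layers_series({"ups_near"}), layers_parallel({"ups_a"}))  # -> 0 1
```

    Series has a protection=0 scenario from a single failure (the near UPS); parallel only reaches 0 when both UPSs die, which matches the tables above.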
  • (giggle) Jeezuz, do they still make this stuff? Last time I heard about this, I was speccing 2114s (read: back in the early 80s) and I ran across this "QSTAT" (quasi-static) stuff (not for 2114s, maybe 6116s?). Exactly the same thing, DRAM with a built-in controller on the chip. Plus ça change...
  • This is an excellent question.. I was recently researching SS disks at work and found them to be much too expensive to be practical. The units from the big manufacturers (SolidData, Imperial Tech, Bitmicro) all come in at $15k-$20k for a 1G drive. Absurd.. It seems to me that building one of these is fairly simple.. Maybe a junior year EE project. In fact, the mechanics of it should be much simpler than a conventional disk drive unless I am missing something...
  • by pjrc ( 134994 ) <paul@pjrc.com> on Wednesday December 27, 2000 @09:26AM (#540099) Homepage Journal
    You say this like it's a bad thing! It's relied on every day.

    DRAM (tiny charge requiring sustained refresh operations) is relied upon during normal operation of the computer. The proposal here is to also rely upon DRAM during and after the events that lead to a crash.

    To respond specifically to your examples above, your battery-supported laptop and palm pilot memory is reliable, but what happens if they crash? Is your laptop memory intact after something goes wrong? The microcontroller in a palm has no MMU, so if something goes wrong, it can easily trash the memory.

    Regarding data being sent as "a clump of electrons moving along a wire" (propagation of a change in voltage potential would be more accurate)... that just simply isn't the way it's done. Communication takes place using protocols which verify that the data has been properly received. Newer ATA transfer modes use a CRC, and even with the older modes, status bits are provided to verify that the data was properly written. It would be horribly unreliable to send a "clump of electrons" and hope that the data is received and stored properly.

    Now, regarding the comment:

    I almost think you're just looking to spread some FUD.

    FUD, Fear, Uncertainty, and Doubt is a marketing tactic, generally used by an established vendor when their well-known product is inferior and more expensive, and the best way to convince a customer to buy the established product is to scare the customer away from the competitor.

    Why would I do that? I don't have any vested interest in the current practices. I'm not participating in the development of any journaled filesystems. I do a bit of freelance hardware design and small quantity sales, so if I thought this was a really good idea, I might go after making such a card and kernel patch.

    But I believe the idea is fundamentally flawed.

    During the unpredictable events that will lead to a crash, and the unpredictable behavior immediately following a crash, DRAM is going to be a much less reliable way to be holding data. It doesn't matter how well DRAM works during normal operation. DRAM has proven quite reliable, as long as the computer and memory controller operate properly.

    Even with a specially designed memory controller (as a standard one won't do), it is quite risky to rely upon DRAM during a crash. Call it FUD if you like, but DRAM just isn't a reliable place to have data when a machine crashes. You say FUD now, but if anyone were to actually make such a card, the term I'd use would be Snake Oil.

  • do you happen to know to what extent BSD code is used... and what flavor ( F/O/N BSD or BSDi ) ?
  • yeah, and this:

    Bob the angry flower: plural's [angryflower.com]

    (yeah I know plural's is wrong, but that's the name of the strip, okay?)
    --
    Slashdot didn't accept your submission? hackerheaven.org [hackerheaven.org] will!

  • There are better ways to fix this problem, such as put a battery backup on system RAM, so that the OS won't need to be reloaded at all; it can pick up where it left off when power to the CPU comes back on.

    And if computer power supplies were also designed for battery-backup (dual-voltage, can run on either 120VAC or say 24VDC) then the complexity of the UPS (converting DC to AC just so the computer can convert it back to DC again) would be eliminated, and the result would be more reliable.

    There should be a standard connector so an external lead-acid battery can be plugged into the back of every computer, and the power supply would be responsible for keeping it charged, and switching to using it when the power fails. (But there should be a switch to turn off the charging feature in case you have several computers hooked to one large battery with a separate external charger.)

    Then maybe when the power fails the OS would be notified, and it could finish doing any uncommitted writes before powering off the disks and CPU; the battery would continue to backup the memory for a very long time since it would be such a small load compared to the whole system. That way the two battery-backup systems could be combined. (Or not... maybe a separate memory backup would be more reliable.)
  • I think that's what I was getting at in my prior post. If the controller can cache my commonly accessed sectors at boot-time, that's fine. But as for "M cylinders for a partition", for traditional partitions, that's fine. For things like UnixWare, Openserver and *BSD, it'd have to know about the sub-filesystem partitions they use, would they not?

    "All those tubes and wires and careful notes!"

  • Note, no free unix today has, at least to the point of people trusting their main database on it, a production-ready journaled filesystem.

    Linux+ReiserFS.

    I would trust ReiserFS to keep my main DB safe, I've been using ReiserFS with Linux for some time now with no data loss. (and many power failures and some crashes due to a particular closed-source XF86 video card driver)

    -- iCEBaLM
  • If you are talking about a single 1GB module, that's not $7k either.

    No, I am talking about the Platypus ram they want you to use.


    echo $email | sed s/[A-Z]//g | rot13
  • For things like UnixWare, Openserver and *BSD, it'd have to know about the sub-filesystem partitions they use, would they not?
    Not that I can see. Ultimately, all of the levels of abstraction map to commands to the HD to read/write one or more LBA sectors. If I want to reserve a specific range of sectors in the cache, all I have to do is use the OS tools that determine what those sector numbers are, and communicate that to the controller. If this means that a device driver would likely be written for a TRAM-enabled controller, sure.

    But it's important to keep these things straight: Drivers written for an OS (hopefully source code that can compile under any *nix) may well benefit from knowing about hardware. The controller never needs to know anything about the OS. (And let me reiterate, shouldn't know. Can you say "Win[Modem|Printer]?" Yuck.)
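    The contract described above, driver knows filesystems, controller knows only sectors, can be sketched like this. Everything here is invented for illustration: the command name, the class, and the LBA ranges the driver supposedly computed from fdisk/disklabel output:

```python
# Sketch of the OS-independent contract: the OS driver resolves
# partitions to LBA ranges and tells the controller "pin these";
# the controller only ever sees sector numbers.

class TramController:
    def __init__(self):
        self.pinned = set()   # LBAs held permanently in battery-backed RAM

    def cmd_pin_range(self, first_lba, count):
        # hypothetical vendor command the OS driver issues once at setup
        self.pinned.update(range(first_lba, first_lba + count))

ctrl = TramController()
# the driver computed these ranges from fdisk/disklabel; values made up
ctrl.cmd_pin_range(0, 63)        # MBR + boot loader
ctrl.cmd_pin_range(2048, 4096)   # kernel + init
print(len(ctrl.pinned))          # -> 4159
```

    All filesystem knowledge lives on the driver side, so the same card works under any OS whose driver can translate "my boot files" into sector numbers.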

  • This sounds like a lot of standard techniques already in use in storage today. Many storage controllers (FibreChannel/SCSI, RAID/non-RAID) support battery backup power that will let them finish writes. Most use internal write and read cache and include lots of memory. Transaction support already exists in some ways: if I send a command to write 1024 bytes to a disk that does 512-byte writes, a good controller will attempt to make the 1024-byte operation atomic even though it is internally broken up into multiple 512-byte writes.

    File systems fragment the pieces of a file and thus have to issue multiple non-sequential commands for a given operation. This is where the problem erupts. Some controllers support combining multiple operations into one call, but this is usually done without FS knowledge; it just fills a buffer of ops and then dumps it. RAID already handles the issue of non-sequential operations by hiding them. RAID may present a 1-terabyte drive as sequential data when it is really striped across various areas of various drives. When RAID is told to write 1 sequential GB out, it is done as one operation, even though it involves many non-sequential writes to multiple locations on multiple disks.

    The trick is to put file system support in the storage controllers or RAID systems. With file system support, they will attempt to make sure that physically non-sequential writes that are sequential at the file level are completely written out. This doesn't happen much, for various reasons, but it does exist. For example, some RAID systems support running the file system on their system directly.

    What does this do that is so different? At $300, it may be less expensive than similar solutions but I do not see "how it differs from everything out there".
