Hardware

iSCSI Moves Toward Standard

EyesWideOpen writes "The iSCSI technology, which allows computers to connect to hard drives over a network connection such as a company Ethernet network or the Internet, requires only minor changes before the Internet Engineering Task Force endorses it as a formal version 1.0 standard. A final round of comments has been completed on the technology, according to the Storage Networking Industry Association, the subgroup that led the creation of iSCSI, and as a result companies can now start building iSCSI products."
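
The idea the summary describes (block-level disk access carried over ordinary TCP/IP) can be pictured with a toy sketch. This is not the iSCSI wire protocol, just a minimal stand-in for the concept: a server that exports a local image file as numbered 512-byte blocks, and a client that fetches one block over a socket. The image file name, the 8-byte request format and the block size are all made up for the example; only the TCP port number (3260) is borrowed from iSCSI.

```python
# Toy illustration of "a disk served at the block level over TCP".
# NOT the iSCSI protocol -- just the concept.
import socket
import struct
import threading
import time

BLOCK_SIZE = 512
IMAGE_FILE = "disk.img"            # hypothetical backing file standing in for a disk
ADDR = ("127.0.0.1", 3260)         # 3260 is the TCP port registered for iSCSI

def serve():
    """Answer 8-byte block-number requests with the corresponding block."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(ADDR)
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, open(IMAGE_FILE, "rb") as img:
            while True:
                req = conn.recv(8)                 # toy code: assumes the request arrives whole
                if len(req) < 8:
                    break
                (lba,) = struct.unpack("!Q", req)
                img.seek(lba * BLOCK_SIZE)
                conn.sendall(img.read(BLOCK_SIZE).ljust(BLOCK_SIZE, b"\0"))

def read_block(lba):
    """Fetch one block from the 'remote disk'."""
    with socket.create_connection(ADDR) as c:
        c.sendall(struct.pack("!Q", lba))
        data = b""
        while len(data) < BLOCK_SIZE:
            chunk = c.recv(BLOCK_SIZE - len(data))
            if not chunk:
                break
            data += chunk
        return data

if __name__ == "__main__":
    with open(IMAGE_FILE, "wb") as f:              # build a tiny two-block test image
        f.write(b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE)
    threading.Thread(target=serve, daemon=True).start()
    time.sleep(0.2)                                # give the listener a moment to come up
    print(read_block(1)[:8])                       # expected: b'BBBBBBBB'
```
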
  • by the Man in Black ( 102634 ) <jasonrashaad@@@gmail...com> on Friday September 06, 2002 @08:40AM (#4206024) Homepage
    ...I give it a week or two before someone buys a patent for "Accessing digital storage devices via a network" and sues.

    Jeesh.
  • I've noticed that data-transfer cabling seems to be converging. We have Serial ATA, iSCSI, 10/100/1000 Ethernet, USB, FireWire and also HyperTransport. These are all attempts to simplify the cabling while increasing speed, with the exception that the current HyperTransport implementations are all hardwired on motherboards. Personally, I think it would make life easier if there were one thin 2-to-4-wire cable that was usable by all electronic devices, both external and internal. *Sigh* It will probably be another 10 years before it's actually a single cable, if ever.

    • This is not going to happen. (or at least should not)

      There are WAY too many people who are not bright enough to know where to hook up the cable. You will have SCSI devices being plugged into a floppy port, CD-ROM drives into sound cards. You see my point?

      While standardization is good, stupid people are bad.

      • There are WAY too many people who are not bright enough to know where to hook up the cable. You will have SCSI devices being plugged into a floppy port, CD-ROM drives into sound cards. You see my point?

        We see your point, but I think you missed the OP's point. (Or at the very least, his implication.)

        In a magical happy land with gumdrop houses on lollypop lane, it wouldn't matter where these bits and pieces got plugged in. Your computer would have one or more Ports on the back. Got a monitor? Plug it into a Port. Got an external drive? Plug it into a Port. Got network access? Plug it into a Port. All the Ports are the same, and figuring out which device does what is handled in software. So it doesn't matter where you plug things in.

        I agree with you that it won't happen. I'm not completely sure I agree that it shouldn't. I think it probably could, but like many things, the expense and overhead seem disproportionate to the scale of the problem.
        • I agree with your point about many devices sharing the same kind of port, with the port automatically adjusting to the service/protocol needed by the device at the end. In fact, there is a standard for this: USB. (Universal.) It supports almost everything that can be added, although it currently does not support the bandwidth requirements of some peripherals.

          I did think about the OP's point, but wanted to make my own. I feel like I am surrounded by MCSEs. (Want to read something funny? An MCSE in my office asked me how to ping a server. Need I say more?)

          • In fact, there is a standard for this: USB. (Universal.) It supports almost everything that can be added, although it currently does not support the bandwidth requirements of some peripherals.

            You're reading my mind. I was thinking exactly that the instant I hit the "Submit" button. USB does seem to have a lot of the characteristics of a universal port, with some exceptions. It has nowhere near the signal bandwidth necessary to drive a monitor, for example.
            • Yeah, I am feeling kinda smart today, considering I spent the majority of my evening last night replacing the rotors and brake pads on my car, then washing and waxing it, and finally, at 23:30 when I was done, I played GTA3 for about 4 hours and was late to work.

              Moment of dumbness and laziness: USB is Universal Serial Bus, right?

            • That's why you have a tablet that runs RDP/X over 802.11b. The only problem is games and video... well, it was a nice idea. I don't foresee video being able to be carried over anything but multichannel audio/signalled digital/fiber cable in the near future.
              • by Anonymous Coward
                Um, doesn't FireWire already have the capacity to transmit uncompressed digital video? Granted, it's limited to NTSC, PAL and SECAM resolutions, but future incarnations could possibly handle high-resolution (e.g. monitor-scale) digital video, and multichannel audio at the same time.

                What we really need is something like SGI's XIO, but foolproof, and not prone to contamination from touching the contacts...
                • Um, doesn't FireWire already have the capacity to transmit uncompressed digital video?

                  It has the bandwidth (SDI only requires 270 Mbps; FireWire is 400 Mbps) but I don't know if anybody has used it for uncompressed video. People use it all the time for DV-compressed video, of course.

                  But, as you noted, that's merely TV-resolution data. DVI, on the other hand, can handle up to 5 Gbps, if I remember correctly. That's a big difference.

                  What we really need is something like SGI's XIO...

                  No, I don't think so. XIO uses a hundred pins. No hundred-pin interface could ever be that reliable. What we really need is a super-fast serial connection, like FireWire-only-a-lot-faster. With the price of fiber optics coming down steadily, I wonder whether it would be practical to try to design a rugged two-strand cable with roughly the same diameter as a FireWire cable, or less. That effectively removes the bandwidth problem from the connector and puts it into the transceivers, where it ought to be.
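
                  For what it's worth, those bandwidth claims hold up on the back of an envelope; the frame sizes and refresh rates below are just illustrative picks, and blanking intervals, audio and protocol overhead are ignored:

                  ```python
                  # Rough bandwidth check for the claims above.
                  def mbps(width, height, fps, bits_per_pixel):
                      return width * height * fps * bits_per_pixel / 1e6

                  sd  = mbps(720, 480, 30, 16)    # uncompressed SD (4:2:2): ~166 Mbps, fits in FireWire's 400
                  mon = mbps(1600, 1200, 60, 24)  # a 1600x1200 desktop at 60 Hz: ~2765 Mbps, DVI territory
                  print(f"SD video ~{sd:.0f} Mbps, 1600x1200@60 ~{mon:.0f} Mbps")
                  ```
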
      • Umm... let's see now. You can plug a hard drive, Ethernet hub, camera, scanner, mouse and keyboard into your USB port. FireWire can handle all that and more, and at higher speeds. And your concern was... what? That your PeeCee might start to look like a Mac in the back? As Steve Martin put it: Well excuuuuuse me.
        • FireWire is really a good solution for the One Port To Connect Them All.

          Keyboards and mice may be hard to find right now, but you can plug anything in with FireWire. As mentioned, all those devices will work with FireWire, but one is missing: monitors. They aren't commonly used, but televisions and monitors can accept FireWire signals. Some video monitors accept FireWire connections from cameras (Sony makes them, I believe). If the computer had particularly low power requirements, even the power cable could be FireWire... not likely, though. :)
    • Some devices require too much bandwidth to use wires without some serious shielding. Any protocol that pushes extreme bandwidth would require better shielding than those that communicate at slower relative speeds.

      Fiber holds some promise, but can't supply the electrical power that some cabling systems do. If you try to create a cable that has everything for everyone, it gets expensive to manufacture (try comparing the prices of phone wiring, Cat 5 Ethernet and optical fiber; I don't even know of a cable that has both copper and optical in it).

    • Well, iSCSI is a protocol, not a cabling specification.

      At least that's my uninformed assessment.
  • by IGnatius T Foobar ( 4328 ) on Friday September 06, 2002 @08:48AM (#4206068) Homepage Journal

    I work at a mid-size hosting facility, and we've done quite a bit of experimentation with iSCSI. In my opinion it's not ready yet. Either that or it's just a bad idea, full stop.

    We do quite a bit with our SAN -- there are a coupla IBM 2105 ESS ("Shark") boxen in the back of the data center with many terabytes of disk online. It's all about Fibre Channel. At least as fast as SCSI, effectively faster when you have all sorts of cache running on the storage side, and you have the flexibility to define exactly how much disk goes to what server, and you can add more dynamically without a power down, etc.

    Unfortunately, Fibre Channel is expensive. It requires expensive host bus adapters and even more expensive switches. And of course it runs over fiber optic cable, which isn't exactly penny kit. So the industry decided to try running it over Ethernet.

    Now there are iSCSI-to-Fibre gateways, such as Cisco's 5420 Storage Router (which we've evaluated), but there are just problems in general with running block level storage over a TCP/IP network...
    • For one thing, it's only as reliable as your network. If you have a network problem such as a down switch/hub etc, you lose your disks immediately.
    • Unlike SCSI and Fibre Channel, you can't boot from an iSCSI volume. This is because your operating system has to be loaded, and your TCP/IP stack initialized, before you can load the iSCSI driver.
    • Most operating systems want to load their storage drivers before they load their networking drivers. Doing it the other way around challenges all sorts of assumptions made by various system software out there. Sounds trivial, but again, we've evaluated it, and the result ain't pretty.
    • By putting block level storage on your LAN, you've increased the capacity requirements by several orders of magnitude. To get any reasonable performance you're going to need Gigabit Ethernet everywhere -- and if you're going to make that kind of investment, you might as well be doing Fibre Channel.

    That's why our iSCSI stuff is just sitting around doing nothing right now.

    The only place I can see iSCSI being used at this time is for really temporary quick-and-dirty setups, such as a programmer needing another 100 GB online for a one-week project. But even then, NAS seems like a better idea.
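
    To put rough numbers on the bandwidth point above (raw link rates only; TCP/IP and iSCSI framing shave a further slice off the Ethernet figures, so treat these as approximations rather than benchmarks):

    ```python
    # Approximate payload rates of the transports discussed in this thread.
    links_MBps = {
        "100Mb Ethernet (raw)":   100 / 8,   # 12.5 MB/s -- swamped by a single fast array stream
        "Gigabit Ethernet (raw)": 1000 / 8,  # 125 MB/s
        "1Gb Fibre Channel":      100,       # ~100 MB/s after 8b/10b line coding
        "Ultra160 SCSI bus":      160,
    }
    for name, rate in links_MBps.items():
        print(f"{name:24s} ~{rate:6.1f} MB/s")
    ```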

    • I'd agree with almost all of what you say. iSCSI, for a number of reasons, isn't going to be what you boot your OS from (notably, as you say, it moves reliability problems to the network, and we all know how well networks work :) ).

      However, the point about them not being bootable is rather moot, I feel. For many of the reasons you yourself mention, iSCSI right now ain't gonna be what you put your system on; it's mass shared data storage, it's prone to network problems, etc. etc. Most servers are going to load their system from local disks and then, and only then, access the data on the iSCSI.

      Just to put a bit of perspective on the 'not bootable' and 'not reliable' and 'disk before net storage driver' comments.
      • You may need a TCP/IP stack loaded to access iSCSI right now, but if it's an important adoption factor, how long do you think it would be until a NIC comes out with TCP/IP built-in, allowing the card to handle the communication for booting iSCSI?
    • And of course it runs over fiber optic cable, which isn't exactly penny kit. So the industry decided to try running it over Ethernet.

      I agree that iSCSI is kind of a stupid idea, but all the fibre channel I have ever worked with is copper, two pair, that terminates in a D-sub connector with four pins. I didn't think anyone bothered running it over fiber anymore.
      • Are you kidding? When was the last time you used Fibre Channel? It's mostly optical now. All the new HBAs come with optical GBICs.
        • Are you kidding? When was the last time you used Fibre Channel? It's mostly optical now. All the new HBAs come with optical GBICs.

          You're both wrong. FC is neither "mostly optical" nor "mostly copper." Devices like HBAs and switches that use GBICs (modular media adapters) can be either optical or copper depending on the GBIC used, and switched on the fly. You choose optical or copper cables depending on your environment. Copper cables have shorter runs than optical (they can only run 30 feet or so, as opposed to miles for optical) and they are much bulkier. So in a data center where you have literally hundreds of FC cables, you'd probably choose optical to keep the physical size of the cable bundles from getting out of control. For connecting two devices in a rack, you can choose copper cables.

          I think the "fiber optic is expensive" thing is a myth, though. I can't say for certain, but I think I remember that the outfit that sells us our patch cables sells 4-wire copper cables and optical cables at roughly the same price.
          • I think fiber is expensive because of the adapter, not the cabling. The cheapest GBIC I've ever seen is $90, and that doesn't include the NIC (or switch port). Fiber gig NICs are easily $150, where copper gig-E NICs are around $40 complete.
      • I work with multiple storage vendors' equipment, and EVERYBODY tests using fiber-optic cables from the host/server to the switch/storage. There are, however, a couple of storage vendors who run copper from the storage controllers to the actual disks. The main issue with copper is length -- fiber-optic cables are the only media which can actually attain the distances specified in the specification. (See the FC-PH spec for details.)
    • There are many possibilities for this that aren't available yet. Adaptec was mentioned as building an ASIC to handle this, but they are behind due to the IPsec requirement.

      Once Adaptec and other storage vendors get their act together, there will be better integrations of this technology. For instance, you have a controller card for this with a NIC integrated into it, and NVRAM that remembers connections, keys, etc. You plug this card into your FE/GE switch. The card has a static or DHCP address, boots its own internal IP stack and presents a SCSI interface to the host machine. Instead of the OS issues mentioned in the parent, you would load a block device driver to access the card and it's all done. It can be totally transparent to the OS. Configuration could be done in software on the fly to add/remove disks.

      I personally think this is wonderful technology! There will be evolutions in how you configure an infrastructure to handle this (separate GigE switches for it); give it a few months after the IETF 1.0 draft.

      Regardless of what I think, the market will decide. If it was supported at the card level now, I'd buy it today.

      DFossMeister
    • * For one thing, it's only as reliable as your network. If you have a network problem such as a down switch/hub etc, you lose your disks immediately.

      With the money that you save compared to Fibre Channel, you can afford to build redundancy. (You have that already, don't you?)

      * Unlike SCSI and Fibre Channel, you can't boot from an iSCSI volume. This is because your operating system has to be loaded, and your TCP/IP stack initialized, before you can load the iSCSI driver.

      Personal opinion: I believe that having a single/mirrored disk or a small RAID array for the OS and the related software in the actual box is a better idea. (KISS)

      * By putting block level storage on your LAN, you've increased the capacity requirements by several orders of magnitude. To get any reasonable performance you're going to need Gigabit Ethernet everywhere -- and if you're going to make that kind of investment, you might as well be doing Fibre Channel.

      Gigabit Ethernet is cheaper than Fibre Channel hardware. You are not going to need Gigabit to each individual node on the network, just throughout the datacenter (very reasonable and, depending on the network, commonly already in place).

      I am not trying to attack you, but I wanted to make points that I feel are relevant. As always, budget and specs should dictate what you need for storage. I would not go out and purchase iSCSI the moment it comes out, but in a few years, after some bugs have been worked out, I believe it will be a VERY viable technology.

    • by crow ( 16139 ) on Friday September 06, 2002 @09:09AM (#4206192) Homepage Journal
      We're starting to see PCs ship with 10/100/Gig ethernet standard. Within a year or two, it won't be unreasonable to run GigE to every desktop in the building.

      Now consider what iSCSI offers the system admins. You can use the network boot option on the desktop systems and run them diskless. This means you can centralize your storage. No longer do you face the daily panic of a user desperate to recover a file they only saved on their local hard drive. If someone is having trouble with their system, you just give them a fresh boot image; if the problem persists, it's hardware. If I were a sysadmin, I would be pushing hard for iSCSI.

      And from the technology standpoint of iSCSI vs. Fibre Channel, I expect that ethernet speeds will outpace Fibre Channel speeds; it's a larger market, so the R&D investment will go there first.

      [Disclaimer: I work for a data storage company, but everything stated here is based on general observations and opinions, not insider information.]
      • What does iSCSI offer that makes it better than, say, NFS? I mean, can't you network boot with NFS just as easily as iSCSI? And isn't the filesystem layer a better place to have the network transparency than the close-to-the-hardware SCSI layer?
    • by Wakko Warner ( 324 ) on Friday September 06, 2002 @09:15AM (#4206215) Homepage Journal
      For one thing, it's only as reliable as your network. If you have a network problem such as a down switch/hub etc, you lose your disks immediately.

      If our Brocade switches go down at work, we lose our Hitachi fiber-channel SAN, too. We also lose our StorageTek 9960. But that's a separate, redundant network, and I'm sure a properly-designed iSCSI network would be separate and redundant as well.

      Unlike SCSI and Fibre Channel, you can't boot from an iSCSI volume. This is because your operating system has to be loaded, and your TCP/IP stack initialized, before you can load the iSCSI driver.

      Firstly: Why would you want to? Every one of our servers that is attached to the Brocade has its own pair of internal mirrored disks for booting. What's the point of doing it any other way? I guess, if you ever truly needed to boot from an iSCSI device, those issues will be addressed by OS vendors once there's enough uptake for iSCSI.

      Most operating systems want to load their storage drivers before they load their networking drivers. Doing it the other way around challenges all sorts of assumptions made by various system software out there. Sounds trivial, but again, we've evaluated it, and the result ain't pretty.

      See last point made above.

      By putting block level storage on your LAN, you've increased the capacity requirements by several orders of magnitude. To get any reasonable performance you're going to need Gigabit Ethernet everywhere -- and if you're going to make that kind of investment, you might as well be doing Fibre Channel.

      Gigabit Ethernet is still much cheaper than FC. I can see the market they're aiming for with iSCSI, can't you?

      - A.P.
      • But that's a separate, redundant network, and I'm sure a properly-designed iSCSI network would be separate and redundant as well.

        Isn't the whole point of iSCSI that you leverage your existing investment in your network so that you're not duplicating your infrastructure? Not that some additional elements might not be added to rationalize or beef up a datacomm network to support iSCSI, but not a wholesale duplication.

        Firstly: Why would you want to? Every one of our servers that is attached to the Brocade has its own pair of internal mirrored disks for booting. What's the point of doing it any other way? I guess, if you ever truly needed to boot from an iSCSI device, those issues will be addressed by OS vendors once there's enough uptake for iSCSI.

        Upgrade testing? I've seen high-end SAN devices that can clone/mirror/copy LUNs on the fly. If I'm booting off the SAN, I can clone my boot volume and use the copy on a test box before doing upgrades, patches or any other kind of testing, far faster than trying to build a separate self-booting box and hoping it's identical to the production machine.

        Booting off the SAN also gives you the ability to replace dead hardware or upgrade hardware really fast, since you can insert a replacement box very quickly without having to worry about the OS boot volumes.

        It can also speed rollouts; you set up a generic install once on a SAN LUN, and when you're done you clone it each time you need an additional box. The new boot LUNs can then be customized as needed.

        Boot from the SAN will require iSCSI HBAs or boxes that can "own" one or more NICs and boot-config them as iSCSI HBAs, much in the same way that some on-board RAID controllers can "grab" SCSI channels as needed from the on-board SCSI controllers.

        Personally I think iSCSI is cool, provided that some of the chicken-and-egg aspects of HBAs on NICs get sorted out.
        • Isn't the whole point of iSCSI that you leverage your existing investment in your network so that you're not duplicating your infrastructure? Not that some additional elements might not be added to rationalize or beef up a datacomm network to support iSCSI, but not a wholesale duplication.

          Well, then, add an 8-port gigabit module to your 6500. If that's really all you want to do with iSCSI, it's easy enough to add to an existing network.

          [Why would you want to boot off the SAN?]

          Upgrade testing? I've seen high-end SAN devices that can clone/mirror/copy LUNs on the fly. If I'm booting off the SAN, I can clone my boot volume and use the copy on a test box before doing upgrades, patches or any other kind of testing, far faster than trying to build a separate self-booting box and hoping it's identical to the production machine.

          I can do all those things with my RS/6000 and Sun servers, using SMIT and LiveUpgrade, on the local disks. I can also clone all my machines using NIM and (the ever-shitty) JumpStart, which do all the post-install customization for me. These tools all already exist without the need for a major (SAN) hardware purchase.

          - A.P.
          • I can do all those things with my RS/6000 and Sun servers, using SMIT and LiveUpgrade, on the local disks.

            You can clone your disks and boot your CPU off of the cloned disks without taking your production system offline? In other words, you have a SAN infrastructure under a different name -- shared disks, abstracted LUNs available to other CPUs on the same bus.

            Then you don't need a SAN.
    • by selectspec ( 74651 ) on Friday September 06, 2002 @09:26AM (#4206267)
      FUD alert:
      For one thing, it's only as reliable as your network. If you have a network problem such as a down switch/hub etc, you lose your disks immediately.

      Of course a fiber channel SAN network has exactly the same properties.


      Unlike SCSI and Fibre Channel, you can't boot from an iSCSI volume. This is because your operating system has to be loaded, and your TCP/IP stack initialized, before you can load the iSCSI driver. Most operating systems want to load their storage drivers before they load their networking drivers...

      This is not true, and it has nothing to do with iSCSI but rather with the iSCSI HBA. iSCSI HBAs can have their own network stack, which not only offloads the network processing but also configures itself.


      By putting block level storage on your LAN, you've increased the capacity requirements by several orders of magnitude. To get any reasonable performance you're going to need Gigabit Ethernet everywhere -- and if you're going to make that kind of investment, you might as well be doing Fibre Channel.

      Look at the figures. A 1Gb Fibre Channel switch costs roughly twice as much as a 1GigE switch. 10GigE switches are already available, while 10Gb FC is still being debated. The upgrade to GigE will happen naturally on a network. The cost of the switches is amortized over the network, and the switches are cheaper because they don't serve a specialized data center market.

    • So don't try to boot off of it! That fixes three of your four problems right there. To fix the last issue, don't use it in a render farm. Use it as a docserver, where 100Mb is more than enough.

      Your gripes don't have anything to do with the iSCSI standard or its readiness for prime time. You misunderstand the whole purpose behind iSCSI. FibreChannel is definitely a more elegant solution, just like SCSI is a more elegant solution than the vastly more popular IDE. But you're forgetting that people need the proper tool for the job, nothing more.

      When you think iSCSI, don't think FibreChannel, think VPN.
    • Wouldn't it be possible to embed a TCP/IP stack together with Ethernet hardware into an "iSCSI client PCI card" which appears to the host system as a normal SCSI controller? This would completely hide the networking aspects from the host system, and all driver issues would be solved. It could boot from such a drive, even to an MS-DOS 6 prompt!

      Marc
    • Many years ago, I mean many (like over 12), I was working on NI-based VMS Clusters. Each system served its disks over an NI (LAN adapter) and disks were network accessible at the logical block level.

      You know what, maybe it wasn't the fastest but it worked!!!!! You could even boot diskless systems which would carry on running quite happily using the remote disks as though they were local. In effect, all you did was to boot a system image that used a RAM-disk to start itself. This still works on Linux and many other Unix like systems. Many systems have ways of booting from RO media. Once the NI is loaded, you can network mount the remote disks and dismount the RAM disk.

      Digital effectively split up disk access using something called MSCP. It was somewhat more general than the Linux SCSI 3-layer model but it effectively split the disk access by a program or file system from a device driver. It became a trivial matter to split the communication between the levels via the net. Of course, getting a disk mounted by more than one system led to some real fun on the file system side, but that eventually worked too. You know, sometimes, you need a pool of storage that isn't mega-high speed, but where you can store a lot.

      As for your comments about Gigabit LANs, well, that becomes less of an issue than switching.

      OK, these days HP/Compaq/Digital use Fibre Channel for their high-performance systems. However, the price is far from cheap. Last I heard, the NI-based clusters still work very well, and as the network performance increased, so did the remote-mounted disk throughput.

      I don't know how well the iSCSI people are doing, but as long as they realise that they need to fix a few other details (a standard network lock protocol would be really cool, to allow two disparate systems to coordinate access), they should do fine.

    • Of course iSCSI is not ready for prime time. There are only a few HBAs available and no native iSCSI RAIDs / Disks. However, it will all likely come together very quickly.

      None of the problems you cite are intrinsic characteristics of iSCSI. Simply put, iSCSI IS a SAN, just like Fibre Channel IS a SAN. The transport protocol, frame size and management systems differ, but they both offer the same fundamental capabilities. They are both switched networks over which an embedded SCSI-like protocol is transferred between hosts and disks. Both allow multiple paths for higher bandwidth, both allow "zoning", and both allow redundancy and geographical separation.

      The lower cost of iSCSI is attractive, while the somewhat lower performance is frustrating. But the killer advantage that iSCSI has over FC is the incredible number of network administrators qualified to run an IP-based network. Hardware costs are not the only costs associated with running a computer system.
    • You must be joking.

      For the 4 things you've said:

      • "It's only as reliable as your network."Why, yes of course. Just as Fibre Channel is just as reliable as your Fabric Switch and cable.
      • "You can't boot from an iSCSI volume." Probably true - yet. Probably someone said the same when CD-ROMS jumped on stage: "yeah, but you can't boot from a CD!".
      • "Most operating systems want to load their storage drivers before they load their networking drivers. Nonsense. Haven't you ever net-booted a Sun? An X-Terminal? a Linux? Come on...
      • "By putting block level storage on your LAN, you've increased the capacity requirements by several orders of magnitude. To get any reasonable performance you're going to need Gigabit Ethernet everywhere -- and if you're going to make that kind of investment, you might as well be doing Fibre Channel. Again nonsense. Everybody going iSCSI is going to use it the same way as FC: two or more GigaEthernet channels per device, a dedicated switch... Main question is that I can buy a Gigabit NIC for $60, I doubt you can beat that price with fabric. And watch out for GigaSwitches in the $200 price tag by about next year. For god's sake, I've got a 100 Mbit SWITCH over my desk, and I paid $50 for it last year!

    • "That's why our iSCSI stuff is just sitting around doing nothing right now."

      Can I have it?
    • Unfortunately, Fibre Channel is expensive. It requires expensive host bus adapters and even more expensive switches. And of course it runs over fiber optic cable, which isn't exactly penny kit. So the industry decided to try running it over Ethernet.

      I beg to differ. I built a Fibre Channel storage array with 9x 9.1GB 10,000 RPM Seagate drives, an HP fibre HBA, and copper cabling (Cat 5e) for $250 (but the enclosure was extra). I designed and built the interface cards for the drives for $9 each, including parts. The drives were $11 each through an eBay seller. The HBA was $20 (I got a package of 2 for $40) and supports fiber (SM and MM) and copper through a GBIC module. I just soldered the Cat 5e cable directly to the PCI board (64-bit 66MHz, backwards compatible with 32-bit 33MHz) and ran it out the backplate to the array, which is built on metal shelf pieces from the local hardware store. It looks crazy, works great, and is super fast. The capacity isn't great (80GB), but $250 is cheap considering that it sustains 100MB/sec reads and writes (try that on a SCSI drive that costs less than $500!).

      Sure, that isn't an enterprise level setup - I doubt that the sysadmin will sit down to design and troubleshoot fibre interface cards in his spare time, but it does show you that fibre can be cheap.


      Speaking of which, does anyone want to buy a Fibre Channel array? I'll throw in a HBA for free. Works great!
  • After reading the CNet article, I still couldn't figure out why this was necessarily a great thing. So I went over to SNIA's website and read the white paper. [snia.org]

    Anyway, it makes more sense now, and I can definitely see benefits. What we're talking about here is network-accessible storage with a very low barrier to entry, both in cost and in the expertise needed to set it up. In a way it reminds me of the Filer (a 1TB file-storage machine that we used by mounting NFS shares from it) I had at my last job, but much, much less expensive and much, much easier to run.

    Interesting stuff, at any rate.

    • Cheaper until you actually want to make it work! You need Gigabit Ethernet, and you will also need network cards with a TOE (TCP Offload Engine), which makes the network cards just as expensive as Fibre Channel HBAs.
  • by Jeppe Salvesen ( 101622 ) on Friday September 06, 2002 @08:55AM (#4206111)
    I applaud all such efforts. If it doesn't work, fine, we won't use it. But if it works, it could easily become yet another technology that is excellent for its uses. Think about this technology a little more deeply. With a bit of work, it would change the name of the game in file servers. All operating systems that support iSCSI and the filesystem would be able to share the hard drive. I can see some savings down the line in terms of maintenance and reduced downtime. I hope I'm right. Now, we just need to figure out exactly how to use this technology.

    If everyone had fiber into their homes, I can at the very least see hard drive upgrades without ever opening the box. Wouldn't that be nice, folks?
  • Possible uses (Score:3, Informative)

    by Anonymous Coward on Friday September 06, 2002 @08:58AM (#4206131)
    Well, the article is useless, but this white paper [diskdrive.com] clarifies some points.

    One exquisite use would be for someone maintaining a lab: imagine remotely partitioning and ghosting hundreds of computers from a single console through Gigabit Ethernet, or being able to repartition a colocated server.

    One aspect that is disappointing is that it just looks like SCSI over IP. None of the peer-to-peer aspects of FireWire were mentioned, such as target-disk mode [apple.com] that newer Macintoshes support. It's really nice to be able to reboot, hold 't' and plug my laptop into another Mac and have its hard disk appear on the desktop as though it were an external FireWire disk.
    • It's really nice to be able to reboot, hold 't' and plug my laptop into another Mac and have its hard disk appear on the desktop as though it were an external FireWire disk.

      I agree, but I really don't find that feature as useful as I thought I would years ago when it first became available. (Back then, it was SCSI target mode instead of FireWire, but it's the same thing.)

      Thing is, it requires not one reboot, but two. Now, particularly with OS X 10.2 and Rendezvous, I find myself much more likely to just hook two machines together with an Ethernet patch cable and do an FTP. Since no IP configuration is necessary, it's a quick-and-easy solution. And between any combination of Power Macs and PowerBooks, you're running Gigabit Ethernet. Peppy.

      I agree that FireWire target mode is cool and nice, but it's just not that useful, IMO.
      • I agree that FireWire target mode is cool and nice, but it's just not that useful, IMO.
        You would think it's not that useful until you need to copy data off a drive you can't reach because the OS install is corrupted, or you're locked out of accounts on the machine. At that point, rebooting, pressing 't' and reading the data off the drive using FireWire is extremely handy.
  • by jilles ( 20976 ) on Friday September 06, 2002 @09:10AM (#4206198) Homepage
    I don't understand why it is necessary to tunnel a low-level protocol like SCSI over Ethernet (other than to trick legacy software into using remote storage). There are protocols for remote storage; why not use these?
    • No raw disk IO (Score:3, Insightful)

      by marm ( 144733 )

      There are protocols for remote storage, why not use these?

      I agree that for most network storage, low-level SAN protocols are pointless - higher-level abstractions of remote disk such as smb/nfs/etc are much better as they enforce proper filesystem semantics, and run on top of a physical filesystem. You get all the advantages of having a filesystem in the first place - locking, sane disk space allocation algorithms, journaling, that sort of thing.

      However, some applications - big databases particularly - prefer to have raw access to the storage medium, with no filesystem in the way to slow them down. These applications implement their own locking, sharing and space allocation semantics which are optimized for their own particular storage use patterns.

      Classic file sharing protocols don't cut it for these big databases because there's no way to get raw disk access over the network with them. Which is why these lower-level SAN protocols exist - they provide the raw disk access that the big databases want, over a network. This means you can have your database spread over multiple physical locations to minimize the risk of your whole database going up in smoke, without taking the performance hit that running the database over smb/nfs would have.

      You won't see iSCSI hardware making it into bog-standard file server hardware any time soon, but I can see it being huge in big-iron database servers, where it should be considerably cheaper and easier than Fibre Channel, the current best solution.

      Admittedly, there are big questions over whether raw disk access is really necessary for databases - modern general-purpose filesystems are a LOT quicker than they used to be, and MySQL, for instance, which doesn't use raw disk IO but is still blazingly fast, is turning some of the performance assumptions on their head. But the big guys - Oracle, DB2 and so forth - still prefer it, so this is why iSCSI is here.
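
      For the curious, "raw access to the storage medium, with no filesystem in the way" looks roughly like the sketch below: seek to a block offset on the device and read it directly. The device path and block size are hypothetical, it needs appropriate privileges, and real databases additionally open the device with O_DIRECT and aligned buffers so the OS page cache is bypassed entirely.

      ```python
      # Minimal sketch of raw block access: read one block straight off a device,
      # no filesystem involved. Device path and block numbering are illustrative.
      import os

      DEVICE     = "/dev/sdb"        # hypothetical raw device or SAN LUN
      BLOCK_SIZE = 8192              # a typical database block size
      BLOCK_NO   = 42

      fd = os.open(DEVICE, os.O_RDONLY)
      try:
          os.lseek(fd, BLOCK_NO * BLOCK_SIZE, os.SEEK_SET)
          block = os.read(fd, BLOCK_SIZE)
          print(f"read {len(block)} bytes at block {BLOCK_NO}")
      finally:
          os.close(fd)
      ```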

      • and MySQL, for instance, which doesn't use raw disk IO but is still blazingly fast, is turning some of the performance assumptions on their head. But the big guys - Oracle, DB2 and so forth - still prefer it

        MySQL is fast because it is simple and can lean on the OS disk cache for sequential reads. But industrial-grade databases don't like OS caches, and prefer to maintain their own caches that they can actively manage in accordance with data access patterns -- the more complex the SQL you execute in Oracle, the more likely you are to be pulling together data non-sequentially as far as the disk is concerned. So read-ahead and other OS-level filesystem performance optimizations don't help. Oracle knows where the data is (indexes store rowids which can be decoded to block addresses) and can tell the disk where to get it more easily than sending the OS to go get a block offset from the start of a datafile.

        If data does not exist in the DB cache it will have to be fetched from the disk, and when data is written it has to get to the disk before the transaction can commit - in both cases an OS cache in between the DB and the disk wastes memory and time. This is the reason for the need for "raw" access to the disk.
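
        A minimal sketch of that commit path: the transaction is not durable until the record has actually been flushed, so the write is followed by an explicit sync before the commit is acknowledged (the log file name and record are placeholders).

        ```python
        # Write-then-flush: the essence of a durable commit.
        import os

        def commit(log_path, record: bytes):
            fd = os.open(log_path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
            try:
                os.write(fd, record)
                os.fsync(fd)   # push the data past the OS cache before acknowledging
            finally:
                os.close(fd)
            # only now is it safe to tell the client "committed"

        commit("redo.log", b"UPDATE accounts SET ...;\n")
        ```
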
  • I can see this as being a possibility for workgroups and small to medium businesses looking to get into SAN tech, but the bandwidth would be pathetic. Unless you had an Ethernet segment dedicated to your iSCSI, the latency would be terrible. With FC, you have a dedicated full-duplex pipe at 1Gb/sec minimum on the front side. With iSCSI, even using Gigabit Ethernet, the best bandwidth you would see is 0.3Gb/sec, shared. I do not see this tech ever making it as a permanent large-scale solution.
  • Wave of the future (Score:1, Informative)

    by Anonymous Coward
    Ultimately, this WILL be the wave of the future.

    What iSCSI will allow is a single topology for how information is transferred (persistent storage, transactions, peer-to-peer linkages, etc)

    In large datacenters, you currently have Fibrechannel, FICON (a form of Fibrechannel), and SCSI in your mix of persistent storage communications. This is in addition to your already large networks of ethernet, FDDI, ATM, etc.

    Each requires its own expensive switches.

    What iSCSI will eventually provide is a single fabric for all your data traffic. This will result in a substantial cost savings both in equipment investment AND maintenance, which affects the total cost of ownership and return on investment.

    This WILL need a substantial rework on many sides of the IP network, such as HBAs (Host Bus Adapters) that fully implement the IP stack for performance reasons. Gigabit Ethernet takes a substantial amount of your CPU just moving normal transactional data.

    I look forward to the long term implementation of this.
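
    The CPU-cost remark can be made concrete with the rule of thumb often quoted in TOE discussions, roughly 1 Hz of CPU per bit/s of TCP traffic; the figures below are an order-of-magnitude illustration, not a measurement.

    ```python
    # Rule-of-thumb TCP processing cost on a host CPU (illustrative numbers only).
    link_bps       = 1_000_000_000     # a fully loaded Gigabit Ethernet link
    cpu_hz         = 2_000_000_000     # a 2 GHz server CPU of the era
    cycles_per_bit = 1.0               # the oft-quoted ~1 Hz per bit/s of TCP

    fraction = link_bps * cycles_per_bit / cpu_hz
    print(f"~{fraction:.0%} of the CPU spent just feeding the NIC")
    ```
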
    • If it really takes off, how about using iSCSI internally instead of raw SCSI? Then, all your disk interfaces could be the same.

      Does the extra hardware for NICs still cost too much? (Last I heard, even raw SCSI was considered too expensive for the consumer market, so I'm probably off my rocker again.)

  • by nenolod ( 546272 )
    Apple already has an economy system known as the iMac, so wouldn't it be viable for them to also use iSCSI in their systems?! See, iMac and iSCSI will work really well together because both names begin with the same character.
  • Could this technology be used with other SCSI devices like Scanners and optical drives? For me, on more than one occasion, it would have been nice to share a scanner over the network.
  • by keithmoore ( 106078 ) on Friday September 06, 2002 @09:31AM (#4206296) Homepage
    Anytime you read that IETF is about ready to approve something as a standard, take it with a grain of salt unless it comes from the IETF chair or the area director responsible for that group. Such statements are usually propaganda from people who are trying to encourage premature adoption, or at best they are wishful thinking. It's not unusual for working groups to produce drafts which they think are ready for approval, but which actually contain serious technical problems that need to be resolved. Fixing those problems can require months or even years.

    In particular, the fact that The Storage Networking Industry Association has completed its comments on the draft doesn't have any bearing whatsoever on IETF standardization.

    Someone mentioned the security issue. I haven't followed the iSCSI discussions, but security is definitely an issue that was identified before the group was formed, and one which is particularly difficult to solve for iSCSI because of performance concerns. I'll be interested to see how they've addressed it. I'd consider it extremely unlikely for the IETF to approve the standard without due consideration of security. And saying "it's going to be behind a firewall, so it doesn't have to be secure" has traditionally not been considered sufficient.

    (FWIW, I'm a former IETF area director)
  • According to this article [lwn.net] at lwn.net (scroll down past the SSSCA discussion to get to the iSCSI discussion), the possibility exists that iSCSI could not be used by free operating systems because of patent encumbrances. Have these issues been resolved since then?
  • Security of iSCSI (Score:3, Interesting)

    by XNormal ( 8617 ) on Friday September 06, 2002 @10:11AM (#4206460) Homepage
    There is an important difference between my SCSI chain and an IP network - you won't find many SCSI chains with the kinds of security threats that are quite common on networks these days. Remember that block devices live below the OS permissions level - it's deeper than root access.

    I hope that iSCSI has good security measures *enabled by default*. I remember some discussion on iSCSI mailing lists about using SRP [stanford.edu] and potential intellectual property problems. I hope it's in the final standard.
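
    For illustration, initiator authentication in this space is generally some form of challenge-response against a shared secret, so the secret never crosses the wire. The sketch below is a generic HMAC-based version; it is neither SRP nor necessarily what the final standard specifies, and the secret is obviously a placeholder.

    ```python
    # Generic challenge-response sketch: the target challenges, the initiator
    # proves knowledge of the shared secret without transmitting it.
    import hashlib
    import hmac
    import secrets

    SHARED_SECRET = b"not-a-real-secret"          # provisioned out of band

    def target_issue_challenge():
        return secrets.token_bytes(16)

    def initiator_respond(challenge):
        return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

    def target_verify(challenge, response):
        expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    chal = target_issue_challenge()
    resp = initiator_respond(chal)
    print("authenticated:", target_verify(chal, resp))
    ```
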
    • Smart people are going to be running iSCSI on a separate network anyway, so what does it matter?

      Every facility I've worked in for the past 5 years has had a separate network for backup; why wouldn't they all do the same for storage, if iSCSI becomes a positive value proposition?

  • by streak ( 23336 )
    What about a big-honking server running NFS...
    I mean it is a software solution, but it does work.
  • by Mes ( 124637 )
    I was working for IBM on this when we got hit by layoffs.. arg!

    anyone have any job openings for a linux-iSCSI-nas programmer?? ;-)
    • We would like to hear from you. We hope to add iSCSI and NDMP support to our Linux-NAS project in the near future. nasproduct@yahoo.com
  • This is a great piece of tech for the right application (shared storage); unfortunately it's only half the solution. It can make devices cheaper by using off-the-shelf networking gear instead of expensive FC switching gear (funny that Cisco supports both on the same frame, though), but you're still only getting access to a raw disk, or more hopefully a RAID of raw disks. Now, for a few things this makes a ton of sense: a clustered solution with redundant data centers and a big pipe (think cheap leased fiber) can locate mirrored storage and redundant servers. Things that perform better with raw disk IO (read: Oracle) inside a clustered environment are going to love this, especially since data sets are getting larger by the moment while actual real-time data use is down; there is a LOT of data lying on disk not being accessed very much, and going to an HSM system is becoming less and less desirable as disks get cheaper (IDE drives are just about cheaper than tape right now, and that trend is increasing).

    Now, for Joe User this tech is pretty useless: none of the major OSes support a multiple-reader/writer filesystem on top of a shared block device (SGI has one that's part of their FS, but it doesn't look to be part of the Linux port yet; I may be wrong). Windows definitely doesn't have anything for this out of the box. There are solutions to do it, but they're generally more complicated than it's worth for a small installation, or they require some big external hardware and drivers to make it work (EMC's "solution") to redirect the actual block IO of a network mount to a block device (generally FC hardware, or SCSI on some smaller setups; FC is a lot more reliable, though, IMNSHO). This is all a TCO-reduction movement that doesn't make a whole lot of sense: block devices get sped up by using large buffers wherever you can shove them, microcontrollers are great at doing back-to-back IO, and servers have other things to do. FC has latency issues since it's really just serial SCSI; you can put hardware on two coasts and make it work, but it's generally not pretty. iSCSI HBAs should be a lot more tolerant of latency.
  • FYI, HyperSCSI [nus.edu.sg] does roughly the same as iSCSI and claims to address some of its shortcomings.
  • ...to your Beowulf cluster of Furbies...
  • Sounds like a good deal. Just last night I added an IBM 9.1GB UW-SCSI drive to my chain, giving me that, two Seagate Elite-23's, and another smaller IBM SCSI drive. When I spin all those disks up it makes a lot of racket - sounds like jet engine whine. But the drives are cheap, plentiful, and pretty fast.

    This tech looks like it might make diskless stations a lot more feasible around the house. Nice!

    I agree that maintenance issues are greatly reduced if you can put all the drives into one hot-swappable array somewhere and still get decent performance. The downside is that the drives would probably have to be spinning 24/7 in a lot of cases, which might reduce their service life. It wouldn't be a big deal for something like the Elite-23's, but they are really the exception; most drives probably wouldn't last a year.

  • 1. Submit the standard
    2. Wait several years
    3. Enforce hidden patent
    4. Profit!
  • as a result companies now can start building iSCSI products

    Hardly. Any company seriously considering shipping a product that supports iSCSI has already been working on it for the past year and a half. I worked on a development team making an iSCSI target and initiator for Linux, and we had to suffer through major, non-backwards compatible draft releases as we tried to make iSCSI work. I guess that's why they have that disclaimer on them saying you're not supposed to use them for anything serious...

    Anyway, I don't work there anymore, but I'd imagine there would only be small changes required for them to ship a fully standards-compliant iSCSI product.
