iSCSI Moves Toward Standard
EyesWideOpen writes "The iSCSI technology, which allows computers to connect to hard drives over a network connection such as a company Ethernet network or the Internet, requires only minor changes before the Internet Engineering Task Force endorses it as a formal version 1.0 standard. A final round of comments on the technology has been completed, according to the Storage Networking Industry Association, the subgroup that led the creation of iSCSI, and as a result companies can now start building iSCSI products."
Re:hum.. (Score:2)
sPh
Re:hum.. (Score:1)
I suspect that is because you don't know what a SAN is.
Re:hum.. (Score:1)
sPh
Re:hum.. (Score:1)
That's great - but do you not think you might have cut a little too far?
Re:hum.. (Score:5, Insightful)
With a file server (current buzzword is "NAS" for Network-Attached Storage) the server maintains the file system, and multiple clients connect to it to read and write files. It's a shared *file system*.
With a SAN (Storage Area Network) a bunch of raw disks is made available over a network. Currently this is normally Fibre Channel; iSCSI will bring standard Ethernet to SANs, making it much cheaper. No file system is mandated by the SAN; a machine connected to the SAN gets access to one or more raw disks and can use them any way it wants. Typically, the unit of allocation is one disk, though some systems (EMC) allow disks to be subdivided and the sub-disks handed out separately. While the storage pool as a whole is shared, each disk (or sub-disk) is only connected to one machine at a time.
A SAN provides a centrally managed pool of local disk, so you don't have to run around upgrading individual servers. This is a *big* win for large corporations.
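To make the distinction concrete, here's a rough sketch (mine, with a made-up mount point and device name): NAS hands you files through somebody else's filesystem, while a SAN hands you a bare block device and leaves the rest to you.

    import os

    # NAS / file server: the server owns the filesystem; clients just read and write files.
    # "/mnt/nfs_share" is a hypothetical NFS mount from the file server.
    with open("/mnt/nfs_share/report.txt") as f:
        text = f.read()                      # file-level access, the server arbitrates everything

    # SAN: the host sees a raw disk (here a hypothetical LUN that shows up as /dev/sdb)
    # and may put any filesystem -- or none at all -- on it.
    BLOCK = 512
    fd = os.open("/dev/sdb", os.O_RDONLY)    # read-only here, to keep the sketch harmless
    first_block = os.pread(fd, BLOCK, 0)     # raw block I/O, no file or locking semantics
    os.close(fd)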
Yum. Corrupted disks. (Score:2)
Bingo. Cheap stock gig-E cards and a driver hack on top of a classic IP stack and you can build a mainframe-reliable file server / disk farm out of commodity boxes from the local PC store.
But that network better not be connected to anything BUT the disks and the file servers' private disk-interface LAN(s), and the file servers better not have IP forwarding enabled (or have a good filter). Else one carefully corrupted packet destroys one file system. (Maybe two or so for RAID.)
Re:hum.. (Score:1)
Re:hum.. (Score:2)
Would you like to?
There are basically two types of SANs. The two types are not mutually exclusive; they can coexist on the same network.
The first type is exclusive access to shared storage. Let's say you have a big enterprise storage system, like an IBM Shark or an HDS 9960 or an EMC Symmetrix. These devices are basically giant RAIDs with fibre channel switches built right in. You can connect one computer-- PC, Unix system, supercomputer, whatever-- to each fibre channel port on the storage system, then use the storage system's software to carve it up into LUNs. Let's say the Windows server gets 5 TB, the Oracle cluster gets 20 TB, and the compute server gets 1 TB. You create RAID sets using the storage system's control software, then assign each set (5 TB, 20 TB, 1 TB) to a fibre channel port. Each machine thinks it has a directly attached storage device, when actually it's just getting a piece of the big storage device in the basement.

The point is that you can put all your eggs in one exceptionally good basket, reducing maintenance costs, and you can reconfigure things on the fly without moving any cables around. It's handy, especially in a big data center environment. You can also take advantage of some cleverness inside the storage system this way, using features like point-in-time snapshots, serverless backup, or filesystem mirroring.

One data center I work with has two HDS 9960 systems, one in each of two cities, connected by some big pipe (OC-3? OC-12? I forget.) They run some special Hitachi software on the two storage systems that keeps the two devices in sync all the time. Basically, an atomic bomb could take out the entire data center and the city around it, but the data would be safe.
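If it helps, the carve-up is really just bookkeeping. Here's a toy sketch of my own (nothing like the actual Shark/9960/Symmetrix management software; the sizes are the made-up ones from above): build RAID sets out of the big pool, then map each one to a fibre channel port so the host on that port sees a plain disk.

    # Toy model of carving one big storage frame into per-host LUNs.
    # Names and sizes are invented for illustration.
    storage_pool_tb = 30
    raid_sets = {}          # name -> size in TB
    port_map = {}           # FC port -> RAID set presented as a LUN

    def create_raid_set(name, size_tb):
        global storage_pool_tb
        assert size_tb <= storage_pool_tb, "not enough free disk in the frame"
        storage_pool_tb -= size_tb
        raid_sets[name] = size_tb

    def assign_to_port(name, fc_port):
        # The host plugged into fc_port sees raid_sets[name] as a directly attached disk.
        port_map[fc_port] = name

    create_raid_set("windows_lun", 5)
    create_raid_set("oracle_lun", 20)
    create_raid_set("compute_lun", 1)
    assign_to_port("windows_lun", "port_0")
    assign_to_port("oracle_lun", "port_1")
    assign_to_port("compute_lun", "port_2")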
So that's one type of SAN. It's about centralizing exclusive access to shared storage. These kinds of SANs make a ton of sense under some circumstances. You generally have to have at least dozens of servers, each with their own storage requirements, before it makes sense to bother with this kind of thing.
The other type of SAN is about shared access to shared storage. This requires a special type of filesystem, like Centravision CVFS or SGI CXFS. (There are some hybrid solutions out there, like Sanergy. I haven't worked with Sanergy myself, but I've heard bad things about it.) With these SANs, each client has read-write access to the same filesystem. It's kind of like what you described-- a server with wide-open file permissions-- but without the server. Access to the filesystem is at fibre channel wire speeds, 100 MB per second or more, with really low latency.

This kind of system has serious drawbacks, though. SAN or cluster filesystems are complex, and that makes them more prone to failure of some kind. Heterogeneous host support is also a challenge. Finally, SANs like this just don't scale, because of contention. If you have a hundred clients reading data from a server, the server will put the IO requests in a queue and cache them intelligently. Read some data from A, cache it and stream it out the network interface while reading some data from B, and so on. You can sustain relatively high data transfer efficiency that way, as long as your server is beefy enough.

But with a shared-access SAN, there's no caching request arbitrator in the middle. There's just your computer and that other computer, giving the disks conflicting instructions. Even with the biggest, smartest RAID controller, you're still going to run into disk access contention issues pretty quickly. I've seen a shared-access filesystem grind to a halt when as few as four computers were all hitting the disks at once. The heads were spending more time seeking than they were spending reading. That's kind of a bad example, though, because that system used a really shitty RAID controller for its storage device. But it proves the principle of what I'm saying.
Because of these drawbacks, shared-access SANs really work best for server clustering. If you have a parallel cluster of servers all accessing the same database-- particularly if they're just query servers and the database is read-only-- then it makes sense to consider putting the tables on a shared-access SAN to keep storage costs low. Especially if you have ten servers and a 10 TB database; you can save 90 TB of disk by using a shared-access SAN.
So yeah, there's a huge difference between a SAN and a file server with wide-open permissions. They're different tools, and you should use them for different sorts of jobs. Anybody who tries to tell you, though, that a SAN can replace a file server in a typical network-attached storage environment doesn't know what he's talking about.
Re:hum.. (Score:2, Informative)
One example that's right in my face is SANs and the office that I am in. There are 14 offices around the world, and having one centralized data center would make things so much easier for local office staff, and reduce costs for storage maintenance. Less cost for more skilled people in the remote offices.
My $0.02
Re:hum.. (Score:1)
In one of my later posts on this topic I said "You should build your network based on requirements and budget." This is well within our requirements and would prove to save us some (10-15%) money overall.
Re:hum.. (Score:1)
iDunno, you tell me.
Re:i hear apple comming... (Score:1)
Re:i hear apple comming... (Score:1)
Just wait... (Score:3, Funny)
Jeesh.
One cable fits all. (Score:1, Interesting)
I've noticed that the convergence of data transfer cables seems to be increasing. We have Serial ATA, iSCSI, 10/100/1000 Ethernet, USB, FireWire and also HyperTransport. These are all attempts to simplify the cabling while increasing speed, with the exception that the current HyperTransport implementations are all hardwired on motherboards. Personally, I think it would make life easier if there was one thin 2 to 4 wire cable that was usable by all electronic devices both external and internal. *Sigh* It will probably be another 10 years before it's actually a single cable, if ever.
Re:One cable fits all. (Score:1)
There are WAY too many people who are not bright enough to know where to hook up the cable. You will have SCSI devices being plugged into a floppy port, CD-ROM drives into sound cards. You see my point?
While standardization is good, stupid people are bad.
Re:One cable fits all. (Score:2)
We see your point, but I think you missed the OP's point. (Or at the very least, his implication.)
In a magical happy land with gumdrop houses on lollypop lane, it wouldn't matter where these bits and pieces got plugged in. Your computer would have one or more Ports on the back. Got a monitor? Plug it into a Port. Got an external drive? Plug it into a Port. Got network access? Plug it into a Port. All the Ports are the same, and figuring out which device does what is handled in software. So it doesn't matter where you plug things in.
I agree with you that it won't happen. I'm not completely sure I agree that it shouldn't. I think it probably could, but like many things, the expense and overhead seem disproportionate to the scale of the problem.
Re:One cable fits all. (Score:1)
I did think about the OP's point, but wanted to make my own. I feel like I am surrounded by MCSEs. (Want to read something funny? Good: an MCSE in my office asked me how to ping a server. Need I say more?)
Re:One cable fits all. (Score:2)
You're reading my mind. I was thinking that in the instant that I hit the "Submit" button. USB does seem to have a lot of the characteristics of a universal port, with some exceptions. It has nowhere near the signal bandwidth necessary to drive a monitor, for example.
Re:One cable fits all. (Score:1)
Moment of dumb and laziness: USB is Universal Serial Bus, right?
Re:One cable fits all. (Score:2)
Re:One cable fits all. (Score:1, Interesting)
What we really need is something like SGI's XIO, but foolproof, and not prone to contamination from touching the contacts...
Re:One cable fits all. (Score:2)
It has the bandwidth (SDI only requires 270 Mbps; FireWire is 400 Mbps) but I don't know if anybody has used it for uncompressed video. People use it all the time for DV-compressed video, of course.
But, as you noted, that's merely TV-resolution data. DVI, on the other hand, can handle up to 5 Gbps, if I remember correctly. That's a big difference.
What we really need is something like SGI's XIO...
No, I don't think so. XIO uses a hundred pins. No hundred-pin interface could ever be that reliable. What we really need is a super-fast serial connection, like FireWire-only-a-lot-faster. With the price of fiber optics coming down steadily, I wonder whether it would be practical to try to design a rugged two-strand cable with roughly the same diameter as a FireWire cable, or less. That effectively removes the bandwidth problem from the connector and puts it into the transceivers, where it ought to be.
Re:One cable fits all. (Score:1)
Re:One cable fits all. (Score:1)
Keyboards & Mice may be hard to find right now, but you can plug anything in with FireWire. As mentioned all those devices will work with FireWire, but one is missing: Monitors. They aren't commonly used, but televisions and monitors can accept FireWire signals. Some video monitors accept FireWire connections from cameras (Sony makes them, I believe). If the computer had particularly low power requirements, even the power cable could be FireWire... not likely, though.
Bandwidth and Protocols (Score:2)
Fiber holds some promise, but can't supply the electrical power that some cabling systems do. If you try to create a cable that has everything for everyone, it gets expensive to manufacture (try comparing the price between phone wiring, cat 5 ethernet and optical; I don't even know of a cable that has copper and optical in it).
Re:Bandwidth and Protocols (Score:2)
If there is one out there, I'm sure Sun made it.
--
Loading, please wait...
Re:One cable fits all. (Score:1)
At least that's my uninformed assessment.
iSCSI not ready for prime time (Score:4, Informative)
I work at a mid-size hosting facility, and we've done quite a bit of experimentation with iSCSI. In my opinion it's not ready yet. Either that or it's just a bad idea, full stop.
We do quite a bit with our SAN -- there are a coupla IBM 2105 ESS ("Shark") boxen in the back of the data center with many terabytes of disk online. It's all about Fibre Channel. At least as fast as SCSI, effectively faster when you have all sorts of cache running on the storage side, and you have the flexibility to define exactly how much disk goes to what server, and you can add more dynamically without a power down, etc.
Unfortunately, Fibre Channel is expensive. It requires expensive host bus adapters and even more expensive switches. And of course it runs over fiber optic cable, which isn't exactly penny kit. So the industry decided to try running it over Ethernet.
Now there are iSCSI-to-Fibre gateways, such as Cisco's 5420 Storage Router (which we've evaluated), but there are just problems in general with running block level storage over a TCP/IP network...That's why our iSCSI stuff is just sitting around doing nothing right now.
The only place I can see iSCSI being used at this time is for really temporary quick-and-dirty setups, such as a programmer needing another 100 GB online for a one-week project. But even then, NAS seems like a better idea.
Re:iSCSI not ready for prime time (Score:1)
However, the point about them not being bootable is rather moot, I feel. For many of the reasons you yourself mention, iSCSI right now ain't gonna be what you put your system on; it's mass shared data storage, and it's prone to network problems, etc. Most servers are going to load their system from local disks and then, and only then, access the data on the iSCSI.
Just to put a bit of perspective on the 'not bootable' and 'not reliable' and 'disk before net storage driver' comments.
Re:iSCSI not ready for prime time (Score:2)
Re:iSCSI not ready for prime time (Score:1)
I agree that iSCSI is kind of a stupid idea, but all the fibre channel I have ever worked with is copper, two pair, that terminates in a D-sub connector with four pins. I didn't think anyone bothered running it over fiber anymore.
Re:iSCSI not ready for prime time (Score:1)
Re:iSCSI not ready for prime time (Score:3, Informative)
You're both wrong. FC is neither "mostly optical" nor "mostly copper." Devices like HBAs and switches that use GBICs (modular media adapters) can be either optical or copper depending on the GBIC used, and switched on the fly. You choose optical or copper cables depending on your environment. Copper cables have shorter runs than optical cables-- they can only run 30 feet or so, as opposed to miles for optical-- and they're much more bulky. So in a data center where you have literally hundreds of FC cables, you'd probably choose optical to keep the physical size of the cable bundles from getting out of control. For connecting two devices in a rack, you can choose copper cables.
I think the "fiber optic is expensive" thing is a myth, though. I can't say for certain, but I think I remember that the outfit that sells us our patch cables sells 4-wire copper cables and optical cables at roughly the same price.
Re:iSCSI not ready for prime time (Score:2)
Re:iSCSI not ready for prime time (Score:1)
Re:iSCSI not ready for prime time (Score:1)
Once Adaptec and other storage vendors get their act together, there will be better integrations of this technology. For instance, you have a controller card for this that has a NIC integrated into it, with NVRAM that remembers connections, keys, etc. You plug this card into your Fe/Ge switch. The card has a static or DHCP address, boots its own internal IP stack and presents a SCSI interface to the host machine. Instead of the OS issues mentioned in the parent, you would load a block device driver to access the card and it's all done. It can be totally transparent to the OS. Configuration could be done in software on the fly to add/remove disks.
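Roughly, the firmware on such a card would be doing something like this (a hedged sketch of my own, not Adaptec's design; port 3260 is the real iSCSI port, but the request framing here is invented and nothing like real iSCSI PDUs):

    import socket
    import struct

    BLOCK = 512

    class ToyIscsiInitiator:
        """Pretend HBA firmware: the host just calls read_block()/write_block(),
        while the card shovels the requests over TCP to the target."""

        def __init__(self, target_ip, port=3260):
            self.sock = socket.create_connection((target_ip, port))

        def _recv_exact(self, n):
            buf = b""
            while len(buf) < n:
                chunk = self.sock.recv(n - len(buf))
                if not chunk:
                    raise IOError("target went away")
                buf += chunk
            return buf

        def read_block(self, lba):
            # Fake request: 1-byte opcode + 64-bit logical block address.
            self.sock.sendall(struct.pack("!BQ", 0x01, lba))
            return self._recv_exact(BLOCK)

        def write_block(self, lba, data):
            assert len(data) == BLOCK
            self.sock.sendall(struct.pack("!BQ", 0x02, lba) + data)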
I personally think this is wonderful technology! There will be evolutions in how you configure an infrastructure to handle this (separate switches for this on Ge)--give it a few months after IETF 1.0 draft.
Regardless of what I think, the market will decide. If it was supported at the card level now, I'd buy it today.
DFossMeister
Re:no the REAL problem is IPSEC not in it (Score:2)
If you're lucky-- without serious tweaking, I mean-- you can get 50 MB/s over gigabit ethernet. That's what I get using FTP between two SGI boxes using the SGI-approved 64-bit card and jumbo frames. Yes, this is faster than the ATA hard drive in your laptop, Chaz.
Using a single fibre channel loop, each of my lab systems gets about 95 MB/s from its RAID. (Small RAID, with [I think] 8 drives.)
Using multiple fibre channel loops, my servers pull about 400 MB/s off their RAIDs. And that's using 1 Gbps FC. If we decided to upgrade to 2 Gbps FC, we could get twice that performance, because the disks are capable of it.
There's the rub, right there. It's trivial to put a second FC adapter in your system and double your storage performance; just map a second LUN to the other port and stripe your disk accesses across both LUNs. How can you do that over iSCSI? That'd be a routing nightmare.
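For what it's worth, once both LUNs show up as local block devices, the striping itself is dead simple (a toy sketch of my own; a real setup would use a volume manager rather than anything hand-rolled, and /dev/sdb and /dev/sdc are hypothetical device names for the two mapped LUNs):

    import os

    STRIPE = 64 * 1024   # 64 KB stripe size, chosen arbitrarily for the sketch

    class TwoLunStripe:
        """RAID-0 style striping across two LUNs, each visible as a block device."""

        def __init__(self, path_a="/dev/sdb", path_b="/dev/sdc"):
            self.fds = [os.open(path_a, os.O_RDONLY), os.open(path_b, os.O_RDONLY)]

        def read_stripe(self, n):
            # Even-numbered stripes live on LUN A, odd ones on LUN B,
            # so sequential reads alternate between the two paths.
            fd = self.fds[n % 2]
            offset = (n // 2) * STRIPE
            return os.pread(fd, STRIPE, offset)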
Re:no the REAL problem is IPSEC not in it (Score:2)
Re:no the REAL problem is IPSEC not in it (Score:2)
Can your OS handle two IPs on the same network segment? None of the ones I know of can. You see, you can only have one route to a given network. So you might have two interfaces on the same network, but all your traffic is going to go through just one of them. The other one sits there and does nothing at all.
The fact is that very few machines really need much more than 50-100 MB/s, because the clients aren't going to be able to get data much more quickly than that anyways.
Depends on your situation. In some cases, 640 K really is enough for anybody. For the rest of us, though....
Re:no the REAL problem is IPSEC not in it (Score:1)
Can your OS handle two IPs on the same network segment? None of the ones I know of can. You see, you can only have one route to a given network. So you might have two interfaces on the same network, but all your traffic is going to go through just one of them. The other one sits there and does nothing at all.
I'm no expert, but Linux does support channel-bonding, which I think is what the poster was talking about.
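(The idea, for anyone following along, is just that several physical NICs get treated as one logical pipe and traffic is spread across them. A crude userspace caricature of the round-robin flavor, assuming Linux, root, and made-up interface names -- the real thing lives in the kernel bonding driver:)

    import itertools
    import socket

    class BondedSender:
        """Caricature of balance-rr bonding: round-robin each datagram across
        sockets pinned to different physical NICs (interface names are made up)."""

        def __init__(self, target, ifaces=("eth0", "eth1")):
            self.target = target
            self.socks = []
            for iface in ifaces:
                s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                # SO_BINDTODEVICE pins the socket to one NIC (Linux-only, needs root).
                s.setsockopt(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, iface.encode())
                self.socks.append(s)
            self.rr = itertools.cycle(self.socks)

        def send(self, payload):
            next(self.rr).sendto(payload, self.target)

    # e.g. BondedSender(("192.168.1.50", 9000)).send(b"block of data")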
Re:no the REAL problem is IPSEC not in it (Score:2)
Can somebody please tell me how this relates to iSCSI being easier to manage than SCSI over Fibre Channel? Running two separate subnets and two Ethernet drops to each client on the network sounds like a terrible way to scale.
Re:iSCSI not ready for prime time (Score:2, Insightful)
With the money that you save compared to Fibre Channel, you can afford to build redundancy. (You have that already, don't you?)
# Unlike SCSI and Fibre Channel, you can't boot from an iSCSI volume. This is because your operating system has to be loaded, and your TCP/IP stack initialized, before you can load the iSCSI driver
Personal opinion: I believe that having a single/mirrored disk or a small RAID array for the OS and related software in the actual box is a better idea. (KISS)
* By putting block level storage on your LAN, you've increased the capacity requirements by several orders of magnitude. To get any reasonable performance you're going to need Gigabit Ethernet everywhere -- and if you're going to make that kind of investment, you might as well be doing Fibre Channel.
Gigabit Ethernet is cheaper than Fibre Channel hardware; you are not going to need Gigabit to each individual node on the network, just throughout the data center. (Very reasonable, and depending on the network, commonly already in place.)
I am not trying to attack you, but I wanted to make points that I feel are relevant. As always, budget and specs should dictate what you need for storage. I would not go out and purchase iSCSI the moment it comes out, but in a few years, after some bugs have been worked out, I believe it will be a VERY viable technology.
iSCSI nearly ready for prime time (Score:4, Insightful)
Now consider what iSCSI offers the system admins. You can use the network boot option on the desktop systems and run them diskless. This means you can centralize your storage. No longer do you face the daily panic of a user desperate to recover a file they only saved on their local hard drive. If someone is having trouble with their system, you just give them a fresh boot image; if the problem persists, it's hardware. If I were a sysadmin, I would be pushing hard for iSCSI.
And from the technology standpoint of iSCSI vs. Fibre Channel, I expect that ethernet speeds will outpace Fibre Channel speeds; it's a larger market, so the R&D investment will go there first.
[Disclaimer: I work for a data storage company, but everything stated here is based on general observations and opinions, not insider information.]
Re:iSCSI nearly ready for prime time (Score:2)
Re:iSCSI not ready for prime time (Score:5, Insightful)
If our Brocade switches go down at work, we lose our Hitachi fiber-channel SAN, too. We also lose our StorageTek 9960. But that's a separate, redundant network, and I'm sure a properly-designed iSCSI network would be separate and redundant as well.
Unlike SCSI and Fibre Channel, you can't boot from an iSCSI volume. This is because your operating system has to be loaded, and your TCP/IP stack initialized, before you can load the iSCSI driver.
Firstly: Why would you want to? Every one of our servers that is attached to the Brocade has its own pair of internal mirrored disks for booting. What's the point of doing it any other way? I guess, if you ever truly needed to boot from an iSCSI device, those issues will be addressed by OS vendors once there's enough uptake for iSCSI.
Most operating systems want to load their storage drivers before they load their networking drivers. Doing it the other way around challenges all sorts of assumptions made by various system software out there. Sounds trivial, but again, we've evaluated it, and the result ain't pretty.
See last point made above.
By putting block level storage on your LAN, you've increased the capacity requirements by several orders of magnitude. To get any reasonable performance you're going to need Gigabit Ethernet everywhere -- and if you're going to make that kind of investment, you might as well be doing Fibre Channel.
Gigabit Ethernet is still much cheaper than FC. I can see the market they're aiming for with iSCSI, can't you?
- A.P.
Re:iSCSI not ready for prime time (Score:2)
Isn't the whole point of iSCSI that you leverage your existing investment in your network so that you're not duplicating your infrastructure? Not that some additional elements might not be added to rationalize or beef up a datacomm network to support iSCSI, but not a wholesale duplication.
Firstly: Why would you want to? Every one of our servers that is attached to the Brocade has its own pair of internal mirrored disks for booting. What's the point of doing it any other way? I guess, if you ever truly needed to boot from an iSCSI device, those issues will be addressed by OS vendors once there's enough uptake for iSCSI.
Upgrade testing? I've seen high-end SAN devices that can clone/mirror/copy LUNs on the fly. If I'm booting off the SAN, I can clone my boot volume and use the copy on a test box before doing upgrades, patches or any other kind of testing, far faster than trying to build a separate self-booting box and hoping it's identical to the production machine.
Booting off the SAN also gives you the ability to replace dead hardware or upgrade hardware really fast, since you can insert a replacement box very quickly without having to worry about the OS boot volumes.
It can also speed rollouts; you set up a generic install once on a SAN LUN, and when you're done you clone it each time you need an additional box. The new boot LUNs can then be customized as needed.
Boot from the SAN will require iSCSI HBAs or boxes that can "own" one or more NICs and boot-config them as iSCSI HBAs, much in the same way that some on-board RAID controllers can "grab" SCSI channels as needed from the on-board SCSI controllers.
Personally I think iSCSI is cool, provided that some of the chicken-and-egg aspects of HBAs on NICs get sorted out.
Re:iSCSI not ready for prime time (Score:2)
Well, then, add an 8-port gigabit module to your 6500. If that's really all you want to do with iSCSI, it's easy enough to add to an existing network.
[Why would you want to boot off the SAN?]
Upgrade testing? I've seen high-end SAN devices that can clone/mirror/copy LUNs on the fly. If I'm booting off the SAN, I can clone my boot volume and use the copy on a test box before doing upgrades, patches or any other kind of testing, far faster than trying to build a separate self-booting box and hoping it's identical to the production machine.
I can do all those things with my RS/6000 and Sun servers, using SMIT and LiveUpgrade, on the local disks. I can also clone all my machines using NIM and (the ever-shitty) JumpStart, which do all the post-install customization for me. These tools all already exist without the need for a major (SAN) hardware purchase.
- A.P.
Re:iSCSI not ready for prime time (Score:2)
You can clone your disks and boot your CPU off of the cloned disks without taking your production system offline? In other words, you have a SAN infrastructure under a different name -- shared disks, abstracted LUNs available to other CPUs on the same bus.
Then you don't need a SAN.
Re:iSCSI not ready for prime time (Score:5, Insightful)
For one thing, it's only as reliable as your network. If you have a network problem such as a down switch/hub etc, you lose your disks immediately.
Of course a fiber channel SAN network has exactly the same properties.
Unlike SCSI and Fibre Channel, you can't boot from an iSCSI volume. This is because your operating system has to be loaded, and your TCP/IP stack initialized, before you can load the iSCSI driver. Most operating systems want to load their storage drivers before they load their networking drivers...
This is not true, and has nothing to do with iSCSI but rather with the iSCSI HBA. An iSCSI HBA can have its own network stack, which not only offloads the networking computation but also configures itself on its own.
By putting block level storage on your LAN, you've increased the capacity requirements by several orders of magnitude. To get any reasonable performance you're going to need Gigabit Ethernet everywhere -- and if you're going to make that kind of investment, you might as well be doing Fibre Channel.
Look at the figures. A 1Gb Fibre Channel switch costs roughly twice that of a 1GigE switch. 10GigE switches are already available, while 10Gb FC is still being debated. The upgrade to GigE will happen naturally on a network. The cost of the switches is amortized over the network, and the switches are cheaper because they don't serve a specialized data center market.
Re:iSCSI not ready for prime time (Score:1)
Your gripes don't have anything to do with the iSCSI standard or its readiness for prime time. You misunderstand the whole purpose behind iSCSI. FibreChannel is definitely a more elegant solution, just like SCSI is a more elegant solution than the vastly more popular IDE. You're forgetting that people need the proper tool for the job, nothing more.
When you think iSCSI, don't think FibreChannel, think VPN.
Re:iSCSI not ready for prime time (Score:1)
into an "iSCSI client PCI card" which appears to the host system as normal SCSI
controller? This would completely hide the networking aspects from the host
system, and all driver issues were solved. It could boot from such a drive, even
to an MSDOS6 prompt!
Marc
Re:iSCSI not ready for prime time (Score:1)
Network Clustering (Score:2)
You know what, maybe it wasn't the fastest but it worked!!!!! You could even boot diskless systems which would carry on running quite happily using the remote disks as though they were local. In effect, all you did was to boot a system image that used a RAM-disk to start itself. This still works on Linux and many other Unix like systems. Many systems have ways of booting from RO media. Once the NI is loaded, you can network mount the remote disks and dismount the RAM disk.
Digital effectively split up disk access using something called MSCP. It was somewhat more general than the Linux SCSI 3-layer model but it effectively split the disk access by a program or file system from a device driver. It became a trivial matter to split the communication between the levels via the net. Of course, getting a disk mounted by more than one system led to some real fun on the file system side, but that eventually worked too. You know, sometimes, you need a pool of storage that isn't mega-high speed, but where you can store a lot.
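In spirit, the "bottom half" of such a split is tiny. A sketch of my own (invented one-client framing, not MSCP or iSCSI): a server that just answers block read/write requests off the wire out of a backing file or device.

    import os
    import socket
    import struct

    BLOCK = 512

    def recv_exact(conn, n):
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("client went away")
            buf += chunk
        return buf

    def serve_blocks(backing_path, listen_port=9000):
        """Answer read (0x01) / write (0x02) requests for 512-byte blocks."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", listen_port))
        srv.listen(1)
        conn, _ = srv.accept()
        fd = os.open(backing_path, os.O_RDWR)
        while True:
            op, lba = struct.unpack("!BQ", recv_exact(conn, 9))
            if op == 0x01:                                   # read a block
                conn.sendall(os.pread(fd, BLOCK, lba * BLOCK))
            elif op == 0x02:                                 # write a block
                os.pwrite(fd, recv_exact(conn, BLOCK), lba * BLOCK)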
As for your comments about Gigabit Lans, well that becomes less of an issue than switching.
Ok, these days HP/Compaq/Digital use Fibre Channel for their high-performance systems. However, the price is far from cheap. Last I heard, the NI-based clusters still work very well, and as the network performance increased, so did the remote-mounted disk throughput.
I don't know how well the iSCSI people are doing, but as long as they realise that they need to fix a few other details (a standard network lock protocol would be really cool, to allow two disparate systems to coordinate access), they should do fine.
Re:iSCSI not ready for prime time (Score:1)
All of the problems you cite are not intrinsic characteristics of iSCSI. Simply put, iSCSI IS A SAN, just like Fibre Channel IS A SAN. The transport protocol, frame size, and management systems differ, but they both offer the same fundamental capabilities. They are both switched networks over which an embedded SCSI-like protocol is transferred between hosts and disks. Both allow multiple paths for higher bandwidth. Both allow "zoning". Both allow redundancy and geographical separation.
The lower cost of iSCSI is attractive, while the somewhat lower performance is frustrating. But the killer advantage that iSCSI has over FC is the incredible number of network administrators qualified to run an IP-based network. Hardware costs are not the only costs associated with running a computer system.
Re:iSCSI not ready for prime time (Score:1)
You must be joking.
For the 4 things you've said:
Re:iSCSI not ready for prime time (Score:1)
Can I have it?
Re:iSCSI not ready for prime time (Score:1)
I beg to differ. I built a Fibre Channel storage array with 9x 9.1 GB 10,000 RPM Seagate drives, an HP fibre HBA, and copper cabling (CAT-5e) for $250 (but the enclosure was extra). I designed and built the interface cards for the drives for $9 each, including parts. The drives were $11 each through an eBay seller. The HBA was $20 (I got a package of 2 for $40) and supports fiber (SM and MM) and copper through a GBIC module. I just soldered the CAT-5e cable directly to the PCI board (64-bit 66 MHz, backwards compatible to 32-bit 33 MHz) and ran it out the backplate to the array, which is built on metal shelf pieces from the local hardware store. It looks crazy, works great, and is super fast. The capacity isn't great (80 GB), but $250 is cheap considering that it sustains 100 MB/sec reads and writes (try that on a SCSI drive that costs anywhere less than $500!).
Sure, that isn't an enterprise level setup - I doubt that the sysadmin will sit down to design and troubleshoot fibre interface cards in his spare time, but it does show you that fibre can be cheap.
Speaking of which, does anyone want to buy a Fibre Channel array? I'll throw in a HBA for free. Works great!
Re:But... (Score:2)
Ah, that makes sense.... (Score:1)
Anyway, it makes more sense now, and I can definitely see benefits. What we're talking about here is network-accessible storage with a very low barrier to entry, both in cost and in expertise to set up. In a way it reminds me of the Filer (a 1 TB filespace machine that we used by mounting NFS shares from it) I had at my last job, but much, much less expensive and much, much easier to run.
Interesting stuff, at any rate.
Re:Ah, that makes sense.... (Score:1)
Cheaper until you actually want to make it work! You need Gigabit Ethernet, and you will also need network cards with a TOE (TCP Offload Engine), which makes the network cards just as expensive as Fibre Channel HBAs.
Most excellent news! (Score:3, Insightful)
If everyone had fiber into their homes, I can at the very least see harddrive upgrades without ever opening the box. Wouldn't that be nice, folks?
Possible uses (Score:3, Informative)
One exquisite use would be for someone maintaining a lab: imagine remotely partitioning and ghosting hundreds of computers from a single console through Gigabit Ethernet, or being able to repartition a colocated server.
One aspect that is disappointing is that it just looks like SCSI over IP. None of the peer to peer aspects of Firewire were mentioned, such as target-disk mode [apple.com] that newer Macintoshes support. It's really nice to be able to reboot, hold 't' and plug my laptop into another Mac and have its hard disk appear on the desktop as though it was an external Firewire disk.
Re:Possible uses (Score:2)
I agree, but I really don't find that feature as useful as I thought I would years ago when it first became available. (Back then, it was SCSI target mode instead of FireWire, but it's the same thing.)
Thing is, it requires not one reboot, but two. Now, particularly with OS X 10.2 and Rendezvous, I find myself much more likely to just hook two machines together with an Ethernet patch cable and do an FTP. Since no IP configuration is necessary, it's a quick-and-easy solution. And between any combination of Power Macs and PowerBooks, you're running Gigabit Ethernet. Peppy.
I agree that FireWire target mode is cool and nice, but it's just not that useful, IMO.
Re:Possible uses (Score:2)
You would think it's not that useful until you need to copy data off a drive you can't reach because the OS install is corrupted, or you're locked out of accounts on the machine. At that point, rebooting and pressing 't' and reading the data off the drive using firewire is extremely handy.
what's wrong with smb, nfs, ftp, http? (Score:3, Insightful)
No raw disk IO (Score:3, Insightful)
There are protocols for remote storage, why not use these?
I agree that for most network storage, low-level SAN protocols are pointless - higher-level abstractions of remote disk such as smb/nfs/etc are much better as they enforce proper filesystem semantics, and run on top of a physical filesystem. You get all the advantages of having a filesystem in the first place - locking, sane disk space allocation algorithms, journaling, that sort of thing.
However, some applications - big databases particularly - prefer to have raw access to the storage medium, with no filesystem in the way to slow them down. These applications implement their own locking, sharing and space allocation semantics which are optimized for their own particular storage use patterns.
Classic file sharing protocols don't cut it for these big databases because there's no way to get raw disk access over the network with them. Which is why these lower-level SAN protocols exist - they provide the raw disk access that the big databases want, over a network. This means you can have your database spread over multiple physical locations to minimize the risk of your whole database going up in smoke, without taking the performance hit that running the database over smb/nfs would have.
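To be concrete about what "raw access" means (a sketch of my own, with a hypothetical device name; a real database's I/O layer is obviously far more involved): the database opens the block device itself and does positioned reads and writes at whatever offsets its own space map dictates, with no filesystem in between.

    import os

    DB_BLOCK = 8192     # a typical database block size, used here just for illustration

    # "/dev/sdc" stands in for a LUN handed to the database host over FC or iSCSI.
    fd = os.open("/dev/sdc", os.O_RDWR)

    def read_db_block(block_no):
        # The database's own space map decides which block it wants; no filesystem
        # is doing allocation, caching, or locking on its behalf.
        return os.pread(fd, DB_BLOCK, block_no * DB_BLOCK)

    def write_db_block(block_no, buf):
        assert len(buf) == DB_BLOCK
        os.pwrite(fd, buf, block_no * DB_BLOCK)
        os.fsync(fd)    # a transaction can't commit until the write really hits the disk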
You won't see iSCSI hardware making it into bog-standard file server hardware any time soon, but I can see it being huge in big-iron database servers, where it should be considerably cheaper and easier than Fibre Channel, the current best solution.
Admittedly, there are big questions over whether raw disk access is really necessary for databases - modern general-purpose filesystems are a LOT quicker than they used to be, and MySQL, for instance, which doesn't use raw disk IO but is still blazingly fast, is turning some of the performance assumptions on their head. But the big guys - Oracle, DB2 and so forth - still prefer it, so this is why iSCSI is here.
Re:No raw disk IO (Score:2)
MySQL is fast because it is simple and can lean on the OS disk cache for sequential reads. But industrial-grade databases don't like OS caches, and prefer to maintain their own caches that they can actively manage in accordance with data access patterns - the more complex the SQL you execute in Oracle, the more likely you are to be pulling together data non-sequentially as far as the disk is concerned. So read-ahead and other OS-level filesystem performance optimizations don't help. Oracle knows where the data is (indexes store rowids which can be decoded to block addresses) and can tell the disk where to get it more easily than sending the OS to go get a block offset from the start of a datafile.
If data does not exist in the DB cache it will have to be fetched from the disk, and when data is written it has to get to the disk before the transaction can commit - in both cases an OS cache in between the DB and the disk wastes memory and time. This is the reason for the need for "raw" access to the disk.
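(On Linux, short of a true raw device, the usual way to keep the OS cache out of the loop is O_DIRECT. A minimal sketch, assuming Linux and a made-up device path; the only subtle part is the buffer alignment that O_DIRECT demands:)

    import mmap
    import os

    BLOCK = 4096
    # O_DIRECT asks the kernel not to stage this I/O in the page cache, so the
    # database's own cache is the only cache.  It needs aligned buffers, and an
    # anonymous mmap is conveniently page-aligned.
    fd = os.open("/dev/sdc", os.O_RDONLY | os.O_DIRECT)     # hypothetical data LUN
    buf = mmap.mmap(-1, BLOCK)
    os.preadv(fd, [buf], 10 * BLOCK)     # pull "block 10" straight off the device
    os.close(fd)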
It's all about the bandwidth (Score:1)
Wave of the future (Score:1, Informative)
What iSCSI will allow is a single topology for how information is transferred (persistent storage, transactions, peer-to-peer linkages, etc)
In large datacenters, you currently have Fibrechannel, FICON (a form of Fibrechannel), and SCSI in your mix of persistent storage communications. This is in addition to your already large networks of ethernet, FDDI, ATM, etc.
Each requires its own expensive switches.
What iSCSI will eventually provide is a single fabric for all your data traffic. This will result in a substantial cost savings both in equipment investment AND maintenance, which affects the total cost of ownership and return on investment.
This WILL need a substantial rework on many sides of the IP networks, such as HBAs (Host Bus Adapters) that fully implement the IP stack for performance reasons. Gigabit Ethernet takes a substantial amount of your CPU just doing normal transactional data.
I look forward to the long term implementation of this.
How about internal use? (Score:2)
If it really takes off, how about using iSCSI internally instead of raw SCSI? Then, all your disk interfaces could be the same.
Does the extra hardware for NICs still cost too much? (Last I heard, even raw SCSI was considered too expensive for the consumer market, so I'm probably off my rocker again.)
Apple did it. (Score:2, Funny)
Other SCSI devices? (Score:1)
anytime you read IETF is about ready to approve.. (Score:4, Insightful)
In particular, the fact that The Storage Networking Industry Association has completed its comments on the draft doesn't have any bearing whatsoever on IETF standardization.
Someone mentioned the security issue. I haven't followed the iSCSI discussions, but security is definitely an issue that was identified before the group was formed, and one which is particularly difficult to solve for iSCSI because of performance concerns. I'll be interested to see how they've addressed it. I'd consider it extremely unlikely for the IETF to approve the standard without due consideration of security. And saying "it's going to be behind a firewall, so it doesn't have to be secure" has traditionally not been considered sufficient.
(FWIW, I'm a former IETF area director)
What about patent encumbrances? (Score:2)
Security of iSCSI (Score:3, Interesting)
I hope that iSCSI has good security measures *enabled by default*. I remember some discussion on iSCSI mailing lists about using SRP [stanford.edu] and potential intellectual property problems. I hope it's in the final standard.
Re:Security of iSCSI (Score:2)
Every facility I've worked in for the past 5 years has had a separate network for backup; why wouldn't they all do the same for storage, if iSCSI becomes a positive value proposition?
NFS? (Score:1)
I mean it is a software solution, but it does work.
xiSCSI (Score:1)
anyone have any job openings for a linux-iSCSI-nas programmer??
Re:xiSCSI (Score:1)
Hrm only a half solution (Score:1)
Now, for Joe User this tech is pretty useless: none of the major OSes support a multiple-reader/writer FS on top of a block device. (SGI has one that's part of their FS, but it doesn't look to be part of the Linux port yet; I may be wrong.) Windows definitely doesn't have anything for this out of the box. There are solutions to do it, but they're generally more complicated than it's worth for a small installation, or they require some big external hardware and drivers (EMC's "solution") to redirect the actual block IO of a network mount to a block device (generally FC hardware, or SCSI on some smaller setups; FC is a lot more reliable, though, IMNSHO).

This is all a TCO-reduction movement that doesn't make a whole lot of sense: block devices get sped up by using large buffers wherever you can shove them, microcontrollers are great at doing back-to-back IO, and servers have other things to do. FC has latency issues, as it's really just serial SCSI; you can put hardware on two coasts and make it work, but it's generally not pretty. iSCSI HBAs should be a lot more tolerant of latency.
HyperSCSI (Score:1)
The perfect complement... (Score:1)
Great! Move those noisy drives to the basement! (Score:1)
This tech looks like it might make diskless stations a lot more feasible around the house. Nice!
I agree that maintenance issues are greatly reduced if you can put all the drives into one hot-swappable array somewhere and still get decent performance. The downside is that the drives would probably have to be spinning 24/7 in a lot of cases, which might reduce their service life. Wouldn't be a big deal for something like the Elite-23's, but they are really the exception; most drives probably wouldn't last a year.
Patent (Score:1)
Re:Patent (Score:1)
They're already done. (Score:1)
Hardly. Any company seriously considering shipping a product that supports iSCSI has already been working on it for the past year and a half. I worked on a development team making an iSCSI target and initiator for Linux, and we had to suffer through major, non-backwards compatible draft releases as we tried to make iSCSI work. I guess that's why they have that disclaimer on them saying you're not supposed to use them for anything serious...
Anyway, I don't work there anymore, but I'd imagine there would only be small changes required for them to ship a fully standards-compliant iSCSI product.