Hardware

Fibre Channel For The Masses

Diesel Dave writes: "Fibre Channel is an awesome technology handling serial Gigabit transmission rates of both SCSI and IP over up to 30 meters of 2-pair copper or 10 kilometers of optical cable, with up to 126 hosts or devices per loop. (WOW!) The newest FC runs at 2GHz. That's up to 400 megabytes per second in full duplex mode. The problem, of course, is that FC is normally very expensive. However, many of the hackers out there have noticed large amounts of 1GHz FC equipment being dumped on Ebay for a song. (I purchased new 18GB Barracuda drives for $70 each!) The problem is cabling up those funky 40-pin SCA drives without buying a $3000 8-bay enclosure. After much searching I have just found a company called Cinonic Systems that is making low-cost Fibre Channel drive and cable adapters that work with plain old CAT5 ethernet cable! As far as I'm concerned, firewire, parallel SCSI, and Gigabit ethernet are now dead technologies." It's not all that big a device either -- probably Cinonic is not alone in selling such a thing. Rather cool to connect up hard drives with CAT5, too -- not PITA ribbon cables.
  • Just getting the optical cable installed with its various freaky components drives up the cost in a hurry.

    Perhaps if you had actually looked at the product on the website, you would have known that this has nothing to do with optical cable. This runs over STP Cat 5 cabling.

    However, I do think that people should be more forthcoming in who they are when they flagrantly advertise their own sites.

  • by Anonymous Coward
    Well, it's not quite "plain old CAT5 ethernet cable". The Cinonic page specifies shielded cable, so existing in-wall cable may not work.
  • by BJH ( 11355 )
    As noted in this post [slashdot.org], the original article poster (Diesel Dave) has a psychosis.com mail address; the company mentioned in the article is owned by the same person who owns psychosis.com. I think we can presume that timothy has just been fooled into giving Diesel Dave some free advertising...

  • by James Lanfear ( 34124 ) on Monday February 19, 2001 @04:10AM (#420768)
    $ whois psychosis.com
    ...
    Administrative Contact:
    Cinege, David dcinege@psychosis.com
    100 PerCenta, Notsure Blvd.
    Someplacen, FL 33300
    US
    954-661-7484
    ...
    $ whois cinonic.com
    ...
    Administrative Contact:
    Cinege, David dcinege@psychosis.com
    100 PerCenta, Notsure Blvd.
    Someplacen, FL 33300
    US
    954-661-7484
    ...

    Must have been hard registering the domain last April if he didn't hear about it until now. Interesting address, too.

  • by __aasmho4525 ( 13306 ) on Monday February 19, 2001 @04:11AM (#420769)
    and, to add to your statement, gigabit ethernet and its successors are being driven by entities very interested in *PACKET SWITCHING*, but not necessarily STREAM switching.

    one of the primary design tenets of fibre-channel was to excel at the streaming of data. the consortium's design philosophy was: stream efficiently first, worry about packet-switching later.

    given that, fibre channel is generally considered NOT THE BEST at doing general-purpose packet switching (say IP / (FC-SF or FC-AL)). it's just simply not what it was designed to do.

    saying gigabit ethernet (and all other CSMA/CD over fiber derivatives) are dead is either very ignorant, or is a beautiful example of FUD, and thus quite misleading.

    Peter
  • by SubtleNuance ( 184325 ) on Monday February 19, 2001 @04:12AM (#420770) Journal
    Here is a search for fibre channel [ebay.com] at ebay [ebay.com]

    I cannot find this "dump of 'new 18GB Barracuda drives for $70 each'". At $30 for the FC2-2DB9 [cinonic.com] I might consider it... but right now these 'dumped drives' seem a little vapourous - could our /. editors not be maintaining their usual high level of integrity and *confirming* their stories...

    All I could find even close to what's described above was:
    36 GB IBM FCHDD [ebay.com]
    9 GB Seagate FCHDD [ebay.com]
  • thats pretty damn pathetic. preying on drooling geeks. :-(
  • by BadAsh ( 24704 ) on Monday February 19, 2001 @04:57AM (#420772)
    Let's take a look at network and disk technologies.

    First, disk technologies have been increasing in speed at 2x intervals. First there was plain SCSI (~10MB/s). Then SCSI-2 (~20MB/s). Then fast SCSI (~40MB/s). Then Ultra SCSI (~80MB/s), and now Ultra SCSI 3 (~160MB/s). (I might have misplaced the names of the SCSI technologies, but the idea is the same.)
    Also, let's look at FibreChannel. There was FC-25 (25MB/s), then FC-50 (no commercial organization used this, but it was 50MB/s), and currently, FC-100 is the dominant technology.
    Again, 2x intervals.

    Now let's look at ethernet. It's jumping at 10x intervals. 1Mb/s, 10Mb/s (Ethernet), 100Mb/s (FastEthernet), 1Gb/s (Gigabit, which incidentally is theoretically faster than FibreChannel... 125MB/s), and 10Gb/s is on the way.

    So taking historical scaling into account, is ethernet dead? Yeah right. Now that's not to say that you'll actually ever realize the full bandwidth of any of these technologies. You still have mechanical parts in these drives. Caching and I/O randomness can either help or hurt your performance.
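
    Just to put the parent's numbers side by side, here's a quick sketch in Python. The generation names and nominal rates are the rough figures listed above, not exact specs:

    # Step sizes for the three technology tracks described above.
    # Disk interfaces in MB/s, ethernet in Mb/s -- nominal rates only.
    scsi_mb_s  = [10, 20, 40, 80, 160]       # the ~2x-per-generation track
    fc_mb_s    = [25, 50, 100]               # FC-25, FC-50, FC-100
    ether_mbit = [1, 10, 100, 1000, 10000]   # the ~10x-per-generation track

    for name, steps in [("SCSI", scsi_mb_s), ("FC", fc_mb_s), ("Ethernet", ether_mbit)]:
        ratios = [b / a for a, b in zip(steps, steps[1:])]
        print(name, "step ratios:", ratios)

    # Gigabit ethernet's raw rate in MB/s, versus FC-100's 100 MB/s:
    print("GigE raw MB/s:", 1000 / 8)   # 125.0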

  • Loser? I wouldn't say that. He's got free advertising... something every business wants. I think it's quite funny myself. I wonder how often this has occurred before.
  • by hamjudo ( 64140 ) on Monday February 19, 2001 @04:14AM (#420774) Homepage Journal
    Check out Scheduled Transfer Protocol [sgi.com]. It is a protocol for talking to storage devices over "standard" network hardware. SGI was able to get 790MByte/second over Gigabyte System Network. They have better network hardware than I do...

    STP also works over gigabit ethernet hardware (but only at gigabit speeds). It will probably work over 10gigabit ethernet, when that is available in quantity.

    Why use special disk interface hardware, if network hardware has better bandwidth, latency, and is cheaper?

  • by the_tsi ( 19767 )
    And I bet this guy will want us to use fibre adapters in our linux router project boxes, too.

    (http://www.psychosis.com/linux-router/... dave cinege is one of the main developers)

    -Chris
    ...More Powerful than Otto Preminger...
  • by kelleher ( 29528 ) on Monday February 19, 2001 @05:07AM (#420776) Homepage
    Here [navy.mil] is a quick read on fibre channel including specifications for copper wire, coaxial wire, multi-mode fiber, and single-mode fiber. For more info, check the Fibre Channel Industry Association [fibrechannel.com] site.

    Come on people! Do your homework before you start whining!

  • by gascsd ( 316132 ) on Monday February 19, 2001 @05:08AM (#420777)
    Search on bp6.com for 'Fibre Channel' and you'll come up with an article about a BP6 user that did this months ago. If you want to skip reading the article, go here [tcnj.edu] or here [tcnj.edu]. Then again, they're using ethernet instead of serial, but that's damn close, IMHO.

    Props go to sandin. I've got my qla2100 =)
  • If you look at GigE you'll notice that you can use the exact same Cat 5 cable that works with 100 MBit ethernet. Sure the switches are a little more than 10/100 Switches but they still are much cheaper than Fibre Channel!! GigE card $300 - FC Card $1200, GigE Switch (18 ports) - $7500, 18 port FC Switch >$20,000. It seems to me that FC is the dead technology here.
  • by cyber-vandal ( 148830 ) on Monday February 19, 2001 @05:11AM (#420779) Homepage
    As a fairly long-time Slashdot member (5 digit id), he should really have known that some paranoid/anal geek would figure it out PDQ.
  • So then, Firewire can do 2Gb/s? And those spiffy Firewire switches have been out for a while, too, eh?

    Don't get me wrong, I love my Firewire CDRW. Firewire is cool for consumer level storage and digital video applications. That doesn't change the fact that it doesn't have what it takes for real high performance video and storage. Only a full fabric Fibre Channel solution can really cover that realm right now.

    Oh ... and you might want to check w/Sony on that whole "Apple invented it" comment.
    --
    If your map and the terrain differ,
    trust the terrain.

  • How would you use Fibre Channel with CAT5 cable? I assume that this just reduces the cost of hooking the disks up to one another. My problem is that you still need a Fibre Channel controller card, which run upwards of $1000. So how am I supposed to be saving money? Of course, if anyone knows of a real cheap way to hook up FC drives I'm interested in hearing about it.
  • A Compaq Fibre Array with no reserve!!


    http://cgi.ebay.com/aw-cgi/eBayISAPI.dll?ViewItem&item=1215163380



    Mike
  • by cjsnell ( 5825 ) on Monday February 19, 2001 @05:16AM (#420783) Journal
    Maybe what he meant to say was...

    "I have just founded a company that sells..."

    :-)
  • Comment removed based on user account deletion
  • Fibre Channel is just another name for FireWire (IEEE 1394). Apple invented the technology as FireWire and continues to refer to it by that name. When an industry group (can't remember which) reviewed the technology, they decided "FireWire" sounded too dangerous, so they renamed it "Fibre Channel." Of course Sony calls it "i-Link" for some reason. The technology referred to in this article is just a faster grade of FireWire, as faster grades have been planned ever since the technology was first introduced.
  • You know what? As funny as it sounds at first, you might actually be right. ;-)

    -----
    "People who bite the hand that feeds them usually lick the boot that kicks them"
  • Fibre Channel and FireWire are entirely different things with no relation whatsoever. Apple has nothing to do with Fibre Channel. -B...
  • When native FireWire drives (currently vapor) arrive, I hope you'll reassess. As for now, FireWire isn't the bottleneck. It's the conversion from IDE to FW. -B...
  • FireWire isn't related to Fibre Channel in any way, other than serving some similar needs. Fibre Channel was invented by the large storage companies to be an extremely high speed interface to peripheral devices, without any cost compromise. It is/was intended to replace mainframe "channel" peripheral interfaces. The link technology that Fibre Channel invented was reused by Gigabit Ethernet, with Ethernet framing rather than FC. FireWire was invented by Apple, standardized by IEEE as 1394, and called i.Link by Sony. It was invented as a consumer level, high speed interface with optimizations for audio/video transport. It's much slower than Fibre Channel, and much much less expensive.
  • Yes, the iSCSI protocols from Cisco, IBM and others are going to make a big dent in FC applications.

    It won't dent your network traffic if your core switching is using 10gb Ethernet and QoS protocols. Even a correctly structured 1gb Ethernet switch network would work.

    This ain't 100BaseT anymore.
  • From their web site

    Probably not unless you are creating a very large data center. As nice as hot-swapping sounds in theory, in reality the odds are the operating system will not be able to survive a drive suddenly dying on the bus. FC fares much better than plain SCSI in this regard, but a drive 'starting to die' will still most likely cause problems. Other potential problems exist as well. The odds of hot-swappable drives completely preventing down time are not very high.

    What a load of rot!

    Hot swap drives are invaluable. As JBODs they're not much cop, but if you then use LVM (or similar) to mirror them, then when one goes down you stay up and running because the other half of your mirror is still intact.

    Without hotswap capability, this solution will never see the inside of a datacentre.

    Macka

  • Why is it always "timothy" who posts these brain-dead stories? This fellow is really gullible, or really hasty, one of the two.
  • [disclaimer] I work at the company I'm about to refer to/endorse.[/disclaimer]

    LSI Logic makes FC adapters that support IP over FC in addition to acting as storage device controllers (FCP). There aren't Windows drivers to support IP over FC for the LSI HBAs, but our Linux driver does.

  • by Anonymous Coward on Monday February 19, 2001 @03:44AM (#420794)
    Cinonic is the name of the company selling the devices.

    psychosis.com is the email address domain of the submitter.

    www.cinonic.com and www.psychosis.com have the same IP address.

    Whois data for both domains shows the same individual involved with both.

    Whois: cinonic [networksolutions.com]; for psychosis.com you'll have to type it in yourself [opensrs.org]

    Suspicious or coincidence?
  • It's a ridiculous policy, if you ask me. Cable providers have the bandwidth to spare... they just want you to buy the @Work or equivalent versions of their product so that you can pay 5 times as much for the same connection...

    But that's a rant for another time ;-)

    BRx
    --
    exposing capitalist plots everywhere ;)
  • excellent information.

    thanks for adding it :)

    all my information came from contacts that were at best third hand.

    hearing it from the horse's mouth is the deal :)

    cheers.

    Peter
  • Actually, most FibreChannel implementations these days use copper cabling - the only time you need the fibre physical layer is for a large data centre (discs >10m from servers) or to do hot mirroring to another data center in another building (fibre runs up to several km are possible).

    The main advantages of FC over SCSI are higher bandwidth and the ability to put a lot more spindles on a controller stack to make large RAID systems. Cabling is also simpler (4 pins vs 68). At large scale it is more cost effective than SCSI.

    Even mid market external RAID systems these days are moving from FC/SCSI to pure FC/FC.

  • Can someone please mod the parent of this comment up?
  • by Ed Avis ( 5917 ) <ed@membled.com> on Monday February 19, 2001 @03:52AM (#420799) Homepage
    With 400 megabytes per second, Mozilla might actually load quite fast!
  • I need a shedload of this stuff, RSN.

    Cinonic just fell right off my supplier list for that stunt. 8-(

  • I know barely enough about Fibre Channel to know that I need some pretty soon, and that it's going to cost me plenty.

    So, how does this gadget work ? Is it possible that it can really do so, or is it just a piece of unreliable wet string that you'll curse forever ? If these people can do it so cheaply, why are the real boxes so expensive ?

    Owing to the IP scam, I'm unlikely to ever buy anything from Spamonic, so this is now just idle (but serious) curiosity.

  • by jnik ( 1733 ) on Monday February 19, 2001 @05:21AM (#420802)
    A couple of fibre thingies that have worked for me:
    Transduction [transduction.com] has good enclosures for pretty cheap--they aren't razor-thin, but they work.
    ICP Vortex [icp-vortex.com] makes RAID cards, including Linux support.
    They're both pretty helpful in the CS department, too, but please don't abuse that--enclosure and card are both in the $2000 range.
  • Indeed I have. What I really meant to say was how many have gone undetected.
  • *** disclaimer : I work for QLogic, a major FC player. these are my opinions only, and do not represent those of QLogic Corp. ***

    Well, to be a bit more honest, you can get QLA2200/33 cards (copper) for around $830 retail, 66MHz PCI version for around $950 retail. For the $1200 you quote, you can get a 2202 in copper (dual FC on one card) or a 2200 optical and still have $100 - $200 left over. Hell, you can get a 2300 for around $1300 (2 Gb/s).

    I'll grant you that the switches tend to cost a bit more than GigE, but then they scale better in a storage application, too.

    Bottom line is that neither GigE, nor FC are "dead tech". Both have a long life ahead of 'em in their respective niches: GigE for high speed packet switched networks, FC for storage and video.

    Again, all IMNSHO.
    --
    If your map and the terrain differ,
    trust the terrain.

  • by Raleel ( 30913 ) on Monday February 19, 2001 @04:27AM (#420805)
    It's really sad that they chose to use this route to show off their cool tech. If the first line had read something more like "I have a small company that makes cat5 to fc adapters, and I figured the slashdot crowd would have been interested", I probably would have bought some of their little product. Now, because of their successful attempt to fool slashdot moderators into posting free advertising without labeling it as such, I can only assume that they will attempt to fool me the consumer on other things. Now I _won't_ buy from this company....
  • If gigabit ethernet is faster than FC-100, is there any hope of doing SAN storage on standard gigabit ethernet? In other words, a SAN protocol layer run over gbit ethernet. The host OS only needs the SAN stack and a gigabit card, with storage device(s) featuring gigabit ports. This seems largely what FC does, albeit with expen$ive HBAs and storage.

    In fact, I'd bet that for a lot of uses there'd be little reason to restrict it to just gigabit. Even some new RAID1 systems utilizing "yesterday's" tech (AMI Megaraid, 7200rpm Ultra2 drives) can only exceed the kind of speeds you might see over 100Mbit at the extreme end of utilization. There's little reason to believe that everyday systems couldn't manage just fine with SAN-over-100Mbit Ethernet, especially in a desktop/workstation environment where most cycles are wasted anyway.

    This actually makes me wonder what the transmission technology of FC actually has or does that Gbit ethernet doesn't, besides cost.

    I realize that filesharing stuff like NFS is kind of an abstracted storage-over-ethernet, although it's kind of weak in the sense that there's too much you can't do over NFS and there's way too much overhead. The OS can't "see" it like a raw disk in the same way it might over FC or SCSI.
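
    The core idea in the parent -- a block-access protocol carried over ordinary ethernet/TCP, so a remote host sees something disk-like rather than a file share -- can be sketched in a few lines of Python. This is a hypothetical toy protocol for illustration only; it is not iSCSI, nbd, or any real SAN stack, and it has essentially no error handling:

    import socket
    import struct

    BLOCK_SIZE = 512
    HDR = struct.Struct("!cQ")    # 1-byte op ('R' or 'W') + 64-bit block number

    def _recv_exact(sock, n):
        """Read exactly n bytes from a socket."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed connection")
            buf += chunk
        return buf

    def serve(backing_file, port=9999):
        """Expose backing_file as an array of fixed-size blocks, one client at a time."""
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        with open(backing_file, "r+b") as f:
            while True:
                conn, _ = srv.accept()
                with conn:
                    while True:
                        try:
                            op, blkno = HDR.unpack(_recv_exact(conn, HDR.size))
                        except ConnectionError:
                            break
                        f.seek(blkno * BLOCK_SIZE)
                        if op == b"R":                      # remote read of one block
                            conn.sendall(f.read(BLOCK_SIZE))
                        elif op == b"W":                    # remote write of one block
                            f.write(_recv_exact(conn, BLOCK_SIZE))

    def read_block(host, blkno, port=9999):
        """Client side: fetch one block from the remote 'disk'."""
        with socket.create_connection((host, port)) as s:
            s.sendall(HDR.pack(b"R", blkno))
            return _recv_exact(s, BLOCK_SIZE)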
  • The guy is obviously a fiend... cjsnell said it best here [slashdot.org] =)

    Why buy it when it's easy enough to build if you have fairly decent skills at soldering? The core components you need are:

    QLogic QLA2100 FC card [qlogic.com]
    FC hdd (ST19171FC - 9G [seagate.com] or ST318304FC - 18G [seagate.com]) just two examples
    'T-Cards' (which you can manufacture quite easily)

    More info available from the links in my above post "Looks like they ripped someone off"
  • It is also worth noting that this "business" hosts its web server on a residential cable modem.

    whois 65.33.229.88@arin.net
    [arin.net]
    Road Runner-Southeast (NETBLK-ROADRUNNER-SOUTHEAST)
    13241 Woodland Park Road
    Herndon, VA 20171
    US

    Netname: ROADRUNNER-SOUTHEAST
    Netblock: 65.32.0.0 - 65.35.95.255
    Maintainer: RRSE
    ...
    Name: planw-65-33-229-88.pompano.net
    Address: 65.33.229.88
  • ... and a thousand heads pop as /. readers try to make sense of what was just said. =]
  • Actually, my Yamaha CDRW is SCSI/FW hybrid. At 8x there's no bottleneck.

    My points still hold: (1) FC does 1 Gig and 2 Gig today and (2) FC can be (and often is) configured in a switched configuration. AFAIK IEEE1394 isn't a switched architecture.

    Bottom line: IEEE-1394 is great for consumer external storage, consumer and some pro digital imaging and video, and perhaps digital audio. It doesn't play in the same league as FC, though.
    --
    If your map and the terrain differ,
    trust the terrain.

  • The other issue with Fibre Channel is that it's simply SCSI-3 encapsulated in Fibre Channel packets, so there's a lot of overhead... which makes Fibre not quite the magic bullet that people say. A fibre channel link is only good for maybe 70 MB/sec because of signaling and overhead. But, on the plus side, you can take your fibre channel loop, stick your servers at the ends, and run IP over Fibre Channel - you just need to make sure your cards support it. I don't think any do under linux, but under Sun most do.

    Thanks,
    Matthew J Zito, CCNA
  • by weasel ( 8923 )

    FireWire was based on SCSI-3, but never had anything to do with FC.
  • but after all that "searching"?
    hmmm...
    doubtful.
  • I've been building an FC interconnect for my home office for several months, all for cheap, all off Ebay. This stuff is fast, and a great bang for the buck at Ebay discount prices - $0.10-0.20/dollar. I picked up a T-card from CS Electronics with a CU cable to test drives with ... for one drive this is about the same price as the joker that spam'd us. The only real trick to making this stuff work is that the FC drives are "dead" until you jumper a drive start option - the T-card arrives without any jumpers and doesn't work if you just cable it up without the jumper.

    The problem is that T-cards cost as much as, or a lot more than, most of the drives I got for cheap. So I spent a week and a piece to design PCBs for two passive backplanes - a four HH drive backplane, and a six FH drive backplane. Proto PCBs are a tad expensive, but for 10 drives it's a lot cheaper than a bunch of T-cards.

    When I get done pulling fibre in the house (also cheap off ebay) it will be fun resuming some clustered/SAN filesystem research I've left idle for a few years. Fibre Channel may be dead commercially, but at the current dumping prices it is excellent high speed hobbyist material. I paid $100/ea for my 18GB FC drives off Ebay, and a lot less for the 9 & 4GB drives to build out a really fun JBOD array - and $150-175/ea for the HBAs. This isn't much more expensive than high speed SCSI.
  • I was my company's representative to the ANSI standards committee that wrote the Fiber Channel specification at the time of its creation. Fiber Channel wasn't quite created "to excel at the streaming of data," although that was an important consideration. That was the design philosophy of HIPPI (High Performance Parallel Interface), another standard being written by the same committee at that time. HIPPI's explicit goal was to create a "firehose for data," a goal HIPPI met from the start (less than 2% of its total 800Mb/s bandwidth was consumed by anything other than data). Fiber Channel was designed to allow very high data rates from dedicated devices. Many control mechanisms were included to allow this. That effort also made Fiber Channel slow to start and switch. Fiber Channel owes much to IBM's ideas of mainframe computer I/O channels being very high throughput even if slow to start. In fact, the original proposal came from IBM. One of the things they wanted to get out of making it a standard was cheap commodity disks.

    Peace
    Marty
  • by Anonymous Coward
    As I recall, SCSI-1, SCSI-2, SCSI-3 are revisions of the protocol, and don't necessarily have [much] to do with maximum transfer rate.

    These are the speed steps I can remember:
    SCSI = 5MB/s

    Fast SCSI = 10MB/s
    Fast-Wide SCSI = 20MB/s
    Ultra SCSI = 20MB/s
    Ultra-Wide SCSI = 40MB/s
    Ultra2-Wide SCSI = 80MB/s
    Ultra160-Wide = 160MB/s

    although what might be more relevant would be the time between these "quantum jumps", compared with the improvement rate of fibre-channel technologies.

    Of course, we're just talking about evolutionary improvement of some existing technologies. It's very likely that within the next 10 years, some new technology will come about that will obsolete everything we have now.

    By Obsolete, I include the following conditions:

    • Performance exceeds existing technologies
    • Cost is less than a future evolutionary step of existing technologies of similar performance (cost includes all costs associated with using a given technology -- equipment, media, support)
    Chances are, adopting anything [insert future technology] will necessitate displacing a majority of current investment, even next-gen [Ethernet|SCSI|Fibre-Channel].

    Just try to use an old SCSI-1 (5MB/sec) tape drive on that newfangled Ultra160 SCSI card: your performance will go to crap. That crappy tape drive will block access to the SCSI bus until it's done with a transfer from the host adapter; in the meantime, your superfast hard disks will just be sitting there waiting for the bus.

  • Similar to this guy, I've noticed local vendors selling the 40-pin fibre drives. I have a 40MB/sec raid array going right now and have been looking for a cheap solution to upgrade it. These drives seemed like a dream come true, but digging deeper I found out that, without a very expensive fabric switch, I'd have to settle for arbitrated loop, which means that if the circuitry of a drive went down (more often it's the platters that go, but there's always the off chance the circuitry could), any form of raid array on that loop would be out of commission. I'd already seen a site that showed how to make the 40-pin cables from athlon overclocking devices, but yours looks like a much cleaner solution. Has anybody had the opportunity to plug these drives into a regular ethernet hub to see if they work? I believe hubs work at the data-link layer, so they do some packet analysis for CSMA collision detection; I doubt it'd work, but I'm curious if it does.
  • I've been snagging FC off of ebay for awhile now, and there hasn't been much competition. Now, everything will probably end up costing twice as much. Still, I can't complain too much; I picked up a 10K rpm 36Gig drive for $180 a few weeks ago. :)

    ---GEEK CODE---
    Ver: 3.12
    GCS/S d- s++: a-- C++++ UBCL+++ P+ L++
    W+++ PS+ Y+ R+ b+++ h+(++) r++ y+
  • That sound you just heard is thousands of /. users starting up new browsers to search for 18GB FC drives for $70. Man, if I got a deal like that, I'd resell 'em to my boss for $400 and we'd both be ecstatic.
  • the gigabit won't die. several companies have announced 10Gb fiber.
  • Why doesn't this guy post the specs for the little "T" cards he whipped up? Ah well...
  • by bluGill ( 862 ) on Monday February 19, 2001 @03:53AM (#420823)

    just because previously SCSI was always done with parallel cabling doesn't mean that it has to be done in parallel. The only change in the scsi protocol to go to serial communication is in selecting which drive gets the bus (there is arbitrated loop and fabric, which work differently somehow here) and you get to use a lot more devices on the bus if you want.

    fibre channel can run many protocols. ATM, SCSI, and IP come to mind off hand. Just like you can run IPX and IP on the same cable, you can run IP and SCSI on the same cable. SCSI is a well designed protocol. Separate out the small part relating to drive selection on a parallel cable and you have an excellent serial protocol that is cheap to design (over starting from scratch).

  • Yeah but no specs on those "T" cards yet... Kinda leaves us hanging, doesn't it?
  • by tenzig_112 ( 213387 ) on Monday February 19, 2001 @03:54AM (#420825) Homepage
    Everyone talks about fibre as the Cadillac solution (because it costs about as much as a Caddy per station). But there are a lot more elements to consider than just the drive bay adapters. Just getting the optical cable installed with its various freaky components drives up the cost in a hurry.

    Fibre is also a solution with few big players - and loads of tiny, less-stable providers. I don't want to get stuck on the bleeding edge with a company with a crappy web site [cough. Cinonic. Cough.]

    [I'm sure Cinonic is getting slashdotted right now. And from a quick check of Network Solutions, it seems that the poster has a vested interest in that.]

    don't believe the hype [ridiculopathy.com]

  • Jeez, just goes to show once someone gets free advertising from /. everyone else will try to jump right in.
  • by Anonymous Coward
    I deal with fibre channel storage for a living, and getting the drives is indeed possible to do cheaply; the problem always was the box you gotta stick 'em into... big bucks. And btw, no, you don't gotta assign an IP address to the drive; the drive will have its own unique identifier, a WWN. While it is indeed possible to run IP over fibre channel, most devices run SCSI-3 instead. Look into the fibre channel specs, it's pretty cool stuff. I'm actually not sure if anyone is encapsulating IP over fibre channel these days. The only problem now is shelling out over a grand for a nice jaycorp/emulex/qlogic HBA :)
  • by Jay Maynard ( 54798 ) on Monday February 19, 2001 @03:54AM (#420828) Homepage
    While it looks like Cinonic has handled the drive end of the connection, this doesn't do anything for the host bus adapter end. Fiber Channel HBAs are still pretty expen$ive, especially if you have to add a copper GBIC to them. There's also the issue of drivers for Linux (hey, this is Slashdot, after all); while there are some fiber channel drivers in the tree, there are more out there. Be careful before you drop a lot of bucks on FC drives and adapters to make sure you can get an HBA that you will be able to use for your system.
    --
  • Nothing like free advertising now, is there?
  • It's your id that counts (3790 in your case - so you've been here a little bit longer than me).
  • You'll also find that Sandin won't post specs. The reason is that he is selling home-built T-cards on eBay. However, contacting SierraTechnologies, buying an FCA-3000 "T-Card" from them, and asking for schematics will provide a quick solution for building the same type of connectors several people are now selling on eBay.
  • We're cruising up... What you're looking at here on E-bay is the hottest tech out there at a price Joe Enduser can afford to make his machine purr.

    Frankly, I'm going to be getting on Ebay as fast as I can and hope that Malda doesn't start sucking up these drives for use on the /. servers!

    The problem with capped Karma is it only goes down...

  • He's got a 5-digit slashdot id, how long ago did they run out?
  • i'm going to assume that you were just trolling, but for the masses that are sure to follow you into the depths of ignorance which need an education:
    (and no, i'm not british)

    try here [unh.edu] for some self-help.
  • All the big post-production houses in the world (i.e. the companies who add FX to and edit movies) and companies like SGI [sgi.com] and Discreet [disceet.com] were responsible for pushing SCSI to its limits. They couldn't make it go any faster, so that's where they left it.

    All these companies use fiber channel disk arrays now. It already is pretty mature and as stable as it needs to be. If it wasn't, they wouldn't be using it.

    ----------------------------
  • 18.2GB Fibre Channel: $95
    http://www.pricewatch.com/1/26/2129-1.htm [pricewatch.com]


    Chas - The one, the only.
    THANK GOD!!!

  • correct Discreet link here: http://www.discreet.com [discreet.com]

    ----------------------------
  • Now you guys can all test GFS [sistina.com] (Global Filesystem). It's yet another journaled file system for linux. However, this one is different: it's cluster aware. With this FS you can really use all those disks and FC cards you are buying from E-Bay. You can actually mount the same fs as though it's local on all the machines that can see the drives on your FC-AL or Fabric.
  • SCSI is a well designed protocol.

    I know this is drifting a little off-topic, but I just can't help myself. SCSI has some good points, but it also has some pretty severe warts. Things like disconnect/reconnect and tagged command queuing are good - unless you consider them so obvious and necessary that any interface lacking them is brain-dead. Some aspects of SCSI error reporting are good, such as the way that an error reply can specify exactly which bit in a request caused it to be rejected. Very nice.

    Now for some of the warts. The termination and ID-assignment issues in the original SCSI spec drove many people insane. The speed/width negotiations are still having that effect. The handling of resets still leaves much to be desired, particularly in a multi-initiator environment. Similarly, the way sense data are maintained (or not) sucks rocks in a multi-initiator setup. The lack of AEN support is not really a protocol flaw, but it's annoying enough that I have to mention it anyway. Some of these issues are specific to old-style parallel SCSI, but some others are shared with FC.

    The long and the short of it is that, at a protocol level, SCSI is light-years beyond IDE but still somewhat short of what I'd call a "well designed protocol".

  • i'm assuming that someone would eventually insist that i backed up my statements with some sources.

    the sources i can provide with 5 minutes of research are, sadly, weak, but here goes:

    Brocade [brocadecomm.com], a very highly respected manufacturer of FC switching products, has a discussion about this very topic here [brocadecomm.com].

    also, as someone else already mentioned in another post under this article, a counterpoint as researched by SGI [sgi.com] is here [sgi.com].
    keep in mind that this is still a research project and probably can't be considered ready for prime-time yet, but it shows tremendous promise and validates the counterpoints made almost 10 years ago now quite well.

    whether you agree with these sources or not, the prevailing opinions for years have been both what brocade *AND* sgi state.
    half the camp said "FC is designed for high-bandwidth streaming, ethernet is too laden with baggage", while the other half said "but if we are smart (maybe even tricky) about the way we implement a,b and c, we should be able to make it a moot point."

    so, be your own judge :)

    Peter
  • One important aspect of FC performance is that it is 100MB/s sustained. That's about to double to 200MB/s when the 2Gb FC spec comes out for approval.

    I'm actually looking into creating a *massive* storage area network with a hybrid SAN/NAS architecture, since a "pure" SAN simply costs too much money. Look at the stats yourself:

    1 Single port HBA (card for PC) - $800
    1 16 port *non-blocking* FC switch - $25,000
    1 64 port "director" switch (same RU's as a 6509 approx) - $250,000

    So if you were to wire up, say, an entire row of 1U servers, then you would need, say, 40 * 15 to make the math easy = 600 servers.

    You would need 10 director switches and 600 HBAs... approximately $3M (not including the interconnects), and that's ignoring the cost of the fibre itself, which is saying something. And yes, I know I can use copper, but not over 15 meters, so fiber is the choice. (Plus, it looks really cool in a datacenter.)

    And that is w/o storage! But let's look at that for a moment. A really cool company called Exadrive (not plugging the company here!) makes a 3RU enclosure that takes 24 ATA disks. At today's density that is 2TB. You double the density of the ATA drives, you get 4TB. Quite cool!

    My problem is that I'm trying to do a 500TB system for about 10,000 machines. A pure SAN is technologically feasible, but not for a massive application.

    I'm actually looking into removing most of the switches and the HBAs by using SAN over IP. Cisco makes a product (through acquisition, no surprise) that actually takes the FC information, encapsulates it in IP, and ships it over the existing network. Granted, this is cool, but it could potentially hurt the network.

    But if you're a video-creation house with Avid machines, or a massive real-time database, or some other application that warrants a full SAN, go for it. It's definitely worth the cost. But for my application...?
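
    For what it's worth, the ~$3M figure follows directly from the per-unit prices quoted above (treat them as the poster's rough assumptions, not vendor quotes). A quick back-of-the-envelope check in Python:

    servers        = 600        # 40 * 15, as above
    hba_cost       = 800        # single-port HBA
    director_ports = 64
    director_cost  = 250_000    # 64-port "director" switch

    directors = -(-servers // director_ports)   # ceiling division -> 10 directors
    total = servers * hba_cost + directors * director_cost
    print(f"{directors} directors + {servers} HBAs = ${total:,}")
    # 10 directors + 600 HBAs = $2,980,000, i.e. roughly the $3M quoted, before cabling or storage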
  • by Anonymous Coward
    Fibre channel is only a carrier really; everything has its own address (WWN, world wide name) which is more like a MAC address than an IP.
    So this means that you don't add "stupid" addresses to HDDs :) Unless you run IP over FCP (Fibre Channel Protocol), but then again you probably don't use it to communicate with your storage array.
    The win of Fibre Channel is that you build yourself a storage network and thus can consolidate all your storage to one central point.
    If you use Fibre Channel for all your storage (from server to storage array), why not use it all the way to the HDD? Otherwise you'll have an extra protocol translation.
  • I cannot find this "dump of 'new 18GB Barracuda drives for $70 each'"

    Maybe you should have checked the completed auctions. Click here [ebay.com].

  • Rather poor argument, in that you completely ignored the time scale. The big 10x improvements in ethernet have, for the most part, just had a much slower release schedule over a 2-3x longer product life. The megabit ethernet at Xerox is nearly 40 years old, FC technology is less than 10, and the first SCSI draft is less than 20.

    So if you take REAL historical scaling (i.e. a performance/date plot) into account, all the technologies share the same performance curve, which is dependent upon similar transceiver technology performance.
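
    The parent's point can be made concrete with a compound-growth calculation. The start/end years and rates below are rough assumptions picked purely for illustration, not authoritative history, so treat the output as ballpark only:

    timelines = {
        # technology: (start_year, start_MB_per_s, end_year, end_MB_per_s)
        "Ethernet":     (1980, 1.25, 2001, 125),   # 10 Mb/s -> 1 Gb/s
        "SCSI":         (1986, 5,    2001, 160),   # SCSI-1 -> Ultra160
        "FibreChannel": (1994, 25,   2001, 200),   # FC-25 -> 2 Gb/s FC
    }

    for name, (y0, r0, y1, r1) in timelines.items():
        cagr = (r1 / r0) ** (1 / (y1 - y0)) - 1    # compound annual growth rate
        print(f"{name:13s} ~{cagr:.0%}/year")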
  • No matter what the question is :-)

    The Infiniband [infinibandta.org] 1.0 standard has been published, we may see the first products available by the end of the year (most likely mid to end 2002 with the tech in PCs by 2004 or so).

    IB is endorsed by every company in a position to promote such technology (IBM/Intel/HP/Sun/Q/Cisco/MSFT/Oracle/...). Thanks to such backers, IB is almost guaranteed to become prevalent in server rooms in such volumes that will lead the technology down the food chain.

    I am betting that IB will deal FC a not so quick, not so painless, death.

    The only technology that can stall IB is TCP/IP-based SANs. However, IB has been designed to be almost completely handled by hardware, and even taking TCP offload engines into the picture there is no way SANs will ever be as efficient as IB. Moreover, even if IB were to lose the remote-disks war it will still be used for local interconnects as a PCI/PCI-X replacement, or for clusters as a fast message passing interconnect (against Myrinet,...)

    One thing I am looking forward to is Oracle's SQL/Net running directly on IB with no networking stack, no context switches along send/receives. Mmm, talk about fast response times...

    One thing I am wondering, though, is whether Intel will use IB as their next graphics-card standard post AGP 8x. IMO they would be stupid not to, but IB may be a little late to catch this opportunity.

  • Ok, that looks like a really simple circuit. 60 bucks for something like that is insane. The parts probably cost him all of $10. Doesn't anyone know of a schematic for a T-card?
  • Nope, you're wrong. I was a part of the ANSI committee that created Fiber Channel. It came from IBM, not Apple. The project was started in the late '80s (1988, ISTR).

    Peace
    Marty

  • I'm using a Qlogic QLA2100 without any problem[...]

    The QLA2100 is indeed one of the ones with a driver in the tree. The Compaq 64 bit/66 MHz host bus adapter is another. There may be others, though I don't recall seeing any. I wasn't saying it was impossible, just that you need to be careful what you get.


    Another thing to note is that the stuff on the referenced page is FC over copper *only*. They do not require a GBIC on their board, which lowers the cost - but also removes the ability to use a fiber connection. Your HBA needs either a dedicated copper interface or else a copper GBIC.
    --

  • Gee... and I submit something about Network Solutions selling their database and get rejected. Guess next time I'll turn it into an ad and see if it gets past the submission cabal :)
  • by Webmonger ( 24302 ) on Monday February 19, 2001 @03:25AM (#420850) Homepage
    I think not. We have a lot more than 30 metres to worry about in our network. FibreChannel is pretty nifty tech (esp FibreChannel Fabric) but I can't see running optical cable all through the house. We only installed Cat 5 a year ago. . .
  • SCSI is designed for parallel data; fiber isn't parallel.

    IP is at least serial, but it's really too high a level for these things, isn't it...

    I mean, do we have to assign an IP address to every hdd now? Lots of unneeded headers for IP on hdd's....

    I really don't see the need for this hdd interface type; why not keep networked machines instead of individual devices?

    I wouldn't have one on my machine.
  • I never thought about that before: I have a three letter nick. Who want to start the bidding ? -Simon
  • Wow, talk about free advertising..
  • .coms that go out of business (or companies with really stupid management) are often willing to just give you the stuff. Sounds damned crazy, but I know of a person who got a 20" flat screen monitor from a California energy futures company that just became flush with cash... for FREE!

    Point being that if you ask around, somebody will be handing it out...

    The problem with capped Karma is it only goes down...

  • by zsazsa ( 141679 ) on Monday February 19, 2001 @04:03AM (#420855) Homepage
    Diesel Dave writes: "... After much searching I have just found a company called Cinonic Systems that is making low cost Fibre Channel drive and cable adapters that work with plain old CAT5 ethernet cable! ..."

    Diesel Dave's email address is dave@psychosis.com.

    A WHOIS lookup for cinonic.com:

    Registrant:
    David Cinege
    100 PerCenta, Notsure Blvd.
    Someplacen, FL 33300
    US

    ... and further down ...
    Administrative Contact:
    Cinege, David dcinege@psychosis.com

    I really hope this is either a coincidence or Dave here is just doing the company a favor by registering a domain and hosting it for them after searching so far and wide for them.

    zsazsa
  • Someone please mod the above up - especially since the original poster is named Diesel Dave (i.e. David Cinege himself, presumably). What a loser.

  • The advantage SCSI has over some new protocol is that it is mature and stable. There is nothing in SCSI-3 that is absolutely locked to the transport medium; in fact the specification has been split into parts dealing with the protocol itself and the various types of transport available.

    I wouldnt have one on my machine.

    Good, that means that many more drives for the rest of us.

  • The drawback USB has compared to SCSI and FireWire (which is based on SCSI) is that USB is a centralized design. All data from a USB device passes through the USB controller on your main system bus, effectively limiting your aggregate bandwidth to the bandwidth available on your main system bus. This communication scheme also limits the connectivity of USB devices: they have to pass through a central controller in order to talk to one another. SCSI and FireWire devices all have their own minicontrollers, which enables them to act independently. Since every SCSI device has its own controller, data never has to pass through a central hub, allowing you to potentially have higher aggregate bandwidth than your main system bus. If anything around right now replaces Fibre Channel connections it will most likely be an overclocked version of FireWire. FireWire is for the most part serialized SCSI.
  • I spoke to Sandin concerning this not too long ago. The specs were there, but then he took them down for good reason...I would have done the same thing.

    As for the FCA-3000...I am aware of that component. I went to Sandin's page about 2 minutes after it was posted on bp6.com and memorized it. =)
  • No, SCSI is not wrapped in IP packets in FC-AL. FC can support SCSI or IP but they need not have anything to do with each other.

    Basically, when they say they are doing "SCSI" on FC-AL, all it really means is that the commands, mode pages, and errors have the same format as good-old parallel SCSI. All of the SCSI-2 physical/transport protocol crap (disconnect, reconnect, transfer rate, synchronous, asynchronous) is gone, replaced by the FC-AL physical/transport layers.
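
    To illustrate the point (a hedged sketch, not anything from Cinonic or the FC spec itself): the command is transport-neutral. A SCSI READ(10) CDB is the same ten bytes whether it crosses a parallel SCSI bus or rides inside an FCP information unit on an FC-AL loop; only the framing around it changes.

    import struct

    def read10_cdb(lba, blocks):
        """Build a SCSI READ(10) command descriptor block (10 bytes)."""
        return struct.pack(
            ">BBIBHB",
            0x28,      # READ(10) opcode
            0,         # flags (DPO/FUA etc.)
            lba,       # 32-bit logical block address, big-endian
            0,         # group number
            blocks,    # 16-bit transfer length, in blocks
            0,         # control byte
        )

    cdb = read10_cdb(lba=2048, blocks=8)
    assert len(cdb) == 10
    print(cdb.hex())   # '28000000080000000800' -> same bytes on any SCSI transport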

    Personally, I wouldn't do these without a backplane. Manually cabling up both loops (FC-AL drives have two, redundant loop interfaces, four cables per drive!) is a pain in the arse.

    CP
  • I have no idea where you got this idea from, but its lack of accuracy is fairly profound. FireWire and Fibre Channel aren't even close to compatible. They use different hardware, different protocols, different strategies; they're designed for different uses, sold to different markets.

    FireWire lives somewhere between USB and Fibre Channel, but is not related to either one. It is designed for media devices, consumer disk storage, etc. It's a useful bus for hot-plugging peripherals; a convenient way to attach scanners, cameras, portable storage, and so on. It can transfer data fast enough to avoid frustrating consumers. It's convenient, resilient, and cheap.

    Fibre channel is a streaming system for RAID applications. IP over fibre channel - at least when I was last working on it - is kind of secondary. It's more of a "you get this for free" ability; you don't run fibre channel to everyone's desktop to provide an Internet connection. Fibre channel is for when you have a couple dozen Silicon Graphics boxes, a half terabyte of Barracudas on a rack with a fabric box or two, and you want to edit video without waiting for file copies. It is for streaming massive quantities of data at high speeds. I don't know if this is still true, but it used to be the case that most PC motherboard buses could not supply data as fast as fibre channel could absorb it. This is heavy duty serious stuff.

    I suggest you not get any more ideas from wherever you found this one.

    -Mars
  • i got two more drives from there just a few hours ago =)

    brings a grand total of 27G for $51 (not including the shipping, which is where you get screwed...$12.50/drive)

    ebay has 'em though:
    here [ebay.com]

    pricewatch has 'em too:
    here [pricewatch.com]

    or, one more place :
    here [supersellers.net]
  • Has anyone released physical benchmarks of the cost/performance savings from using this method? I'd really like to see what the FC2-2DB9 Interface Adapter looks like as well. If this does turn out to be a viable alternative, I'd love to see how Adaptec and the other SCSI manufacturers would compete with this. - d
  • Oh my. [ebay.com] I'll take a dozen^H^H^H^H^Hhundred...

    --
