Hardware

The Vanishing Desktop (64 comments)

BonThomme writes: "/.'s post on Mobility is missing the cool story. The real news is a company called 2cComputing that has licensed SplitBridge and LongView technology from Mobility and Avocent (formerly Cybex + Apex) respectively to create this. Their C-Link product will run at 1.3 Gbps over Cat5, bridging up to 100 meters between the CPU chassis and the Cstation, which houses a bunch of USB ports and connections for I/O devices on the user's desktop. Meaning, all the CPUs are co-located for admin via KVM, and are much cheaper to wire together for gEthernet or FibreChannel. Best of all, you don't have to pull cable through the whole building (again)."
This discussion has been archived. No new comments can be posted.

  • OK, here's a potential application for handling PCI over CStation. Imagine a DVD setup that uses the main CPU in a room separate from the viewing area---in order to reduce the noise level. The CStation technology could connect this unit to a small box containing a decoder card and a remote control receiver.

    Now that might seem a little inconvenient, having to go to a different room to pop in the DVD, but maybe it's just a sound-dampened closet or something similar.

    Besides, maybe in a couple of years I'll have centralized mass storage for all my multimedia: music, movies, etc. After all, we have that now for MP3s.

  • It's a KVM switch that uses ethernet... As a previous poster pointed out, what is so good about having multiple things to fail? I'd much rather use real PCs...
  • It is not one PCI backplane; it is a PCI extender, converting PCI into a network link.
  • you don't have to pull cable through the whole building (again)

    I'm not so sure about that. My PC at work doesn't have a direct connection to the computer room (the logical place you'd put all these PCs). I go through a hub, a bridge, and maybe a switch on the other side. Unless C-Link is layered over Ethernet, it's probably not compatible with your network topology.

    Black Box [blackbox.com] sells a similar device called the ServSwitch Multi. You put a PCI card in the computer that connects to the KVM switch via Cat5 (not sure whether it uses Ethernet). You use a combination of serial, AT, PS/2, and VGA to connect the K, V, and M to the switch. I'm sure the Black Box engineers are working on a USB version.

    I evaluated the ServSwitch Multi, but rejected it because it's wicked expensive. We went with a ServSwitch Matrix, which is simpler and cheaper.

  • I don't know about you, but the goals of IT managers and (l)users seldom are the same.

    Agreed. And a setup like this solves nothing when it comes to sharing resources (read: making PCs exchange data with each other), and it seems to me incredibly stupid to waste 1Gbps to send interrupts from a PCI card on a Cstation or raw ATAPI data around when you can use it to transfer much more meaningful data.

    It doesn't make sense even if we take it from the point of view of an administrator who has to install software/patches locally: if you have a multi-user system on it, just log in to that machine and do what you have to do (telnet, X protocol, ICA, whatever). If you have a single user system on it, he has to go to the Cstation anyway, since keyboard and display "are there".

    Am I missing something?

  • And just how many people will accept sluggish redraws of the screen?

    If there is anything about a "working" system that generates a tech call (internal or external), it's sluggish response on screen. For most users this is all they can identify... I move the mouse but the pointer stutters along a second later. On top of that, the storage and CPU are remote, so they have absolutely no indication anymore whether one of them is involved or not... guess what they are going to blame.

    So it's not going to solve any of your problems, it's only going to create more of the existing ones. Nice! Hey, how about Photoshop or 3DSMax on a term like that????
  • This sounds like a variation on the Thin Client model. Hopefully it runs a little smoother than Citrix MetaFrame. The MetaFrame installation I use is so unreliable as to be useless. With the money we wasted on staff sitting on their thumbs, I could have bought a round of 1GHz PCs for everyone. The system was dead for a month?!?! after install due to driver conflicts. $15/hr x 160hrs = $2400. That does not include the cost of those stupid terminals or the $ sent to Citrix and M$croshaft.

    Someone needs to Recycle all these crappy TCs into something worth the effort.

  • by Anonymous Coward
    Me: "Yeah, uh, hello?"
    IT Guy: "What's up?"
    Me: "Could you reset my machine again? I locked it up."
    IT Guy: "Again? That's the third time today!"
    Me: "Well, I am writing Win32 code here..."
    IT Guy: "Oh, okay..."
    Me: "Oh, and while you're at it, could you look in the box and tell me what brand of NIC is in there?"
    IT Guy: "Hey, I haven't got all day to work on your machine, pal..."

    Not that I don't trust my IT guy, but I feel more capable of managing my own system locally, thank you very much.

  • I don't see what's so ground breaking about this. It's just a KVM switch with a port replicator. We use a system like this for our servers now. If this is what I'm gathering then it's just like having a USB hub, mouse, monitor and keyboard cords that are REALLY long. A KVM switch.

    The only "management" of the desktops sounds like its in the form keeping them nice and frosty and UPSed in the server room. Other than than SOS (same ol' $#!+).

  • Let's see... Have the processing power located in a central system, then export the displays out to multiple client machines.

    Hmmm...

    Take away the high speed aspect of this plan, and I'll be damned if this doesn't sound like a typical X-Terminal scenario. Am I missing something here? High speed X interaction isn't that exciting.

    I MUST be missing something...
  • I'm one of those MIS guys, and I support an office that does something like this using the Longview stuff mentioned in the article. PCs are in a rack down the hall, keyboards and LCD displays in work areas.

    The office? A university radio station. They can't have any noise in the broadcast booths.

    I agree though, that putting more of the PC at the interface end is a bit silly. Still, if the radio station wanted a live webcam in the studio or some such thing, this would make for a nice solution.
  • Actually, TCO should go up not down..

    For every PC you'd have to run 2 runs of cat5, one for the C Station, and one for the network.

    Granted, I guess since all/most of the PC chassis would be in one room the network runs would be shorter, but still, why complicate the cabling infrastructure?

  • "And operating systems like Windows NT compound the problem by not being easily remotely accessible." Yeah, NT isn't easily remotely accessible for users who are AUTHORIZED to access it. :^P
  • Or for anyone running Netscape under X
  • Since when does sarcasm get modded down "just because"?
  • by jetson123 ( 13128 ) on Thursday October 05, 2000 @06:24AM (#730189)
    The continued reliance of PC hardware on those outdated technologies for booting and installation then entails thousands of dollars in KVM switches. And operating systems like Windows NT compound the problem by not being easily remotely accessible.

    Also, if you can run the bus over hundreds of feet of CAT 5 networks, that suggests that the bus is probably not running as fast as it should.

    Save some money and get hardware and software that are designed for remote accessibility.

  • Our product does provide a remote reset button for systems that provide MB access to the power/reset logic. Most systems with the feature "sleep on power off" or "hold power button down for 5 secs before powering off" support our remote reset. (Many older systems without this feature can support it as well.) We poll between the card and Cstation on a direct current pulse, so even locked-up systems should be able to be rebooted remotely. Feel free to email me direct with further questions. Thanks in advance, Chandler Hall Chandler@2ccomputing.com
  • Well, we showed our beta product at Networld+Interop and the editors of both Network Computing and Internet week selected this product as a finalist for best of show. We didn't win, but feel very honored and excited that our product received this recognition out of the 300+ submissions for best of show. It is a working product in beta deployment now, so you should be able to judge for yourself in the next few months. We'll be at ITExpo later this month as well as Comdex in November. Look us up! Thanks, Chandler Chandler@2ccomputing.com
  • Two more... ...dramatically reducing the cost of recabling the entire building for fChannel, and the ease of switching someone over to a backup machine when their primary fails.
  • Actually, the "other" attempts that have met with limited success are more like the "return to mainframe days". Unlike Thin Client devices, NetPCs, NCs, XWindows, or JavaTerms, our product doesn't take away the end-users' PC, nor does it put all users back on one single server like the 3270/IBM 360 days (or VT100's on a VAX/PDP). Yes, there continue to be attempts at solving IT's big problems with "look it's not a PC" and they continue to fall short of expectations. Bottom line, the meat of the industry R&D still targets the PC platform. We're providing a product that embraces the past 20 years of this movement, BUT provides as many of the "mainframe" or "server-centric computing" benefits as possible without sacrificing compatibility or the end-users' experience/peripherals. (And yes, it uses one single CAT5 cable, NOT 2.) Feel free to email direct with comments or questions, Chandler Chandler@2ccomputing.com
  • I would consider them a close competitor because of the similar benefits derived from centralizing all the PCs in an organization. However, we support the complete range of user-interface peripherals such as printers, scanners, floppies, cdrom, cdr/w, PDA cradles, zip drives, etc. I understand they only provide keyboard, video, mouse... similar to analog KVM extension. Also, they require the purchase of both the end-user appliance and the PC. We can work with existing PCs. Feel free to email directly with further questions. Chandler Chandler@2ccomputing.com
  • Well, the majority of businesses in the world do have CAT5 already installed. This really isn't a technology for solving CAT5 wiring issues, but it can provide dramatic cost reductions for deploying fiber to the PC. (Think SAN-based farms for every C: drive on the net. Better utilization of storage costs, RAID protected and centralized backups... trade-off is it requires a very fast dedicated pipe to each PC... generally means run fiber throughout your building INSTEAD of the CAT5 already installed in most sites.) IF you have CAT5 installed already, our product allows you to centralize the chassis within a few feet of your fibre-channel switch. Now deploying fiber to every PC involves only the NIC and a short fiber patch cable. No need to run 100's of feet to every cubicle and office, thus eliminating the costly labor as well as dramatically reducing the overall fiber lengths required. While most users don't "NEED" a high-powered PC, most users don't accept anything less. Have you used a thin client winterm (256 color, no multimedia/video, no internet messaging, slow internet browsing experience, etc.)? NCs, NetPCs, Thin Clients, etc. have serious user resistance due to limited compatibility and training issues. While all these models offer unique benefits, we feel ours provide many of the same ones AND still uses a PC. Feel free to comment directly via email, Chandler Chandler@2ccomputing.com
  • Granted, our product doesn't change the fact that it is still a PC. IF your facility can choose to do away with PCs throughout your organization and centralize on NCs, Java, Xwindows, etc., then you don't need this. However, most of these platforms have very limited acceptance compared to PCs. Even Thin Clients for Windows (sales doubling every year) still total less than 1% of the PC sales for last year. I agree that it is silly to waste 1Gbps... which is what every CAT5 cable out there is doing when it's running only 100Mbps. :-) Seriously, the CAT5 cable in the cubicle is dedicated to that PC and it's used to run 100Mbps. Why not run our C-Link protocol over that same wire, move the PC to a central location and now run fibre with a 6 foot drop cable over to the co-located fibre-channel switch? NOW, you can consider centralizing those PC resources to a single or few SAN farms. RAID protected, centralized backups, and it eliminates the loss of data and downtime due to hard drive failure in the PC. (Replace with similar box, data & apps already on the "C:" drive on the SAN farm... configure, boot and go.) It only requires a fat pipe, and we help reduce that cost... Lots of IT shops are asking for SAN to the desktop PC. BTW, administration doesn't have to go to the end-user; it can be done local to the chassis with another Cstation or KVM switches, or remotely via software such as PCanywhere. However, the USER can do other work, now that disruption in the office is eliminated when maintenance or upgrades are required.
  • We are working with Gartner to analyze the cost savings per PC due to reductions in TCO, asset protection, centralized maintenance, elimination of cubicle-to-cubicle administration (& user disruption), etc. We are comfortable stating that it will pay for itself in the first year of deployment in mid-to-large corporations. Feel free to email directly with further questions. Chandler Chandler@2ccomputing.com
  • TCO reductions are in various areas such as reducing the expense of going to every cubicle for upgrades or maintenance. Also, in protecting the assets (no 64MB of RAM getting exchanged with a home PC stick of 16MB or exchanging slower speed CPUs, etc.) and preventing unauthorized end-user intrusion or s/w installations. (You don't have to provide them a CD if you don't choose to.) It also eliminates the cost of end-user disruption (time not spent doing other work due to maintenance being performed in their cubicle) and reduces downtime due to easier access (PCs on pull-out shelves instead of crammed under desks, and less cabling to remove... VGA, printer, serial, PS/2, etc. is at the desk instead of the chassis). Our product is most useful to customers that already have CAT5 installed in their building (estimates range from 90%-95%). However, if you're going to run Fiber to every PC, then we do feel this simplifies the infrastructure, not complicates it. Now it consists of short drop cables and existing CAT5, instead of re-wiring the whole building. Feel free to email directly, Chandler Chandler@2ccomputing.com
  • You're right on with that example. Of course, a DVD player with a USB attachment can be co-located right by the Cstation, so there's no need to go to a different room to pop in the DVD. Feel free to email me directly with additional comments. Chandler Chandler@2ccomputing.com
  • Yes, before 2C I worked closely with Citrix since 1996... before they made everyone's radar. :-) Did you know, however, that 85% of thin client licenses sold by Citrix (according to them and Gartner Group) are used on PCs? IE, the "device" of choice for running "Server-Centric Computing" software from Microsoft and Citrix continues to be a PC, NOT a thin client device such as a Wyse Winterminal. That's why Citrix changed the name of their model from Thin Client software to Server Centric Computing. IE, they don't want the benefits of their model (which are great) confused with "it requires a thin client device". They sell the benefit of heterogeneous, remote-access-at-very-low-speeds, single application hosting or publishing... whatever the device. I personally believe many corporations can benefit from Server-centric computing... BUT as the market has shown, a PC is still the preferred seat for accessing that mission-critical POS application via Citrix or MSTS. Our product provides many similar benefits of the thin client device without taking away the PC compatibility, functionality, peripherals, and distributed "standalone" capability. I believe it's the perfect companion for Citrix, as I've used three generations of thin client devices so far and I wouldn't be happy with being forced to use one. (No internet messaging, limited color, no standard multimedia/audio/video support, poor browsing experience, etc.)
  • So far, we really haven't seen an issue that "confuses" the admin as to the problem. Either our product works (IE, the system boots) or it doesn't. After that, configuration issues are the same as a non-modified PC. The applications range from hostile environments, corporate IT shops disappointed with the limitations of thin client devices (or resistance from end-users), clean-room (fabs) or quiet-room (NLE post-production) environments, maintenance-free zones (kiosks, video slot machines, atms), or asset-protection-required sites (hotel rooms, corporations where RAM goes walk-about, etc.). It's a solution that both IT managers AND their end-users can appreciate. NCs, Thin Clients, NetPCs, JavaTerminals, etc. don't provide a real PC. There are 100+ million PC users out there that won't accept having their PC taken away from them. We just move the chassis, providing many of the same benefits as these other devices without taking away the personal computing experience, compatibility, peripherals and features from the end-users. Feel free to email me directly with further comments. Chandler Chandler@2ccomputing.com
  • Well, what you're missing I guess is that an Xterminal isn't a native 100% compatible PC that can support the applications and h/w for that platform. I've used Xterms, WinTerms, JavaTerms, Ascii Terms (VT100 :-) and 3270 terms. I've also used their s/w emulations on PCs (yuck, most of the time). I could use them, but the X,000's of the typical OA users in my previous company couldn't... or wouldn't. IE, they are trained to use basic OA applications that are Windows-based. The industry continues to attempt deployment of non-Windows solutions for these types of users with extremely limited success. On paper they look fine; in reality there is serious user resistance, costs in retraining, development and on-going maintenance of custom software, etc. (All well-documented issues for going "non-standard" over time.) Our product is essentially just another PC... with a large distance between the user and the chassis. However, we do provide many "centralized" benefits while maintaining 100% compatibility with the distributed PC paradigm of the past 15-20 years. Feel free to email comments or questions directly, Chandler@2ccomputing.com
  • It's a 1-to-1 relationship between a Cstation and the PC chassis. IE, for every Cstation there must be a PC. Think of it as a peripheral for a PC, just like a NIC card or something. Performance is no different to most users, though some high-end AGP graphics-intensive users might notice a slower FPS. Video and multimedia runs at full speed and I personally know that Quake III appears no different. :-) So, there is one user on each PCI backplane, just like a regular PC. We are extending the bus over a single CAT5 cable using a digital transmission protocol we call C-Link to allow centralization of corporate/federal PC resources. Feel free to email questions or comments directly, Chandler Chandler@2ccomputing.com
  • You've essentially created a cheap version of analog KVM extension. Our technology is a digital extension of the PCI bus, not just the analog video, keyboard & mouse commands. The difference? Now, you can have peripherals at full speed right next to you. IE, you don't have to go to the closet to put in a new CD or floppy. Feel free to comment directly to: Chandler@2ccomputing.com
  • With USB, you can have the floppy right next to you. So you don't have to call the IT guy for that or CD or scanner, etc. We also have a "non-OS" reset button so the vast majority of system lock-ups don't require an admin's help either. In our target market, the admins will take the occasional "reboot" responsibility over being responsible for fixing the box after an end-user has gone inside it, changed it, loaded unauthorized s/w, stolen some RAM, spilt some coffee inside and then called for a replacement PC. :-) So sorry, you'll have to get back to work... can't blame IT for this delay. :-)
  • an NC connected to an application terminal server mainframe made out of a cluster of inexpensive PC's.
  • Could someone explain the benefit of "stretching" a PC to make the technology now be in two places instead of one? So now an MIS guy has to figure out whether the problem is in the desktop or in the CPU in the closet? And how many organizations have closets big enough to keep equipment for all of their employees instead of using desk space?

    Complete machines and/or thin clients and servers are still hard to beat.

    (the dog ate my cute, witty tag line)
  • No, no, that's not meant to be a bad pun. It's the marketing droid wording in the linked document. Maybe I'm just tired, but it's almost all Greek to me. What are they saying exactly, in plain English? It sounds vaguely as if special servers are communicating at very high speed over "cat-5" cabling, using special adapter cards, and are meant to work with semi-dumb terminals hooked up to them over short distances for userland tasks.

    Is that wrong? It doubtless is, but geez. Why can't these people ever just be plain and simple? Is it so hard?

  • and changes.. not really that ingenious of an idea, but it's interesting to see some of the cycles of computing:
    • large mainframe, bunch of little terms
    • users have their own PCs
    • back to simple terminal running on remote server. see the fairly recent Sun Ray [sun.com]
    • now the 2C Computing hybrid
  • And this would bring my dream of a wired world closer to reality. When the meat of the computer can be centrally located, we gain the ability to provide access to files from anywhere on the network. This also allows content to be distributed by better means.

    Such as this: You have a computer on your desk, but you can't haul it around. Rather than loading the data onto a laptop, you could just go to where you planned on going and log in there.

    Some will argue about the lack of security involved with this. 'How easy it would be for the police to snoop through one's files!' or 'What, then, of anonymity?' And to this, I do not know.

    As I see this tech, it still means you need a 1 to 1 correspondence between the front and back ends. This means it isn't the technology you would want to use for a cheap urban network.

    Hell. Anyways, just ignore the ramblings, for I am one of those romantic technologists who believed that revolution would be brought with computing technology.
  • thanks to cheap PS/2 extension cables and a pair of 8 meter 13w3 video cables from UltraSpec. Did it mainly for space reasons in my cramped office, but it also keeps down the heat and noise from my Octane and Indigo2. Each has a ~750 watt power supply (ever seen the huge thick jumper cable connecting the GIO-64 backplane in the Indigo2 Impact to the power supply? scary!).
  • So, according to the press release, they're saying you can lower cost of ownership by putting all the gubbins for your PCs in the computer room and just have I/O on the desk. Have they heard of CITRIX??
  • Talk about a system for the big, slow and obtuse IT department.

    Problem 1: Instead of finding a more efficient way to provide services to the user and at the same time manage their desktop, they do this. What's wrong with thin X servers? Better yet, how about throwing all those resources at creating businessware for Plan9?

    Problem 2: The mainframe guys fell out of favor because of their iron grip on IT resources. They couldn't adapt or change direction fast enough to let smaller depts tailor systems to their business needs. (Not to mention the huge overhead costs they get hit with -- "hey let's make the IT dept a profit center!") Now the LAN guys are heading in the same direction. I wonder what the guys who actually bring in the money will use to circumvent them. Pilots and WinCE maybe?

    Problem 3: This is supposed to make PCs more reliable how? We already have buggy software and buggy systems. Now we're going to transmit those internal signals over a wire. Big iron has scads more reliability than a PC ever will. Mainframes have processors just to watch the processors. This Mobility system is just a way to build a poor man's mainframe without any of the reliability. I'm not pushing mainframes, but if you're going to build an empire, you might want to do it right and not half-ass.

    This rant applies to WinFrame/MetaFrame/Terminal Services too.

  • What I get from the article is that this is a bit like putting a really long cable on your keyboard, mouse and monitor, and then hiding your PC away in a cupboard with a hole in the door for the cables...

    Granted, this thing has a floppy port on it, has only one cable and has a 'protocol' to send all the signals down to the PC, but is this really all that great? I can put my PC under my desk even more easily, and have my desktop free...

  • I agree with this. It also doesn't remove the problem of the wire-based network (which, incidentally, is preventing thousands of square feet of UK office space from being used because of the difficulty of laying Cat5 in them).

    IMHO any company considering this system would be better advised to go for a thin client XWindows system - it's more established, more stable and easier to implement.

    Anyway, with the majority of office users simply using office software, I can see a day when we return to a central server/local terminal system. Most users don't need a high powered PC for their work, and could share with others across the company. A wireless version of this would result in a very simplified system for IT managers.

    Of course, there are problems with this idea (a single point of failure, for example), but I'm just thinking out loud ....
  • by scott@b ( 124781 ) on Thursday October 05, 2000 @03:59AM (#730216)
    This looks like yet another "serial backplane" technology. It's not just the slow I/O on long cables, or an X server (which still takes smarts at the user interface end). Chop the bus into two sections. One is CPU, RAM, and disk I/O; the other is the PCI slots, including the normal slow I/O (serial, parallel, USB) and the video card.

    Now stick fast parallel-to-serial converters on the chopped ends of the bus and run the serial through LVDM drivers. In the case of C-Link they may be doing a multi-level modulation scheme to get several bits into every symbol (bits vs. bauds, right?)

    So the PC with its disks and RAM sits in a locked up, air conditioned room where the cleaning crew can't bang into it, and the just-fired employee can't give it a swift kick.

    On the desktop is the other end of the bus, a box with PCI slots and the standard PCI-interfacing I/O chips for slow I/O. No smarts, just the serial-to-parallel converters and a PCI interface, plus whatever cards you stick into the local backplane.

    Do a little math: take the width of the PCI bus in total signals - data + address + some handshaking - and divide that into the 1.3 Gbps of the serial interface. That's the distant PCI bus speed in bus cycles per second. Now, the CPU, RAM, and disk are all on the standard full-tilt bus so they run fast; keyboards and mice and serial ports aren't going to notice the reduced bus speed, it's just the video that might suffer.
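    A rough sketch of that arithmetic, with signal counts that are purely an assumption on my part (roughly 40 lines approximating a 32-bit PCI bus, roughly 130 a fully populated 64-bit one - the latter reproduces the 10M cycles/sec and ~80 MB/s figures quoted in a follow-up comment further down):

```python
# Back-of-envelope: divide the serial link rate by the number of bus
# signals that must cross it each cycle. Signal counts and widths are
# illustrative assumptions, not figures published by 2C Computing.
LINK_RATE_BPS = 1.3e9          # claimed C-Link rate over Cat5

def remote_pci_estimate(total_signals, bytes_per_cycle):
    """Serialize every bus signal once per cycle; return (cycles/s, bytes/s)."""
    cycles_per_sec = LINK_RATE_BPS / total_signals
    return cycles_per_sec, cycles_per_sec * bytes_per_cycle

for signals, width in [(40, 4), (130, 8)]:     # ~32-bit vs. ~64-bit PCI
    cycles, bandwidth = remote_pci_estimate(signals, width)
    print(f"{signals:>3} signals, {width} B/cycle: "
          f"{cycles / 1e6:5.1f} M cycles/s, {bandwidth / 1e6:6.1f} MB/s")
```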

  • by Andy_R ( 114137 ) on Thursday October 05, 2000 @01:36AM (#730217) Homepage Journal
    My computer's just asked for a different floppy, I'll be back in about 10 minutes...
  • Doesn't Clearcube [clearcube.com] already have a product similar to this?
  • I'll probably get modded down for saying this, but what gives with all of the vocal gripe posts that surround various bits of new tech? This tech has a use model seemingly identical to existing KVMs, with some added features: some people/installations need this for a variety of reasons. Given that there exists a market for KVM extenders, maybe the gripers should get informed about the intended market instead of whining "I don't need this! It sucks!" or "No one needs this! It sucks!"

    [Ob. Actual Content:] Here are a couple of examples of this tech's use:

    1. Security: Physical system security needs may make it desirable to not have the CPU case located with the user station.
    2. Noise: Disk and fan noise suck, especially for professional/project music studios, or even for general working environments.
    Note that X/M$ Terminal Server/etc. aren't applicable for the uses of a KVM! All such solutions require a system (even if only a thin client) at the user location -- and none work well (or at all) in a multi-platform environment.

  • by theinfobox ( 188897 ) on Thursday October 05, 2000 @04:00AM (#730220) Homepage Journal
    We use the Cybex Extenders, which are basically the same thing as this. The advantage for us is that it clears up some needed room. We have an office in which there are 5 people working together (stock market traders). Because of their type of work, they have 3 PCs each. As you can imagine, that would make their workspace quite crowded. It also generates a lot of heat. We use the Extenders to locate their PCs in a nearby closet. This provides better security, and the closet has a separate A/C duct. It is also a lot quieter. Now, this only works because these users don't need floppies, CD-ROMs, etc. They have been reliable so far. Just plug them in and forget them.

    Will it solve everyone's problems... No. No IT product ever does. But this is useful in certain situations. I even took a set home. I used to be jealous of the quiet iMacs. Well now, I have the ultimate quiet computer. I put my PC in the garage and used the Extender to connect it to my bedroom. Now my wife doesn't care if the PC is left on all night.

    Another problem though is the cost. The last time I checked, the Extenders were about $400. I wouldn't buy them myself at that price... but spending company money, I didn't mind. :)

  • Here we go again.

    Centralized. Distributed. Centralized. Distributed. Centralized.

    Mainframes. PCs. Thin clients. Fat clients. Whatever they call this new one.

    You get the picture.

    Tim Somero

  • Three reasons: noise, dust and heat. I don't think this needs explaining.
  • Seriously, in a library, you could have the monitor, keyboard, mouse, and then have it all routed to a horde of adequately networked SMP machines by this thing.

    IMHO, this thing has no place in standalone systems (unless you want to run up to the attic and code away while still connected to your computer). I like having my monstrous full-tower right beside the monitor, opened up in all its glory.

  • (I don't think the idea is that great. However, when I think of the number of drives I've seen damaged by people kicking the box the drive is mounted in ...)

    The effective bus speed for writes to video is still 10M bus-cycles/sec (80 Mbytes/sec) and might be faster depending on how smart the parallel/serial conversions are. That's not too slow so long as you're not doing a full video RAM rewrite. Remember that the video card itself is at the desktop, so stuff done in its memory is full speed, as is the video output to the display. Moving the mouse takes very little bandwidth and shouldn't be impacted.

    Note that this is different than X servers, in that you're not jumping between client and server to get things done. The CPU just writes into the video card, with the limitation of the 80 Mbyte/sec or so of bandwidth. The same video drivers work with and without the serial link between the CPU and the display card.

    I got the impression that the link is per machine-desktop, not shared as standard (Ethernet) networks are. If so, you've got full bandwidth to your desktop rather than the slice of a network you normally get. And, replying to those who said (more or less) "great, you'd need another set of cables" - with this you'd not have network connections to your desktop; those stay back in the sealed-off room with the CPU, RAM, and disks. The best thing about this product is that it might help fight shared main/video memory, as grabbing pixels from RAM to stick on the screen would place a large load on the serial interface.
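    A quick sanity check of the "full video RAM rewrite" caveat, using screen modes picked here as illustrative for the era (not anything the product specifies): at ~80 MB/s of CPU-to-video bandwidth, complete frame-buffer rewrites top out at a handful per second, while ordinary 2D updates barely touch the link.

```python
# How many complete frame buffers fit through ~80 MB/s of CPU -> remote
# video card bandwidth? Resolutions/depths below are assumed examples.
LINK_BW = 80e6   # bytes/sec available for writes across the extended bus

for w, h, bpp in [(1024, 768, 16), (1024, 768, 32), (1600, 1200, 32)]:
    frame_bytes = w * h * bpp // 8
    rewrites_per_sec = LINK_BW / frame_bytes
    print(f"{w}x{h} @ {bpp} bpp: {frame_bytes / 1e6:4.1f} MB/frame -> "
          f"{rewrites_per_sec:4.1f} full rewrites/sec")
```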

  • Sound, Video, Other Joe Shmoe Lovelies.

    The management is really where the Joe Shmoe CPU and storage are.

    And the idea of revolutionary blah blah blah - reinventing the terminal is not bad. Terminals used to have printers, tablets, digital input and lots of other stuff. Take a look at an old Tektronix catalogue for examples.

    So all the stuff is well known. Just the speeds are new.
  • This person is NOT testing our product. I work for 2C Computing and can definitively state he is "out to lunch"... or maybe "out 2 C". :-) Seriously, just found out we were mentioned here and haven't had time to post any followups to other questions. Would anyone want to hear more? Thanks in advance, Chandler@2ccomputing.com
  • I think what you are missing is that this is not a multi-user system, nor is it a software-based solution such as X-Windows. It is, simply put, an extension of the PCI bus. The magic is getting all that data down CAT-5 cable to your office and reproducing the bus there.
  • This would truly rock,
    play Quake III on the toilet ;)
  • by billybob2001 ( 234675 ) on Thursday October 05, 2000 @12:14AM (#730229)
    Please (for Windows users) provide a remote reset button.
  • by Anonymous Coward
    Realistically, how many joe-schmoe users can you stick on one PCI backplane?

    Anyone seen numbers or a demo?
  • S'funny, after seeing the way some people play online, I'd swear they already were....
    --

    Vote Homer Simpson for President!

  • by Anonymous Coward
    I like how the press release mentions that number, but doesn't say anything about what it means in the real world. What's the practicality of this? How much of a lag goes on? What about video i/o?
  • Yes, but can it deliver a cost per seat advantage over going with standard PCs? If it can't it will die quickly.

  • Did I get it right that they have PCI on the CStation? I guess I am missing something on that part... I can see USB, par, ser, and 1394, but why PCI? Wouldn't this complicate matters in actuality, supporting both a remote system and client hardware that can each have PCI devices? Sorta defeats the purpose in my mind... like I said, maybe I missed something, or it's so late I can't see the application.
  • I can't see this helping IT managers as much as implementing thin clients. Every time I had to troubleshoot a problem, I'd have to first determine whether it was local or central. Even semi-thin clients will give an error that makes it easy to determine whether the problem is server-side or not.

    On the other hand, maybe it would just require a paradigm shift...

  • I don't know about you, but the goals of IT managers and (l)users seldom are the same. This is just a press release for a company that has discovered X Windows, or an M$ equivalent, and is using existing Cat 5 cable at high speeds.

    It harkens back to the days of putting all the mainframes in a single room, and allowing the lusers access to only terminals.

    And I'm wondering if they are doing 1Gbps over a single 4 wire cat5 installation, or does this require a pair of cat5 cables to achieve 1Gbps, which is what all the other GigE implementations use?

    the AC
  • You'll have to just take a smoke break. New security protocol only allows IT to touch the mainfr^H^H^H^H^H PCs in the closet. If central resources does not have your floppy, I'm afraid that application is not supported. Thank you.
