
Why The Dinosaurs Won't Die

DaveAtFraud writes "Ace's Hardware has a nice introductory article about the animal that will not die: the mainframe. Ever wonder why these things are still around and what makes them different from a PC or UNIX box? The article is IBM-centric, so there's no discussion of, say, the CDC Cyber series, but when most people don't even believe that mainframes exist anymore, what the hay, let's disabuse them of that notion first. Hopefully the author will follow up with the additional promised articles that go into more technical detail, but this is a good place to start. I wonder if they still make card readers, too?" This guide came out last month, but it's worth looking through, even just for the pictures.
  • I wonder if there is a Moore's law equivalent for computer shrinkage rates. Would miniaturization be a good measure for progress?
  • Pft, overanalysis (Score:5, Insightful)

    by Skyshadow ( 508 ) on Wednesday December 04, 2002 @02:45AM (#4808329) Homepage
    This is an easy one: Mainframes are still around because they house working, stable and extremely mission-critical apps for very large and established corporations.

    Nobody in their right mind is going to mess with them until they absolutely can't get strung along anymore, because they know that crashing, say, an HMO's appointment handling system would be what we call a "career limiting" move.

    If it ain't broke, don't fix it. If it ain't broke and it's mission critical to the tune of millions of dollars an hour, avoid it like someone carrying the plague, ebola, leprosy, herpes and a bad hangnail.

    • by ackthpt ( 218170 )
      Nobody in their right mind is going to mess with them until they absolutely can't get strung along anymore, because they know that crashing, say, an HMO's appointment handling system would be what we call a "career limiting" move.

      Ah, yeah, we sane people can say that. But haven't you ever met the kind of department head who comes in and says, "No Sir, I don't like it", changes everything and then departs before the fecal matter hits the impeller? Stuff happens, even when conventional wisdom screams, "No, you fool, leave well enough alone", because change, goals and accomplishments are what advancement-minded people look to as opportunity, and usually it's the peons who get blamed when it doesn't work, not the guy who broke it.

      • by Fugly ( 118668 )
        But haven't you ever met the kind of department head who comes in and says, "No Sir, I don't like it", changes everything and then departs before the fecal matter hits the impeller? Stuff happens.

        If the company is big enough to use a mainframe, this just isn't going to happen. They're going to need a hugely compelling reason to switch off of the mainframe, because it's almost assuredly going to be a multi-million-dollar project spanning months or years. Even CTOs generally have to have a project like that approved by a bunch of different people. How's he going to sell it?
    • working, stable and extremely mission-critical apps for very large and established corporations...That are ofttimes bloody expensive to have any downtime in, let alone enough to verify a changeover to a new system.

      Locally, one of the larger businesses has an old beastie made of wires and PCB that cannot be shut down. The reason: turning off the apparatus it's connected to would require a lot of work to get it warmed up again, and having that particular apparatus off would probably mean shutting the entire plant for a certain period of time...

      In other words, not something you want to mess with unless you've tested, and tested, and tested, and scenarioed, and prayed a few times before frantically moving things over to whatever the new configuration is.

      And in such times, isn't it Murphy's Law that you end up with an event like "what do you mean you forgot the power cable at the office?!" just before/when going live?
    • Re:Pft, overanalysis (Score:3, Interesting)

      by pVoid ( 607584 )
      If it ain't broke, don't fix it

      Your idea is right, but your conclusion isn't. A mainframe is stable as hell... The application (an HMO's appointment handling system) running on it will crash and burn like any other application.

      So that doesn't really answer why mainframes are still around.

      If anything could be said against mainframes, it could only be at the hardware level (like hot-swapping CPUs and I/O devices), but Sun machines can already do that sort of stuff...

      • by znu ( 31198 ) <znu.public@gmail.com> on Wednesday December 04, 2002 @06:09AM (#4808903)
        No, application reliability is definitely a very important part of the mainframe's continued success. It's true, obviously, that the hardware can't make the software work correctly. But many of the applications being used on mainframes have been around for decades -- they're known to be reliable. As the article points out, vendors go to huge lengths to maintain backwards compatibility. So a business looking to replace an aging mainframe basically has two options: port or rewrite its software for another platform (possibly introducing a lot of bugs), or simply swap an old mainframe out for a new one that's essentially guaranteed to be perfectly compatible with software that's proven its reliability over the course of many years.
      • Re:Pft, overanalysis (Score:3, Informative)

        by sql*kitten ( 1359 )
        If anything could be said against mainframes, it could only be at the hardware level (like hot-swapping CPUs and I/O devices), but Sun machines can already do that sort of stuff...

        You say "already" like that's a good thing, but IBM had the capability decades ago and Sun are only really just catching up.
      • Applications written in COBOL are stable, because they are so constrained.

        My limited understanding of COBOL is that there is no dynamic memory allocation. Try writing a C program without any dynamic memory allocation. It may be hard, but I'll bet it won't crash.
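
        A minimal sketch in C of what that constraint looks like (the record layout and sizes here are invented for illustration): every buffer is declared up front, so there is no malloc to fail and no dangling pointer to chase.

          #include <stdio.h>

          #define MAX_ACCOUNTS 10000          /* fixed at compile time, COBOL-style */

          struct account {
              char id[9];                     /* fixed-width field, like a COBOL PICTURE clause */
              long balance_cents;
          };

          static struct account accounts[MAX_ACCOUNTS];  /* all storage is static */
          static size_t n_accounts = 0;

          /* Returns 0 on success, -1 if the (fixed) table is full. */
          static int add_account(const char *id, long balance_cents)
          {
              if (n_accounts >= MAX_ACCOUNTS)
                  return -1;                  /* the one failure mode, checked explicitly */
              snprintf(accounts[n_accounts].id, sizeof accounts[n_accounts].id, "%s", id);
              accounts[n_accounts].balance_cents = balance_cents;
              n_accounts++;
              return 0;
          }

          int main(void)
          {
              if (add_account("00000001", 150000) != 0)
                  fprintf(stderr, "account table full\n");
              printf("%zu account(s) loaded\n", n_accounts);
              return 0;
          }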

        Joe
      • Re:Pft, overanalysis (Score:4, Interesting)

        by frank_adrian314159 ( 469671 ) on Wednesday December 04, 2002 @02:15PM (#4811607) Homepage
        The application ... will crash and burn like any other application.

        People who write mission critical apps for mainframes program differently. They wear both belts and suspenders in their code. They do precise error condition tracking and recording, and when the app does crash, they make sure that the data was not corrupted so it can restart. They test for months (hell, years) before putting new versions into production. They basically program as if reliability is their number 1 priority - because it is - forsaking speed, code cleverness, memory space, anything that would get in the way of targeting less than 30 seconds of downtime per year or better. Oh yeah, it makes development slower, too. That's the hardest thing about developing reliable software - the pace is different. Shipping tomorrow, but sacrificing reliability, will kill you in this market. A lot of PC folk don't understand that. Software written for these environments is built like a tank. It may not be pretty and it may not get you there as fast, but it will get you there come Hell or high water. And that's why people still use these systems - not hardware, not software - but combined systems of the two.

  • by User 956 ( 568564 ) on Wednesday December 04, 2002 @02:46AM (#4808339) Homepage
    But there is an interesting possibility arriving from the PC world: clusters. PC hardware is so staggeringly cheap that it's becoming viable to run enterprise applications across clusters of PCs, viewing each PC as unreliable and likely to fail.

    Take Google for example. Their software flags failed units, brings them offline, and once a week they go pull them out of the racks and replace them. I believe Google builds their own, but for less aggressive businesses you could just buy enough Dells to tolerate as many failures as needed, boxing them up and shipping them back to Dell when they go south. Heck, Dell will likely send you back an upgraded unit anyhow, so you get a rolling upgrade :P.

    Just like the network guys learned the lesson of ensuring end-to-end reliability across an unreliable network using TCP/IP, some companies are realizing that reliable computing can be enabled by clusters of PCs. It's a shame the free software/open source crowd hasn't rallied around this more... supporting this at the OS level could prove very powerful.

    For a good example of what I mean, compare Traakan's SAN systems to more traditional products, like those from EMC.
    • by Skyshadow ( 508 ) on Wednesday December 04, 2002 @02:52AM (#4808368) Homepage
      I think you're missing the point in regards to most mainframe software.

      In my experience, this stuff hasn't changed significantly in years -- it's tweaked now and then, but it basically works and as such isn't messed with.

      What you have to remember is that entities who are still using mainframes are both (a) very large and (b) very well established. The mainframes tend to be involved with really important tasks that are mission critical (and I mean "mission critical" in a very real sense, not in the 1999 our-webserver-is-down way), like flight reservation systems or bank account tracking systems.

      What I'm trying to say is that it's a really bad idea to mess with these systems unless you really have to -- anyone with a couple years at a suitably large company could tell you that there's nothing to be gained and everything to be lost by messing with them. The hardware and support costs are laughable if you compare them with what just a few minutes of downtime from buggy new software would cost.

    • by wik ( 10258 ) on Wednesday December 04, 2002 @03:05AM (#4808417) Homepage Journal
      Google is unique in that it doesn't really matter whether the latest data is in its cache, or even whether it loses it entirely. It could take a hit, even lose all of the data that it crawled through yesterday, yet still have an operational site (I know I wouldn't be able to tell the difference, could you?)

      You don't want your bank using the same unreliable hardware. Do you want to wait a week while the maintenance guy comes along to replace the failed node that held the records of your last deposit?

      Mainframes are built for customers who simply can't take downtime or data loss. Some businesses can, many can't. If you build a bank off this idea, let me know. I'll be sure to stay away.
    • I think it would make sense for a new company setting up now to do that. However, most of the companies that use these mainframes are very old and established, so switching over to a PC cluster system would have few if any benefits (most of the benefit was cost, but if you've already got a mainframe setup then most of that saving goes out the window) and a whole shitload of risk while switching (for a bank, for instance, a failure of its system would surely result in fire, brimstone, death and such).
    • by anonymous cupboard ( 446159 ) on Wednesday December 04, 2002 @03:36AM (#4808520)
      When you are a bank, an exchange or something similar, you want to uniquely sequence every transaction. Why? Well, if you sold A and used the proceeds to buy B, and then sold B to buy C, and one of those transactions fails, you need to unroll the following transactions.

      So you need to tag every transaction with a unique sequence number. This is really, really difficult when you don't have a single system with an amazing I/O throughput to assign those numbers.
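
      A sketch of that single-sequencer idea, assuming a C11 compiler (names invented): on one box the counter is trivial, which is exactly the point -- the hard part is that on a cluster, every node would have to agree on this one value for every single transaction.

        #include <stdatomic.h>
        #include <stdint.h>

        /* One machine, one counter: every transaction gets a globally
         * unique, strictly ordered sequence number. */
        static atomic_uint_fast64_t next_tsn = 1;

        uint_fast64_t assign_tsn(void)
        {
            /* Cheap on a single system with massive I/O bandwidth; spread
             * over a cluster, this single point of agreement becomes the
             * coordination bottleneck described above. */
            return atomic_fetch_add(&next_tsn, 1);
        }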

      A Google type solution uses a lot of execution units, each with limited I/O capability. Queries may be parallelised without much interaction. In my example, every transaction must be synchronised. It doesn't matter if the application is spread over a cluster, the nodes must still coordinate to assign the sequence number.

      I agree, though, with your point about adding better cluster management to open source operating systems. However, this is much more difficult than improving a standalone system, because how many people can afford to run a cluster of, say, 4 or more systems just for playing around?

      • by fireboy1919 ( 257783 ) <rustyp AT freeshell DOT org> on Wednesday December 04, 2002 @04:08AM (#4808610) Homepage Journal
        They must coordinate? Completely?
        Are you sure?

        I seem to be thinking of an identification technique involving numbers. IIRC, it was highly distributed. Each client in the system was given a 32-bit numerical representation which was used as an "address" to communicate with the other clients. These "addresses" could be assigned dynamically by various agents who were authorized to distribute a subset and report which client had which address.

        The whole layout was mainly hierarchical, and completely unsynchronized.

        In case you haven't caught on yet, I'm talking about the IP protocol. It's a demonstration that handing out numbers can easily be done in a distributed way.
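
        A sketch of that delegation idea in C (block size and names invented for illustration): a central authority hands each agent a block of numbers once, and the agents then assign IDs with no further coordination. Note what this buys and what it doesn't: the IDs are unique, but IDs from different agents carry no global ordering.

          #include <stdint.h>
          #include <stdatomic.h>

          #define BLOCK_SIZE 1000000ULL        /* invented block size */

          /* Central authority: consulted once per block, not once per ID. */
          static atomic_uint_fast64_t next_block = 0;

          struct agent {
              uint64_t next_id;                /* next ID to hand out from our block */
              uint64_t limit;                  /* one past the end of our block */
          };

          void agent_init(struct agent *a)
          {
              uint64_t block = atomic_fetch_add(&next_block, 1);
              a->next_id = block * BLOCK_SIZE;
              a->limit   = a->next_id + BLOCK_SIZE;
          }

          /* Unique without coordination -- but not globally ordered. */
          uint64_t agent_assign(struct agent *a)
          {
              if (a->next_id == a->limit)
                  agent_init(a);               /* block exhausted: one more trip to the authority */
              return a->next_id++;
          }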

        Of course some transactions need to be sequential, like the ones you mentioned. That's why we have semaphores, and why individual records aren't usually distributed! This is basic database design, and there are plenty of good ways of doing it which DON'T require a huge amount of I/O.

        There's a good bit of Computer Science theory on the subject, and there has been for about twenty years. Many professional databases designed today can work in a distributed manner, and almost all of them are capable of scaling.
        • by anonymous cupboard ( 446159 ) on Wednesday December 04, 2002 @05:27AM (#4808791)
          I think you misunderstand: not only must the transactions be uniquely identified (this is easy), but the original ordering must be maintained (hard). Simply time stamping transactions would not be enough.

          Currently the TSN is assigned through a cluster-wide 'semaphore' maintained by the distributed lock manager. However, one system at any time has the responsibility for logging the transactions (although the job can 'fail over' to any other system). The design of the system means that every state change must be written out of the system, so that if an individual system dies, the others can continue from the same point, with no loss of information permitted unless a major disaster occurs.

          Oh, and you can forget databases, as they tend to be rather slow. Recovery-unit-journalled ISAM files were the only way fast enough.
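
          A minimal sketch of that journal-before-apply discipline in C (record format and function names invented): the state change is forced to stable storage before it is applied, so that a survivor can replay the log from the last known point.

            #include <stdio.h>
            #include <unistd.h>

            /* Write the transaction record to the journal and force it to disk
             * BEFORE applying it; on failover, the survivors replay the journal. */
            int journal_then_apply(int journal_fd, const char *record, size_t len,
                                   void (*apply)(const char *, size_t))
            {
                if (write(journal_fd, record, len) != (ssize_t)len)
                    return -1;
                if (fsync(journal_fd) != 0)     /* the journal IS the system state */
                    return -1;
                apply(record, len);             /* safe: the change can now be replayed */
                return 0;
            }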

          There may be a lot of CompSci Theory on this subject but there is very little that is relevant when you want a highly reliable system with several thousand transactions per minute.

          Oh and this particular system is running the trading at CBOT, EUREX and XETRA.

          • by bob_dinosaur ( 544930 ) on Wednesday December 04, 2002 @06:37AM (#4808982)
            For the benefit of /.ers: CBOT = Chicago Board of Trade. Futures and options trading, mainly on commodities (corn, wheat, etc) and equities. XETRA = European electronic securities trading system. EUREX = Largest European equity derivatives exchange. These are places where downtime can easily be measured in millions of USD per hour. The London Stock Exchange has had only one unplanned outage in the last decade. That's the kind of reliability these systems require, and it ain't easy to achieve. So, when you do, you tend to leave well enough alone...
          • Relevant theory: (Score:3, Informative)

            by fireboy1919 ( 257783 )
            Database design theory comes to mind.

            The goal is to pick fields & tables such that:
            1) Locking is minimal
            2) Dependencies are minimal
            3) Storage size is minimal
            4) Records are meaningful

            The main technique involves decomposing a database to a minimal architecture based upon all possible elements in the database, and then building it back from the basis to the desired state.

            It gives you specific knowledge of the conditions by which transactions may require waiting and a way to characterize that waiting, as well as how to reduce the number of transactions you need for a given task.

            Of course, that's just the database design theory that one can apply. There's also the distributed information theories that can be applied. The most primitive approach to this is to use time stamp semaphores, but it can be extended beyond that. There is actually an area of database dependency resolution devoted to making locks. I imagine the "distributed lock manager" you spoke of uses it to minimize the amount of information needed to be locked at any given node.

            In both of these cases (distributed info theory and database design theory), the formalism sprang from necessity - people invented creative ways to improve how their mainframe worked, and they used the formalism to describe it. I think it might even be right to say that without using the CompSci theory, you probably won't get a terribly reliable system. You'll get a kludge - it'll work, if you're lucky.
            • The problem is in a system where a piece of information is universally relevant, say how much money I have available to trade with. The market may be decomposed into a number of order books, i.e., one for each product, but we still have to be certain that I can afford to buy both 'apples' and 'oranges' whilst maintaining the cash in one place. Another issue is that of the relationship between products: for example, I may have a CALL and a PUT option on BMW, and the options may expire at three-month intervals over the next year or so. People want to make trades made up of combinations of CALLs and PUTs on a product with different expiry dates and strike prices.

              This means that it isn't possible to split the option over several systems, it must match on one system in case of combination trades. If it happens to be a big day for that product (say Annual Report Time), then volume will be very high. If it is an interesting day for the economy, say election time, then whoops, there goes our performance across all products.

              Now if a transaction should fail, it becomes very important (legally so) that all transactions are unwound in the order that they were made.

              The distributed lock manager was rather a neat piece of technology that Digital came up with for clustering VMS. It is sufficiently neat that there is a project to try and emulate it for Linux, interestingly enough one project from IBM. It allows for five different levels of lock to be held on a resource and each lock to be associated with a value. VMS uses it extensively for their clustered file system (one of the better ones). We use it for hierarchical locking of the order books (each product, CALL/PUT and strike for options and expiry/delivery date combination). The order books are sorted in price/time priority.

              I have built smaller/simpler systems for other markets using databases and PC servers, using modern techniques. However, looking at the monstrosity that I started working on about 12 years ago, I can't think of radical improvements without changing the exchange regulations, particularly with regard to those pesky regulations. I guess the best would be to convert it to Linux but run it in multiple VMs on a Z-series mainframe.

    • by Anonymous Coward on Wednesday December 04, 2002 @05:09AM (#4808737)
      I think you missed one of the main points of the article. Mainframes are _IO_ machines, not compute engines. Mainframes are designed to deal with enormous streams of data, not huge numbers of calculations.

      Clusters like google can give you enormous compute capability, and a form of redundancy, but they can't give you the type of error checking and correction done in the mainframes, like the self-checks done by the paired CPUs. (At least not practically.)

      A couple of years ago I read an article that pointed out that today's desktop PCs have equal or greater CPU power than a 1970s mainframe. But when you measured I/O capability, the mainframe would still wipe the floor with the PC.

      There's little wonder in that. Look at all the I/O channels and processors that the mainframe has. Instead of moving every byte between peripherals with the CPU, the mainframe tells one of its I/O processors: "Move that data for me, and tell me when it's done."
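
      The closest everyday Unix analogue to that is probably POSIX asynchronous I/O -- a pale shadow of a channel program, but the same shape. A sketch (the file name is invented; on Linux this links with -lrt):

        #include <aio.h>
        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            static char buf[1 << 20];
            struct aiocb cb;

            int fd = open("bigfile.dat", O_RDONLY);   /* invented file name */
            if (fd < 0) { perror("open"); return 1; }

            memset(&cb, 0, sizeof cb);
            cb.aio_fildes = fd;
            cb.aio_buf    = buf;
            cb.aio_nbytes = sizeof buf;
            cb.aio_offset = 0;

            aio_read(&cb);                 /* "move that data for me..." */

            /* ...the CPU is free to do real work here... */

            while (aio_error(&cb) == EINPROGRESS)
                ;                          /* "...and tell me when it's done" (crude polling) */

            printf("read %zd bytes\n", aio_return(&cb));
            return 0;
        }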

      A typical task for a mainframe might be (every night): Read the financial records of my 10 million customers with their average of 3 accounts, 8 mutual funds, etc. Inactivate closed accounts. Activate new accounts. Put in all of the deposits from cash, checks, wire transfers, refunds, etc. Subtract the withdrawals from cash, checks, wire transfers, refunds, etc. Update the number of shares in the accounts. Now apply interest to every account. Find and report all accounts that are: overdrawn, below minimal balance, over limit. Apply penalties. You get the picture. Even if you could do this with a cluster, all that you've done is move the point where the massive I/O occurs from the mainframe to a huge, expensive database cluster to service all of the I/O. (It won't be on MySQL either.) Might be simpler than a mainframe. Probably not.

      Google uses the large number of systems for more than redundancy. It uses them for caching its database in RAM. They figure that the extra speed from RAM caching reduces the total number of systems that they need. So, perversely enough, they have a lot of machines to save them from having even more machines.

      I'm happy letting google/SETI/Folding/etc.. search, crack, whatever.

      I want a mainframe handling my bank account and mutual funds.

    • by passthecrackpipe ( 598773 ) <passthecrackpipe AT hotmail DOT com> on Wednesday December 04, 2002 @06:30AM (#4808959)
      Well, besides the fact that this is a carbon-copy post from the Ars forum, you don't get the point of mainframes at all. You simply can't compare PCs with mainframes. They have different properties, different design criteria, and pose different solutions to different problems. Sure, with clusters you may reach a higher than usual level of uptime (BTW, clusters are not new, and are not "arriving from the PC world" - your post makes me think that your closest encounter with technology is staring at Lara Croft's boobs on your PlayStation 2), but it is not just about uptime. The fact that mainframes are so reliable is just an interesting selling point, not the main feature (something the article didn't get out properly).

      The main feature of mainframes is the staggering amount of data they can move. The mainframe is like the bulldozer of the computer world. The CPU is terribly slow at certain operations - run X11 on it and have 20 people log in, and say bye-bye to your performance. But the amount of data it can move, and the speed with which it can move that data, is nothing short of amazing. Oh, and let's see you doing processor lock-stepping on a PC-based cluster.

      I can't believe you got modded up to +5 for this drivel....

    • One of our customers is already doing this: customer care, billing and metering software is all run on PCs and a few Unix boxes (Suns, I believe). The setup, using redundant clustering, is reliable enough for metering, which cannot be allowed to go off the air or lose transactions, or there will be all sorts of financial and legal trouble. They still run some stuff on a mainframe, but that will be phased out in a year or so. For storage, they went with a new IBM SAN because of its incredible reliability.

      The nice thing about technology such as RAID and clustering for the lower-end hardware is that now we can make our systems as reliable as we need them to be for our particular situation.
    • by IGnatius T Foobar ( 4328 ) on Wednesday December 04, 2002 @10:54AM (#4809969) Homepage Journal
      But there is an interesting possibility arriving from the PC world: clusters. PC hardware is so staggeringly cheap that it's becomming viable to run enterprise applications accross clusters of PC's, viewing each PC as un-reliable and likely to fail.


      A cluster of PCs isn't even in the same league as a mainframe. PC operating systems aren't designed for that type of thing. Anyone stupid enough to try this is probably also stupid enough to try using Microsoft Cluster Services. And anyone who has seen Microsoft Cluster Services in action knows that it only protects you from hardware failure --- if Windows fails (and we all know that Windows is far less reliable than the hardware it runs on), you get two parallel blue screens. (Don't mod this up as 'Funny' -- I'm dead serious here.)

      Linux is reliable, but most of the clustering software we have available for Linux is geared more towards parallelizing an application and getting more work done with more machines than towards N+1 reliability. You need to be able to have processes maintain their state in parallel on multiple machines -- not an easy thing to do.
    • Take google for example. Their software flags failed units, brings them offline, and once a week they go pull them out of the racks and replace them.

      Pardon my pessimism, but that is not reliability.

      So Google can remove broken units and replace them later. But what happens to the work that was happening on that unit when it broke? Someone's query gets lost, and they have to submit it again. No loss in Google's case.

      On the other hand, a Bank could not allow even one transaction to be lost to such a failure. In the mainframe discussion they talked about how even a running program, even an individual instruction, on a failed unit could be saved, moved and restarted on another unit. You can't do that on a PC.

      A web server can be parallelized easily, but database servers are not so lucky. Sure, Oracle, DB2 and others can be run on multiple machines in parallel, but if one of the units goes down, so do its disks. Disk failover is not as seamless as mainframe channel failover.

      True seamless failover, down to the instruction, is something that takes a lot of effort. And there are some places where it is vitally important. Web servers are just not that vital.

  • IBM centric? (Score:2, Interesting)

    by absurdhero ( 614828 )
    Remember that IBM commercial showing the whole server room with mainframes and all being replaced by one small IBM Linux box? Sounds like that is IBM's dream, not reality.
  • by Tsar ( 536185 ) on Wednesday December 04, 2002 @02:48AM (#4808348) Homepage Journal
    I talk to people all the time who can't believe that mainframes are still essential to our info infrastructure. I'm going to start sending them to this site. Any other suggestions for good primers, especially ones this short and sweet?

    I really liked this line in the section about modern IBM mainframe reliability:

    Each CPU die contains two complete execution pipelines that execute each instruction simultaneously. If the results of the two pipelines are not identical, the CPU state is regressed, and the instruction retried. If the retry again fails, the original CPU state is saved, and a spare CPU is activated and loaded with the saved state data. This CPU now resumes the work that was being performed by the failed chip.

    Try that with your dual-Xeon server!
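
    You can mimic the shape of that in software, though it only catches transient faults and assumes the unit of work is deterministic; a real hard CPU failure needs actual spare hardware. A toy sketch in C of the execute-twice, compare-and-retry idea (everything here is invented for illustration):

      #include <stdio.h>

      /* Run the same (deterministic, side-effect-free) unit of work twice
       * and compare -- a software caricature of the paired execution
       * pipelines described above. */
      long checked_step(long (*step)(long), long state, int max_retries)
      {
          for (int i = 0; i < max_retries; i++) {
              long a = step(state);    /* "pipeline" 1 */
              long b = step(state);    /* "pipeline" 2 */
              if (a == b)
                  return a;            /* results agree: commit */
              /* disagree: regress to the saved state (the argument) and retry */
          }
          fprintf(stderr, "persistent mismatch: activate the spare CPU\n");
          return -1;                   /* invented error convention */
      }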
  • by EvilJello ( 577315 ) on Wednesday December 04, 2002 @02:48AM (#4808349)
    ...with the rotational energy of Douglas Adams's coffin. That was the most painful and continuous referencing-HHGTG-for-referencing's-sake I've read in a long time.
  • by bobobobo ( 539853 ) on Wednesday December 04, 2002 @02:48AM (#4808350)
    ...the dinosaurs were already extinct.
  • That the article has a P4 3.06GHz ad on the right-hand side.
    On the left you have the past... and on the right... the present.
  • by Shadukar ( 102027 ) on Wednesday December 04, 2002 @02:50AM (#4808356)
    Imagine...

    You're a big organisation that's been in business for 50+ years. You are in the biz of manufacturing Weezops (or whatever) for the various Gazaah (wtf?!) industries.

    10-20 years ago you paid a big buttload of cash for a mainframe.

    Today this mainframe is chugging away. Occasionally you need to screw in the vacuum tube, or maybe fill up the cooling liquid, and in winter it's a little noisy.

    However, your little dino is happily chugging away, calculating whatever you want it to and doing whatever it was that you got it for.

    It's working. It's doing what you paid big cash for. You don't need it to make coffee, play videos, participate in distributed.net or send spam. You want it to chug along. And it's doing it.

    Why change? Why pay another buttload of cash because someone is telling you "whoa, what you got here? an oversized heater?! pay another buttload of cash for this new machine that will do everything it's doing PLUS play mp3s for you, make coffee, crack encryption, search for UFOs and connect your grandma to the net!"

    I don't think so.
    If a machine, no matter how old, is working, and you paid a lot of cash for it, no business will get rid of it to get something new just because it's new/flashy.

    Just like banks and credit card companies who still use systems like GlobeStar, 8-color text-based account management software written over 10 years ago. Why? Because it does the job. Pull-down menus, icons, angry salad shooting out of CD-ROM drives, live video streaming, it's all nice and cute, but if you have something that works, does the job the way you want it and how you want it, there's no need to change.

    Sorry it's so drawn out and long, but that's the way I see it. Plus I am sure you enjoyed the sleep :)

    In the words of a famous comedian, "Those are my ideals, if you don't like them, I have others."

    • While I agree with your statements 100%, there are reasons to switch even when the hardware works. MAINTAINABILITY. There's a shop out here that is advertising 24/7/365 for programmers who are experts in both AIX and COBOL (8+ years only).

      That position has got to be damned near impossible to staff.

    • by securitas ( 411694 ) on Wednesday December 04, 2002 @03:41AM (#4808533) Homepage Journal

      The argument for what I call economic inertia is a good one, especially with corporate shareholders these days demanding that management squeeze everything they can out of every dollar and stretch every last penny as far as it will go.

      A mainframe that does everything that you need it to do (and more) and works well with your company processes is worth far more to you than the investment of time and resources in an untested, unknown system that may or may not work. Remember that new systems don't go online until after extensive use and testing in parallel with the current one (if it's done correctly). That means duplication of efforts and resources.

      Anyone who has worked at a company that builds enterprise-scale applications or mission-critical solutions knows that when the customer has an XYZ mainframe, you'd better have applications that support XYZ or you'll find the contract goes to your competitor who does. It's not an option not to support it.

      Unless there is a strong business case for moving to a newer technology, mainframes will be with us for quite a long time.

      A hint to the coders out there: the number of people who know and understand these systems is declining. There's a mint to be made if you can deliver services to support them.
  • by extagboy ( 60672 ) on Wednesday December 04, 2002 @02:50AM (#4808360) Homepage
    The reason they won't yet die is that they are incredibly reliable. If you need a computer that has to work all the time, you need a mainframe. Now, the software isn't the funnest thing to work with and you don't get pretty graphics (for the most part), but nothing can compare to its rock-solid reliability. Another reason is that the hardware itself runs forever. Most of the older stuff still running was built to last, unlike a lot of today's hardware that is only built to last until it's obsolete.
  • I agree with this post [kuro5hin.org]
  • Why "dinosaur"? (Score:5, Interesting)

    by sql*kitten ( 1359 ) on Wednesday December 04, 2002 @02:52AM (#4808365)
    Ever wonder why these things are still around

    Mainframes aren't dinosaurs, and never were. They are the most advanced, most capable hardware available, and the proving ground for architectural innovations that eventually filter their way down into workstations (like using a crossbar switch instead of a primitive bus). Sun's dynamic system domains, considered very advanced by the Unix world, are still many years behind the mainframe's LPARs, and Sysplex makes SunCluster look like a silly toy. User-mode Linux and Beowulf don't even come close.

    Really, you should be asking why obsolete technologies such as the bus are still used in PCs, and why PC technology lags so far behind "real" computers.
    • Re:Why "dinosaur"? (Score:3, Interesting)

      by dohcvtec ( 461026 )
      Unfortunately, I think many people see mainframes as "dinosaurs" because they don't have the features of the PCs they so small-mindedly revere.

      What? That big, expensive thing doesn't even have USB ports? Can I watch DVD movies on it? No? What good is it then?

      The submitter of the article had a condescending attitude about mainframes, almost like he was begging the question of whether mainframes should exist anymore.
  • aka 'real computing' (Score:4, Interesting)

    by Gavin Rogers ( 301715 ) <grogers@vk6hgr.echidna.id.au> on Wednesday December 04, 2002 @02:54AM (#4808374) Homepage
    How many of us have walked into a bank, an insurance company, a telco, a large parts wholesaler (any industry) or any other heavy user of 'serious IT' and seen the clerk using either an original green-screen IBM 3270 terminal or a PC running a terminal emulator?

    The IT industry has moved on, but these sorts of companies are very stuck in an 'if it ain't broke, don't fix it' attitude (especially banks).

    Whatever the reason (technically valid or not), the managers of these dinosaurs can't see their 100,000 sessions or whatever it is running - even if their hugely custom software would run at all - on a huge cluster of cheap PC servers (oh look, we're back to a mainframe again!)

    I think I'll be getting my power, insurance and phone bills, bank statements, and car registration bills generated with one of these old machines for a very, very long time to come.
  • by Anonymous Coward on Wednesday December 04, 2002 @02:55AM (#4808378)
    MAINFRAME repairs you!
  • Can you say "banks"?

    When an hour of downtime would cost you millions of dollars, no question about it: you get a mainframe.

    For the ones who don't read the article, a quick excerpt so you know what kind of availability we are talking about:

    "[...] today's [mainframe] systems [are] so reliable that it is extremely rare to hear of any hardware related system outage. There is such an extremely high level of redundancy and error checking in these systems that there are very few scenarios, short of a Vogon Constructor fleet flying through your datacenter, which can cause a system outage. Each CPU die contains two complete execution pipelines that execute each instruction simultaneously. If the results of the two pipelines are not identical, the CPU state is regressed, and the instruction retried. If the retry again fails, the original CPU state is saved, and a spare CPU is activated and loaded with the saved state data. This CPU now resumes the work that was being performed by the failed chip. Memory chips, memory busses, I/O channels, power supplies, etc. all are either redundant in design, or have corresponding spares which can be can be put into use dynamically. Some of these failures may cause some marginal loss in performance, but they will not cause the failure of any unit of work in the system."
  • by ackthpt ( 218170 ) on Wednesday December 04, 2002 @03:03AM (#4808410) Homepage Journal
    Many times we've tossed similar thoughts around at work: when would PCs replace big iron? Well, CPU speed isn't all it's cracked up to be. It's like a hamster going 3,000 RPM on a treadmill. Fast, yeah, but it's still a hamster. PCs are firmly geared toward single-user, desktop apps; even x86 servers take a lot of money to measure up to the HP 9000 we're running our development system on.

    I'm sure the humblest x86 can now run rings around old PDP-11 and IBM 360 systems, but it's still amazing how fast some parts of those old machines were, including core memory and swap disks.

    • I remember seeing a quad-486 box running SCO; the beastie ran an entire dispatch call center for a few hundred operators without any problems (well, other than the lp needing to be reset after large print jobs...).

      At work we have a few hundred SPARC 20s (modified, 1 CPU), supporting thousands of calls at a time and keeping track of each packet for billing.

      The CPUs might be slow, really slow compared to 3GHz P4s, but they do the job just as well as the day they came out, all those years ago.
  • by codepunk ( 167897 ) on Wednesday December 04, 2002 @03:06AM (#4808419)
    The terminal interface is the most efficient human interface designed to date for data entry. I have never seen a GUI app that can come close to the user efficiency of the ole mainframe terminal interfaces. That, combined with the scalability, reliability and ease of maintenance, will ensure that the mainframe will be around for a very long time yet.
    • I assume you are posting from Lynx or some other terminal. Anyway, fixed rows and columns of text are not intrinsically better than any other interface.

      A GUI is sometimes unavoidable. Sometimes you need the extra flexibility (i.e., to be able to put arbitrary dots on the screen, as opposed to having to pick them up in Tetris-like fashion from the character set (palette?)).

      GUI and terminal are complementary (for example, I am better off having 6 terms under a GUI system than having only 1 terminal at a time).
    • by Frater 219 ( 1455 ) on Wednesday December 04, 2002 @12:03PM (#4810546) Journal
      The terminal interface is the most efficient human interface designed to date for data entry.

      A couple of weeks ago I had the unpleasant experience of going to the dentist four times in ten days. (Slashdotters note: this is what happens when you avoid going to the dentist for three years.) However, whilst sitting in the waiting room in terror over the prospect of being assigned the newbie of the two dentists, I observed a curious phenomenon in progress:

      ... the elder receptionist training a new hire in using the office automation system.

      I was a little bit surprised when I noticed that this system wasn't made of Web forms -- though the systems on the desk were Wintel PCs, they weren't running Internet Explorer. Nor were they running a GUI front-end to a database, some PowerBuilder or MS Access widget conglomeration. No, the application running on those PCs was ... an IBM 3270 emulator.

      "There you go. Now move down to 10:00 ... now F10 that ... and hit F6 to print."

      From the dialogue between the two receptionists, I could tell several things about this application. First off, it certainly required and expected a certain amount of training to use. To submit a form to the mainframe (located at a distant data center) required hitting F10, not clicking on a "Submit" button. There was no concession here to being "intuitive" -- the trainee simply had to learn that F10 means "submit form".

      Yet this was consistent -- F10 always meant "submit form", at every stage of the workflow. (So much so that the elder had made "F10" into a verb, as you may have noticed above, meaning "to submit form".) No unexpected dialog boxes came up with panicky but unnecessary messages, needing to be clicked away. The application's behavior created a consistent, predictable, learnable workflow. The elder receptionist spoke with complete confidence about the system's behavior, though she was certainly not an "IT person" -- in however many years she had been using it, I suspect it had never failed her once. This was not an application that she expected might crash or do something stupid and eat an appointment. Nor had it been "upgraded" three times in the past year to a version with fancier and completely unrecognizable widgets.
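
      That kind of iron consistency is cheap to build into a terminal application, because the key map is defined once and applies on every screen. A sketch using ncurses (the F10/F6 bindings mirror the anecdote; everything else is invented):

        #include <ncurses.h>

        int main(void)
        {
            initscr();
            keypad(stdscr, TRUE);        /* deliver F-keys as single key codes */
            noecho();

            /* One key map, every screen: F10 always submits, F6 always prints. */
            for (;;) {
                int ch = getch();
                if (ch == KEY_F(10)) {
                    mvprintw(0, 0, "form submitted");
                } else if (ch == KEY_F(6)) {
                    mvprintw(0, 0, "printing...   ");
                } else if (ch == 'q') {
                    break;               /* invented exit key */
                }
                refresh();
            }

            endwin();
            return 0;
        }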

      Now, I work in IT. I spend all day with Unix, Windows, and Mac users. I also make a point of observing people's interactions with other data systems -- Windows-based supermarket cash registers, handheld card scanners at conferences, information kiosks at tourist attractions, and so forth. Rarely if ever do I hear the sort of quiet confidence in the computer's behavior which I've observed in end-users of mainframe applications.

      This is not "computer as irascible demon, seeking to lash out at its summoner," like Windows. It isn't "computer as consistent and friendly but sometimes fumble-fingered servant," like the Mac OS. And it certainly isn't "computer as Necronomicon," like Unix.

      It just works. So of course its users depend on it.

      • Your observations are very true about a lot of old text-based applications -- they may not be "intuitive" and will require some rote learning (a skill unfortunately no longer taught by our educational system), but once learned, they STAY learned, and nothing unpredictable ever happens. And with the better-designed apps, a particular key is ALWAYS the same function no matter where you are in the program. No rude surprises to disrupt your workflow.

        BTW, the keystrokes for WordPerfect for DOS were taken partly from old mainframe conventions (I've been told that's why F7 is "Exit" in WP and many other apps).

      • Your points are well-taken, but there is no reason why any PC/Mac/Unix/Windows application could not work that way. The issue is more cultural and standards-based than a matter of what the software will actually do.

        Since mainframers culturally think in terms of building pyramids, and the smaller-machine cultures strike me as building strip shopping centers, it shouldn't surprise you -- but there is no reason you couldn't be as consistent on the mammal machines.
  • by quantaman ( 517394 ) on Wednesday December 04, 2002 @03:07AM (#4808421)
    Just post a link to one of those suckers on /. we'll see who won't die in a minute!!
    • Re:Won't die huh? (Score:5, Interesting)

      by LadyLucky ( 546115 ) on Wednesday December 04, 2002 @04:39AM (#4808682) Homepage
      Just post a link to one of those suckers on /. we'll see who won't die in a minute!!

      And from the article:

      The total I/O throughput capacity of the current z900 mainframes is no less than 24GB (that's bytes, not bits) per second. I have not personally had the opportunity to benchmark the performance on these latest systems, but while theoretical numbers can sometimes be misleading, I wouldn't be at all surprised to see a z900 performing as many as 100,000 I/O operations per second.

      Immovable object, irresistible force, anyone?

  • Hey! (Score:2, Funny)

    by ActiveSX ( 301342 )
    Parallel Sysplex. Damn, what a cool name.

    ***ActiveSX files a patent on "Imagine a Parallel Sysplex of those" posts.
  • by arvindn ( 542080 ) on Wednesday December 04, 2002 @03:14AM (#4808446) Homepage Journal

    Four decades ago a group of hyperjobless pantemporal employees at IBM got so fed up with the constant calls for tech support from moronic users... that they decided to sit down and solve their problems once and for all.

    And to this end they built themselves and the world a stupendous supercomputer encased in a very large steel-framed box the size of a small city. It was so amazingly intelligent that as soon as its DASDs had been connected up, it started from "I think, therefore I am" and managed to deduce the existence of P2P and the great wiki before anyone managed to turn it off.

    On the day of the great turning-on, it said: "What is this great task for which I, the Mainframe, the second greatest computer in the Universe of Time and Space, have been called into existence?"

    "The second ? There must be some mistake," said the programmer. "are you not a greater computer than the great Echelon at NSA which can predict acts of terrorism a year ahead in a picosecond?".

    "The Echelon" said the Mainframe with unconcealed contempt. "A mere abacus - mention it not."

    "What computer is this of which you speak?" he asked.

    "The greatest computer in the universe", answered the mainframe after seven and a half years of comtemplation, "is the Beowulf ".
  • by Ryu2 ( 89645 ) on Wednesday December 04, 2002 @03:18AM (#4808457) Homepage Journal
    IBM and others have demonstrated the ability of mainframes to act as virtual machines, using hardware monitor techniques a la VMware, to simultaneously run thousands of copies of Linux, AIX, or other OSes. Because each OS is running ON TOP of virtualized hardware, the security is pretty much airtight, and it's just like having thousands of actual machines without dealing with the space and other issues.

    This technology seems quite promising for data centers, etc, and will probably ensure the mainframe stays around for a long time to come.
  • by Overcoat ( 522810 ) on Wednesday December 04, 2002 @03:20AM (#4808465)
    My state library system still has its database running off an old mainframe from the late '80s. The card catalog search terminals are these funky old greenscreens.

    So a couple months ago I went to apply for a new library card (haven't used the system in like 10 years). When I turned in my application, the Librarian ran my info through the system and informed me that I had an eight dollar overdue book fine outstanding from 1987. Ouch. Place was pretty crowded, too, she could've said it in a quieter tone of voice...
    • the Librarian ran my info through the system and informed me that I had an eight dollar overdue book fine outstanding from 1987. Ouch.

      Think that's painful? Wait until they bill you for the interest and the cost of carrying that info for all these years.

      Another MF story: I worked temporarily at this gov place that had a mainframe. I once overheard the mainframe manager complain that revenues for computer time were down when they upgraded the machine because it could do more per slice of time. He actually decided to add a multiplier to the billed CPU time so that the revenue was the same. IOW, the clients (internal) were not going to get any savings from the newer technology. Sneaky.

      How do non-mainframes track computer usage for billing, BTW?
  • by DarkRecluse ( 231992 ) on Wednesday December 04, 2002 @03:22AM (#4808478)


    Perhaps a punch card virus... Then again, perhaps it will be when the smartest people in the world succumb to the growing ideal of technology for technology's sake.

  • by Anonymous Coward
    In 1998 I had the opportunity to take a tour of six hockey-rink sized rooms of mainframes and tape drives used by one of the main US travel reservation systems. In reality there weren't that many actual mainframes - most of the space was taken up by tape drives. Above every machine was a sign specifying the machine's MIPS rating.

    The signs had numbers like 20, 43, sometimes as high as 60. The employees were especially proud of the 60s, explaining that each one cost more than 1 million dollars.

    At first I assumed I must not have understood. I asked whether MIPS really stood for millions of instructions per second. They said yes. Then I asked what kind of instructions they meant: things like add, load, etc? Yes.

    Finally I pointed out that my (at that time) $4000 dual 200 MHz Pentium Pro was rated at much more than 60 MIPS. I don't think they quite comprehended this.

    By now every travel reservation system is ditching mainframes as fast as they possibly can and replacing them with racks of PCs or medium-end Unix workstations. By spending 1/50th as much money they get orders of magnitude more useful computation: those nice low-fare searches you see on Orbitz and Expedia run on PCs, not mainframes. I've been in all the other travel reservation system complexes since my 1998 visit, and more and more you find little stacks of cheap "low end" machines doing the heavy lifting.

    The reliability claims for mainframes are very deceptive. Yes, the computers stay up. But the software has bugs just like any software, and data lines go down, and the mainframes start dropping transactions left and right when they're overloaded. DASDs are multiported but top out at some low number, just as any multiported device does, so mainframe-based databases often can't be extended beyond some point because the database drives simply can't be connected to any more machines. In the PC world we'd buy more machines and drives and maybe live with a little data incoherency, but in the mainframe world eventually things just die, because the hardware was built for everything but cheapness and power.

    The general mainframe design is essentially targeted at the application profile of a static-page webserver. Simple programs, quick data access and throughput, no computation. They are utterly unsuitable for any computationally demanding task.

    • By now every travel reservation system is ditching mainframes as fast as they possibly can and replacing them with racks of PCs or medium-end Unix workstations. By spending 1/50th as much money they get orders of magnitude more useful computation: those nice low-fare searches you see on Orbitz and Expedia run on PCs, not mainframes. I've been in all the other travel reservation system complexes since my 1998 visit, and more and more you find little stacks of cheap "low end" machines doing the heavy lifting.

      This is simply not true. I work at a company that uses 390 mainframes and TPF [ibm.com] to handle travel reservations for airlines. When you use Orbitz or Expedia you are using a pretty front end that gets all of its data from the mainframe.

      There have been some systems that offload stuff from the mainframe. Notably, Orbitz stores fares because it can apply its own search algorithms, find fares for more esoteric travel itineraries than can be done on the mainframe, and do fare searches faster and cheaper. Where does Orbitz get its fare data? From the mainframe, where it is still generated and updated. Orbitz simply caches that data and updates its cache on a regular basis. From everything I've seen, there have been more new applications and sub-systems hooked to the mainframe for data than have been moved off the mainframe.
    • by FJ ( 18034 ) on Wednesday December 04, 2002 @09:49AM (#4809525)
      First, let me say you are being misled.

      MIPS doesn't stand for million instructions per second. It stands for Meaningless Indicator of Processor Speed. IBM never liked publishing benchmarks for mainframes because they don't tell the whole story.

      Mainframes don't run one application. They run thousands at the same time. I/O requests, CPU, and device contention are just a few of the many factors in a machine's speed. Just look at your PC: if you get the fastest dual Pentium, that just tells you CPU speed. Put in a slow hard drive and a 2MB video card, and any other PC will seem faster. Mainframes are the same way, so IBM has always been reluctant to publish numbers, because businesses scream.

      As for the software being buggy you are exactly right. The difference is that some of that software has had 20-30 years to work out the bugs.

      And finally, yes, you are correct in saying that computationally demanding tasks using floating-point multiplication and division don't perform well on the mainframe. Most businesses don't need to compute pi, so it was never a priority for IBM. Floating-point addition & subtraction are very, very fast if you write your application correctly.

      The really sad thing that holds processor speed back on the mainframes is the software licenses. On a mainframe, the faster the machine, the more your software costs. This made it possible for smaller companies to buy a little mainframe; the big customers pay the most. It also means you never buy a bigger machine than you need, because the software license costs get more expensive and no business wastes money.
    • by Zathrus ( 232140 ) on Wednesday December 04, 2002 @10:18AM (#4809702) Homepage
      And how much I/O can your PC do? Or a cluster of PCs? Nowhere even close to what mainframes can handle... 24 GB/s -- take 96 Gigabit Ethernet cards, stick them all in your PC (oh... you can't...), and then blast them at absolute maximum theoretical bandwidth.

      Of course, if you want to be "realistic" you'll have to use 128 Gigabit Ethernet interfaces, since the maximum realized bandwidth on a full-duplex circuit is around 1.5 Gbps.

      Oh... what's that? Your bus can't even handle the full bandwidth of a single Gigabit ethernet interface? Well, then I suppose your I/O is going to royally suck in comparison.

      Oh, and let's not even get on the topic of reliability... PCs just aren't. I'm a PC guy (I shudder at the thought of having to deal with mainframes), but I know their limitations. And while you're dead wrong about travel reservation systems running on PC clusters (they don't - the entire backend system is still on mainframes), whoop de doo if it was run on PCs. This isn't something where a node going down would cause major problems.

      If a node goes down on the air traffic control system, however, you can damn well bet there's problems. Big ones. Weighing several hundred tons, moving at a few hundred miles an hour, and disinclined to stay aloft while you take a few hours to get the system back up.

      maybe live with a little data incoherency

      Yes... a little data incoherency is no big deal. I'm sure the power grid will work just fine with a "little" incoherency. You don't mind a power plant (be it coal, nuke, whatever) having a massive cascade failure every couple years, right?

      I have absolutely no desire to ever work on mainframes -- the software in place is largely old and crufty, but by god it works. The hardware isn't old crap either -- you can buy new machines that will run the old software perfectly. And have capabilities that us PC weenies can't even comprehend. You realize that virtually every advance in the PC industry was tested and proven in the mainframe world first, right?
  • good grief (Score:2, Informative)

    by 3.2.3 ( 541843 )
    any new article on mainframes would necessarily be ibm-centric because ibm is the only mainframe manufacturer left on the planet. all the others have dropped out.

    legacy apps are not the reason mainframes hang around. legacy apps last because of the incredible ease of centralized management on mainframes.

    gone are the days of the dumb mainframe terminal, also. modern mainframes offer advanced graphics and windowed desktops. more often than not, the modern mainframe terminal is a low-end pc with attached host print emulation.

    increased miniaturization only makes for better mainframes. modern mainframes are just well put together microprocessor clusters.

    mainframes make killer webservers: cheaper, faster, more reliable, smaller footprint, and easier to maintain than huge farms of pc servers.

    please.

  • The only viable business OS on the PC was Microsoft Windows. Linux and the BSD variants have been around for ages, but it is only now that the mindset of business has started to examine these alternate OSes, after M$ started to shaft them a little too hard. Now imagine you were the IT guy at a typical company a few years back, before the 'Linux revolution' had got started. Would you rip out the old but amazingly reliable mainframes in favour of the Windows PCs that your staff complain about on a daily basis - that they've lost files, suffered crashes, etc.? My Windows 2k box crashes for no reason every other week or so; it is not a problem, and gives me a chance to get a coffee while it reboots, but if a company's sales order system was running on it, for those 2-3 minutes they could be losing $$$ in lost revenue with their sales order team sitting doing nothing.

    I reckon that Linux will start to replace these mainframes in the future... Linux is becoming the standard server OS, and IBM's line of big iron is already running it [ibm.com]

    Tony.
  • by BrokenHalo ( 565198 ) on Wednesday December 04, 2002 @04:38AM (#4808680)
    I spent many years working on Big Iron between the late 70s and early 90s, and I (and my fellow contractors) always felt that the world was divided between those (like us) who could work on any machine, whether it be CDC, Burroughs, Sperry, Honeywell or whatever - and IBM-ites who never seemed to step away from the one manufacturer.

    I remember it used to be a cliche that "No-one ever got fired for buying IBM". Trouble is, I knew one IT manager in London who did get fired for doing just that at a Burroughs site.

  • by io333 ( 574963 ) on Wednesday December 04, 2002 @05:00AM (#4808722)
    ...that point being that big iron is not about processing at all, but rather about manipulation of huge quantities of data that would choke even a beowulf of beowulf clusters in a matter of seconds.

    But for those of you that still don't get it, here is a guide for the layperson:

    It might be a mainframe if...

    If you could kill someone by tipping it over on them, it might be a mainframe.

    If the only "mouse" it has is the one living inside it, it might be a mainframe.

    If you need earth-moving equipment to relocate it, it might be a mainframe.

    If you've ever lost an oscilloscope inside of it, it might be a mainframe.

    If it's big enough to be used as an apartment, it might be a mainframe.

    If it has ever had a card-punch designed for it, it might be a mainframe.

    If it weighs more than an RV, it might be a mainframe.

    If lights in the neighborhood dim when it's powered up, it might be a mainframe.

    If it arrived in its own moving van, it might be a mainframe.

    If its disk platters are big enough to cook pizzas on, it might be a mainframe.

    If Michael Jordan would need his entire annual salary to buy one, it might be a mainframe.

    If keeping all of the manuals together creates a fire hazard, it might be a mainframe.

    If it's so large that a dropped pen will slowly orbit it, it might be a mainframe.

    If it's ever been mistaken for a refrigerator (or if the disk drive has ever been mistaken for a washing machine), it might be a mainframe.

    If anyone has ever frozen to death in the room where it's kept, it might be a mainframe.

    If it has a power supply that's bigger than your car, it might be a mainframe.

    If it has its own postal code, it might be a mainframe.

    If the operators considered the addition of COBOL to be an upgrade, it might be a mainframe.

    If it was designed before you were born, it might be a mainframe.

    If its main power cable is thicker than your neck, it might be a mainframe.

    If the designers have since died from old age, it might be a mainframe.
    • by Anonymous Coward on Wednesday December 04, 2002 @06:55AM (#4809026)
      You've hit the nail on the head in your first line: "huge quantities of data". A modern, bog-standard mainframe has 24 gigabytes per second of throughput between CPU(s) and persistent storage.

      That's a lot.

      Your CPU-RAM bus on your PC has less throughput (DDR SDRAM at 266 MHz is ca. 2.1 GB/s), and your CPU-to-disk path (via DMA to RAM) is a not-very-funny joke compared to it.

      A cluster for similar throughput would hit the lightbulb problem (admin-monkeys running round swapping out burnt out PeeCees left-right-and-centre).
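
      To see how fast the lightbulb problem bites, a crude sketch (the node count and per-node MTBF here are illustrative assumptions, nothing more):

      nodes = 128                   # one box per GigE link, per the arithmetic above
      mtbf_hours = 20000            # assumed per-node MTBF, roughly 2.3 years
      print(nodes * 24 * 365 / mtbf_hours)   # -> ~56 failures/year: a dead box most weeks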

      MAINFRAMES SHOVEL SO MUCH DATA IT'S NOT FUNNY.

      And now Linux can run on them.

      Be afraid.

  • "Mainframe" (Score:3, Informative)

    by Anonymous Coward on Wednesday December 04, 2002 @06:24AM (#4808944)
    The etymology of mainframe is incorrect in the article. While nowadays "mainframe" is indeed used to distinguish big lumps of computers from smaller ones, back in the day the "box" or "chassis" of even a microcomputer was originally called the "mainframe".

    I have documentary evidence from the dawn of microcomputing to prove it. It was the Main Frame of the computer, to which one attached Peripherals. Microcomputers just had very small Main Frames.

  • by Early90sRetroGuy ( 607849 ) on Wednesday December 04, 2002 @06:45AM (#4809000)
    I just got laid off from an operator position at a large, old company that has invested a lot of time and money in their IBM AS/400s. Not exactly mainframes, but it's the same idea. They have been there forever, they're doing their job, etc. No problems with the machines at all. The only problem is that the developers are nearly all in their 60s and will probably retire soon. And most of this generation (and probably the last one) don't even want to look at anything in COBOL, RPG, CL, or whatever the system's applications are developed with, much less make it a career. Eventually these things will die because nobody will know what to do with them. In 10 years it will be damn near impossible to find people who will work with anything that isn't GUI-based.

    Chris
    • Sour grapes. The only reason there are no new mainframers is, in my opinion, the ignorance and arrogance of the up-and-coming programmers. What happened to education? Computers are 1's and 0's. Yes, there will be a learning curve, but it only gets steep for the closed-minded.

      I have Java programmers who whine for us to get a Linux LPAR, but when I try to talk to them about things such as filesystems, or anything else that is fairly universal in the world of computers, they are clueless, which shows they don't even know their beloved Linux (I love Linux, by the way).

      So, is it the frozen mindset of the programmers which is to blame, or the cads who are teaching them?

      And, c'mon... COBOL is EASY. Java has a much steeper initial learning curve.

      And COBOL is faster.

      I'm thick as a whale omelette.
    • by bungo ( 50628 ) on Wednesday December 04, 2002 @09:22AM (#4809373)
      Don't be silly. They're not dying out.

      While I get to play with Oracle, Apache, Java, etc., the group I work with is only 10 people, whereas not 10 feet away from us is one of the many groups of mainframe-only developers.

      They have their 3270 emulators, program in COBOL, do some JCL, and there are a couple of hundred of them. Quite a number of them are under 30 (although there are also quite a few over 50).

      A lot of these mainframers here are on contract from a few main agencies. These people are full-time employees of the agencies - places like EDS.

      They're not dying out, because if they lose one, then EDS finds another monkey, trains it for a few months on JCL and COBOL, and then puts them out at contract rates.

      There seems to be a never-ending supply of these monkeys who will exchange their lives for a boring, stable, if not well-paid, job.

  • by MosesJones ( 55544 ) on Wednesday December 04, 2002 @07:42AM (#4809108) Homepage

    People are still buying the new mainframes and AS/400s (which should be lumped in with them), especially now that they run Java and other new technologies.

    Why? Because of the support staff you need to run one. Is Unix harder than Windows 2000? Are the people cheaper? With these beasts it's a moot question, because YOU WON'T EMPLOY A SYSTEMS ADMIN for your server. You will outsource all of that to IBM, and they will make sure it works.

    My favourite on this is being at a place with around 20 mainframes and AS/400s that had been asked to consider standardising on Windows going forward. The IT manager's challenge to the sales guy was "How often does your stuff fail?", to which the sales guy countered, "Well, when was the last time you had an expensive maintenance job on these servers?"

    The reply was that 4 years previously an IBM engineer had called to arrange a time to visit and replace a disk in the server that might fail soon. Two years before that, one had phoned to arrange a time to replace a processor board that was not performing correctly.

    2 incidents on 20 machines in 10 years.

    They elected not to move to Windows for infrastructure.

    Then along came Java and suddenly you can buy these ultra-reliable boxes to run all of your newest and brightest applications.

    Unix might whup Windows, but OS/390 is Lennox Lewis standing at the back of the room with Ali, smiling while they watch the little boys fight.
    • 2 incidents on 20 machines in 10 years.

      And note that IBM called the system managers, not the other way around. The hardware notified IBM that maintenance was needed.
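
      That "phone home" trick is predictive failure analysis running on the service processor. In spirit (purely a hypothetical sketch, with made-up thresholds and function names) it's something like:

      DISK_SOFT_ERROR_LIMIT = 50    # corrected-error count that predicts failure

      def check_disk(disk_id, corrected_errors):
          # Open a service call *before* the disk actually dies.
          if corrected_errors > DISK_SOFT_ERROR_LIMIT:
              call_home("disk %s: %d corrected errors, schedule proactive replacement"
                        % (disk_id, corrected_errors))

      def call_home(message):
          # On the real hardware this goes out over a dedicated service link.
          print("CALL HOME:", message)

      check_disk("0A3F", corrected_errors=73)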

  • by constantnormal ( 512494 ) on Wednesday December 04, 2002 @07:47AM (#4809116)
    ... when the application designs are flawed, turgid chunks of garbage that poorly attempt to mimic a bizarre corporate organizational structure that is changing next week.

    Hardware design always has been (and probably always will be) WAY out in front of software design, and yet people are all too willing to spend the odd extra million on hardware while putting as little effort into software as possible.

    In most companies they are clutching obsolete applications like life preservers, when in reality they are anchors.
    • by Loundry ( 4143 ) on Wednesday December 04, 2002 @11:57AM (#4810516) Journal
      In most companies they are clutching obsolete applications like life preservers, when in reality they are anchors.

      God knows you're right! When I worked at very-large-retailer-to-be-unnamed in the IT department I was floored by how much crappy software they had built on top of their hardware. I can't remember how many times I thought, "Why not just use CVS?" or "Why do we have to use this thing?"

      First, if you replace something that's working, even if it's working extremely inefficiently, it might break. The perception of something breaking is about one trillion times worse to the PHBs and the execs than the perception of something working extremely inefficiently, especially in a retail-management mindset.

      Second, especially if you have legions of data-entry people trained to use the extremely inefficient software, the cost to replace and retrain is higher in the short term than staying with the extremely inefficient system. PHBs and execs, especially in a retail mindset, can't think about long-term cost savings in IT because IT is already a "cost center," not a "profit center."

      In short, two reasons for bone-headed software in the enterprise: perception and cost. Mainly perception.
  • Mainframes aren't going away because they are actually cheaper to run. And regardless of what some posters have said, they don't have vacuum tubes. What they do have is dynamic CPUs, HUGE I/O buses, optical data connections, massive storage, etc.

    While some companies have poured cash down the drain in order to use the latest buzzword technology, smart companies use mainframes with COBOL/CICS/DB2. Train your people once and only once.

    What do webservers provide over this combination aside from pretty graphics? Not much. HTML-based apps are the rich man's CICS. Granted, it isn't a glamorous career, but it is a VERY effective technology that is rock solid. Programmers who do PC work can't imagine working on the mainframe. But it is very efficient.

    The tech world has come full circle. Client-server was hot for a while, but it was very hard to keep the clients up to date in a large organization, and it required bandwidth of the GODS to transfer all the data around. Oh, let's go to web services. Okay. Now we are back to the mainframe model. The centralized server model is basically (web servers w/o the pretty graphics) = (the mainframe model).
  • by xyote ( 598794 ) on Wednesday December 04, 2002 @08:32AM (#4809205)
    The key characteristic that is unique to mainframes is the scheduler. That is, they will guarantee progress on any job no matter what the system load. No Unix/Linux system can make this claim. To guarantee progress on Unix, the system has to be under-utilized. On Unix, if you have 99-100% CPU utilization, something is seriously wrong. On mainframes, if you don't have 99-100% utilization, something is seriously wrong. The reason for this is that hardware used to be hideously expensive, so users milked it for all it was worth. At any rate, mainframes can handle workloads that will simply kill Unix. They may run a little slow, but they won't just sit there and thrash.


    BTW, LPAR is just VM running in firmware. It allowed IBM to sell the advantages of VM (testing) to MVS customers who didn't want to "run" another operating system.
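
    For flavor, here's a toy sketch of guaranteed-progress scheduling (this is NOT the real MVS workload manager, just the general idea of promising every job a minimum CPU share even when the box is pegged at 100%):

    def allocate(jobs, capacity=1.0):
        # jobs maps job name -> guaranteed CPU share; guarantees must fit in capacity.
        guaranteed = sum(jobs.values())
        assert guaranteed <= capacity, "guarantees are overcommitted"
        spare = capacity - guaranteed
        # Each job gets its guarantee plus a proportional slice of the slack,
        # so even a fully loaded machine never starves the batch work.
        return dict((name, share + spare * share / guaranteed)
                    for name, share in jobs.items())

    print(allocate({"online_txns": 0.60, "batch_settlement": 0.25, "test_lpar": 0.10}))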

  • Don't forget the government's mainframes. The agency I work for is still highly dependent upon our mainframe. It has become a bit more distributed in the last few years, but all of our critical work is done through the mainframe.

    Don't you just love mainframe emulators as well?

  • by ChaoticCoyote ( 195677 ) on Wednesday December 04, 2002 @10:18AM (#4809698) Homepage

    A comfort zone is important to large, monolithic organizations. What works, works. Why change the old and reliable for something new and untried?

    Some of my best friends make their living writing COBOL for mainframes; attempts by their agencies or companies to move to "new" technologies have been costly in both time and resources. If a green bar report provides all the information an accountant needs, why rewrite the system to use fancy HTML output that adds nothing but pretty colors? If anything, many web based systems reduce the amount of information available to make room for lots of unproductive frippery.

    I spent the first 10 years of my professional career in COBOL on mainframes and minis -- CDC Cybers, VAX clusters, Honeywells -- doing some pretty boring stuff. I moved into PC programming 15 years ago, and I prefer it for a number of reasons -- but I'm not blind to the realities of the bleeding edge and the stupidity of modern PC software design.

    Mainframe applications tend to accomplish very basic tasks in a simple way; even 10-million-line COBOL apps are pretty straightforward. The focus is on reliability and accuracy, not buzzwords. PC developers have an almost pathological lust for the bleeding edge -- which gives us pretty but buggy applications.

    On the PC, amid an embarrassment of riches, with more languages and tools than we can enumerate, we constantly throw out the old to chase the new. Windows would be as reliable as a mainframe OS if Microsoft spent more time on QA and less on figuring out how to make curved corners on plastic-looking window borders.

  • by Schnapple ( 262314 ) <tomkiddNO@SPAMgmail.com> on Wednesday December 04, 2002 @12:29PM (#4810765) Homepage
    I work at a fairly large university. We run our student information system on an AS/400 mainframe, and I work on the billing side of it. What's struck me about this place is that while the mainframe is old (circa 1985), the people are older still.

    Recently we added the ability for the students to pay their bills online via the web, taking a bold step into 1998, albeit four years late. In fact, we mainly only did it because another university in this state (the bigger one) did it, and we didn't want to look like we were behind. The software to do this literally just adds more layers to the mainframe process. That was easier than moving to a new system. While the seasoned web pro got to use ASP.NET and C#, I'm sitting here at the age of 25 writing COBOL from scratch to be able to post transactions he captures. That the process is disconnected and difficult to keep in sync no one seems to mind.

    They say that we're getting a new, web-based system, "in about six months". I'm still not sure if this means no more mainframe, but apparently the project has been six months away for about two years now.

    My coworkers fall into three categories - people younger than me who are still in school and are getting the heck out of here when they graduate, people my age who are married (like me) but they have kids and are completely stuck here, and people who are much older than me. One of my coworkers is literally a grandmother who codes COBOL and hates computers.

    And that's really the big problem. I'm sure COBOL and Natural (a pseudo-scripting language for the ADABAS databases we use) are fine languages, but you'd never know it by the way they're used here. I received no training once I got here - I was literally thrown in with a vague promise of further training, only to have the promiser go on to a better job. I was able to swim anyway, and got promoted within fifteen months.

    People here aren't concerned with keeping their skill sets up to date; they're more concerned with getting their kids to Little League practice. The guy across the room from me is trying like hell to get a better job, but he's 56, divorced, in hellacious debt, he knows one thing (COBOL), and he steadfastly refuses to learn anything else. He's like the guy with a hammer who sees everything as a nail. He regularly gets turned down for jobs he's perfect for in favor of young, know-nothing punks (like me).

    A few months back (for some reason) they gave us VB.NET training. While everyone in the room looked terrified of object-oriented programming, I was making shit dance across the screen and rewriting everything in C# for kicks. That we're an 80% conservative university that's terrified of change doesn't help things either. My coworkers are mostly more concerned with keeping the new stuff out so that they don't have to learn anything new before they retire.

    Now, I'm not saying that Mainframes are evil or that people's natural desire to stay the same is dragging anything down, but part of the reason Mainframes are still around is due to a complete reluctance to upgrade. Sure, at some point it will become inevitable, but most of my co-workers are ready and willing to put that off until after they retire.

    And I'm not saying that everything should always be re-written in "flavor of the month" language to run on "hardware platform of the moment", that's not practical either. I mainly think we're seeing the results of a generation and a mentality that started at the low end of the Moore's Law curve and attacked it like any other job. People here don't see programming as a passion, but that thing they do until they go home (not unlike people who sell radio air time or something trivial like that).

    As for me, I'm getting out of here as soon as I can.

"Ninety percent of baseball is half mental." -- Yogi Berra
