IBM Hardware

IBM's Mainframe Dinosaur Turns 40

theodp writes "According to an SFGate.com article, PCs were supposed to kill off the mainframe, but Big Blue's big boxes are still crunching numbers, posting sales of $4.2 billion in 2003. First unveiled on April 7, 1964, the IBM mainframe computer celebrates its 40th birthday this week with a sold-out party at the Computer History Museum." The SFGate article also reveals: "Doug Balog, an IBM vice president, noted that 70 percent of the world's data are still housed in mainframe computers."
This discussion has been archived. No new comments can be posted.

  • Skynet won't be able to take over with just a bunch o' desktops...
  • by BWJones ( 18351 ) * on Monday April 05, 2004 @06:38PM (#8774149) Homepage Journal
    PCs were supposed to kill off the mainframe, but Big Blue's big boxes are still crunching numbers, posting sales of $4.2 billion in 2003.

    Well, there is a reason you still see COBOL jobs being posted from time to time. The IBM mainframe architecture was well designed and well implemented, and to quote an oft-used phrase: "if it ain't broke, don't fix it".

    Of course they have made some improvements over the years, but these things are going to have a mighty impressive return on investment over the course of their lifetimes. Much more so than your average desktop PC, which (if you're running Windows) needs (is required) to be replaced every couple of years or so.

    • by adler187 ( 448837 ) on Monday April 05, 2004 @06:51PM (#8774313) Journal
      Or, "If it ain't broke... You aren't trying hard enough!" (according to Red Green that is)
      • by rgmoore ( 133276 ) * <glandauer@charter.net> on Monday April 05, 2004 @07:02PM (#8774443) Homepage

        And IBM takes yet another spin on that. Their view is "if it breaks, figure out why and change it so that it won't break that way again". Mainframes are very powerful and have great I/O, but their greatest strength is reliability. They have tremendous failover capability, can hotswap components so that they can keep running as they're repaired or upgraded, and are instrumented so if one does fail the cause can be traced and corrected. No, make that the cause will be traced and corrected. Whenever an IBM mainframe fails, anywhere in the world, IBM will hear about it and go to the trouble of a post-mortem.

        • by lucabrasi999 ( 585141 ) on Monday April 05, 2004 @07:26PM (#8774675) Journal
          but their greatest strength is reliability

          What does the "Z" in Z-series stand for?

          Zero Down-time

        • by Anonymous Coward
          Accuracy, reliability, and security are only achieved on mainframes.

          I have always wondered why modern managers don't care about failures - resorting to rebooting and crossing their fingers.
          IBM goes after failures hammer and tongs, because DATA CORRUPTION is not acceptable at any price.
          Makes me wince thinking of financial institutions running everything on MS - who cares about in-flight I/Os, memory 'leaks', and glitches.

          • That is because most managers have never made an effort to compute what it would cost them if they lost their data, and also do not have a clue what downtime costs them.

            I have worked with someone who had done these figures, and he was never happy with downtimes, to say the least.
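
            As a back-of-the-envelope exercise, here is a rough sketch of that computation in Python (every figure is an invented assumption; plug in your own):

                # Rough downtime-cost estimate. All numbers are illustrative
                # assumptions, not real data.
                revenue_per_hour = 50_000.0  # sales lost while systems are down
                staff_idle_cost = 5_000.0    # hourly wages of idled employees
                recovery_cost = 20_000.0     # one-time cost to restore and verify data
                outage_hours = 8.0

                total = outage_hours * (revenue_per_hour + staff_idle_cost) + recovery_cost
                print(f"Estimated cost of one {outage_hours:.0f}-hour outage: ${total:,.0f}")

            Run those (invented) numbers and a single eight-hour outage comes to $460,000 - which is why the people who actually do the math are never happy with downtime.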

    • by gkuz ( 706134 ) on Monday April 05, 2004 @07:51PM (#8774900)
      Of course they have made some improvements over the years, but these things are going to have a mighty impressive return on investment over the course of their lifetimes. Much more so than your average desktop PC, which (if you're running Windows) needs (is required) to be replaced every couple of years or so.

      While I am also a fan of IBM mainframes (we've had numerous mainframes with uptime measured in years), in all fairness, they have to be replaced periodically as well. Not because they're no longer capable of doing the job, but because after a while IBM will take them "off maintenance", or will take an old rev of the OS (or VTAM, or NCP, or CICS) "off maintenance", and it just turns out that the current supported level will not run on your box. IBM has to make money, too. And any company that can afford a mainframe and needs one to run its core business would no more run an unsupported OS than you would go to work without your pants. So maybe the upgrade cycle isn't as short as PCs', but I'd bet you have almost no chance of finding a 15-year-old MVS box running any business anywhere.

    • by Flexagon ( 740643 ) on Monday April 05, 2004 @08:03PM (#8775024)

      The IBM mainframe architecture was well designed and well implemented...

      Indeed. In fact, there are many now-old innovations in it that "newer" technologies still don't completely get, like a true virtual machine architecture. Such capabilities, relatively trivial to add if designed into the hardware from the beginning, are painful and inefficient to emulate if not.

      Then again, I don't miss hex-based floating-point!

      Of course they have made some improvements over the years...

      One of the more amazing things I saw at the time was a workable subset implemented in the '80s on a PC card. It turned an early IBM PC into a desktop mainframe for some applications.

      • by BrynM ( 217883 ) *

        One of the more amazing things I saw at the time was a workable subset implemented in the '80s on a PC card. It turned an early IBM PC into a desktop mainframe for some applications.

        I remember some COBOL developers in the mid-90s using MVS on some specialized PCI cards. I used to have a bookmark to the vendor that made them, but it's long gone now. Instead, how about running ES/390 in an emulator [conmicro.cx]? Dust off those JES commands and have some fun IPLing on your PC. Now if only Storagetek made USB cables for the

    • Actually, IBM pushes WebSphere and DB2 on the mainframe. Indeed, they even push Linux in a partition on the mainframe now, too, and HiperSockets for fast TCP/IP communications between partitions.

      I did some work for a large payroll company, and this was the platform IBM sold them for running mission critical payroll processing for its thousands of customers.

      This isn't about legacy applications as much as it is about consolidating clustered applications onto an easier-to-manage platform. Believe it or not, you can still do state-of-the-art software development despite the physical housing being a mainframe.

      We did all the software development on the PC. The mainframe was simply the deployment destination. This is one advantage of the J2EE architecture. This also ruled out .NET, as Windows didn't offer the stability that Linux offered on the mainframe. However, we did happen to use Windows on the PCs in order to be able to use Rational Rose. Barring Rose, which isn't needed in deployment, our development architecture was completely compatible with Linux.

      From a J2EE perspective, this eliminated the need to manage clusters in operation, as well as to develop for them. Clustering, despite its raves in the news, has a lot of production related issues that the mainframe solves. This is part of IBM's marketing pitch.

  • ...and they tend to deal with tape media a whole lot better/faster too
  • by account_deleted ( 4530225 ) on Monday April 05, 2004 @06:40PM (#8774178)
    Comment removed based on user account deletion
    • by Dan East ( 318230 ) on Monday April 05, 2004 @06:48PM (#8774287) Journal
      I do find that statistic a bit hard to believe, especially when you consider the amount of information residing on TerraServer, Google, etc.

      Dan East
      • According to the TerraServer web page, they have 4 nodes with 2 TB each... 8 TB is not all that much data.
      • by Dirtside ( 91468 ) on Monday April 05, 2004 @09:07PM (#8775460) Journal
        I don't know about that. There are quite a lot of big old mainframes running weather tracking and analysis software, for example. The USGS, I believe, has a number of mainframes that collect several terabytes of weather data per day... and they keep all of it. Forever.

        There are quite a lot of such obscure applications out there (especially in the earth and space sciences) that gather titanic amounts of data. Even if Google cached all five billion web pages, and each web page was a megabyte (which is probably way overestimating), that's 10 petabytes of data (5 petabytes each for the pages and the cache). Now think about the thousands of mass-data-collecting computers there are out there, that (between them) archive more data than that every day.
  • by Thanatopsis ( 29786 ) <despain.brian@ g m a il.com> on Monday April 05, 2004 @06:43PM (#8774220) Homepage
    Mainframes are usually more robust, have more developed architectures, and in general are designed around a more stringent set of standards. Most mainframes are designed with 24/7 use in mind. A friend of mine at NORAD talked about a PDP-11 with a 6-year uptime. Granted, a PDP isn't a mainframe, but those machines are architected with longevity in mind.
  • by batkid ( 448363 ) on Monday April 05, 2004 @06:44PM (#8774230)
    While the overall structure of mainframes (OS, programming languages, etc.) has not changed much over the last 40 years, the actual guts of these computers have improved with the times (disk, computing capacity, etc.). Mainframes are much better suited for data warehouse and batch-processing applications than today's more "sexy" multi-tier architectures. The only downside to mainframe computing would be cost.

    I personally don't think mainframes will be gone... ever.
    • I can only imagine the future... when the power of today's mainframes could be contained within small boxes under your desk!

      Ohh, wait..

    • by Kozar_The_Malignant ( 738483 ) on Monday April 05, 2004 @07:23PM (#8774648)

      >the actual guts of these computers have actually improved with the times

      God, yes. You hardly ever see iron-core memory anymore, and punch cards are being phased out right and left.
      • Scarily enough, we found a punch card, in pristine condition, the other day. It's a Fortran punch card - looks like F77, but might be an older spec than that (if there is one; I don't know). I would be amazed that we still have it, but then again we have a paper oscilloscope (as in no CRT, just draws on paper) that dates from the 1950s, maybe earlier.

        I'll keep my pointies and clickies, thanks.
  • Consequently... (Score:5, Informative)

    by bc90021 ( 43730 ) * <bc90021 AT bc90021 DOT net> on Monday April 05, 2004 @06:45PM (#8774235) Homepage
    COBOL [cobol.com] is still in wide use. It is even being used with .NET [adtools.com], just to give you some idea of how widespread it is.
  • by stratjakt ( 596332 ) on Monday April 05, 2004 @06:51PM (#8774319) Journal
    Not only are they still around, the world is moving back towards a mainframe-ish approach. Hell, a webserver is a mainframe-ish approach if you consider a browser a dumb terminal (which I do).

    Mainframe + dumb terminals:

    Code executes in one place (one machine to maintain from a software viewpoint). Code 'lives' with the data.

    Collaboration/groupwork/etc is a no-brainer. "Brenda bring up invoice #43223 and blah blah blah".

    Software is protected from users (for the most part).

    PCs + Fat/thin Clients:

    Code executes all over. You wind up with versioning/dependency hell. It's a bitch to administer. Just when you think everything's good, some jackass installs a swimming-fish screensaver and you're back to level 0.

    Data winds up in multiple, disjointed locations. Bleh.

    Where I work we installed, and still support (and will for a decade past the official HP EOL date), HP 9000 series mainframes. I mainly deal with moving that stuff to the PC world, and I can tell you, life's a whole lot simpler when you don't have to worry about what version of the OS, etc., is running on the client machines.

    We're looking hard at Windows Terminal Services - essentially a modern-day mainframe implementation, complete with GUI. Or we could go with multiple X sessions, but our customers aren't too thrilled with the idea of *nix.
    • by green pizza ( 159161 ) on Monday April 05, 2004 @07:15PM (#8774570) Homepage
      Take a good look at the SunRay terminals that Sun is offering. Rather than hack and patch Windows, they simply made a few modifications to X; most of the client-server tech was already in place.

      Thin-client Windows has been a nightmare, and it's only getting worse. One of the original incarnations, WinDD, hosted by a Tektronix-modified version of Windows NT 3.5, wasn't so bad... Windows was simpler back then. But all of the "ease of use" and "zero administration" crap Microsoft and Citrix have built up since then has made thin-client Windows a miserable beast to deal with. I know many administrators who swear a building full of plain PCs and a good Norton Ghost setup is easier to maintain.
      • I clicked submit too early... I should also point out that one potential solution may be to buy "thin" x86 workstations, just a cheap PC with lots of RAM, no drives, and a simple BIOS that supports netbooting. It's *nix, I know, but a netboot X terminal may be the way to go. Some scripts could be written to allow for local storage for those that need it. (Sun is doing the same thing with the SunRay, they have a USB storage patch now).

      • I know at least one site where Terminal Services is being used pretty effectively, even over the Internet. It's being used for administration and technical tasks rather than end-user applications, but it seems to work well -- not much lag/net load, no issues with multiple sessions (up to about 6; a mainframe replacement this is not).

        Performance is much better than with VNC.

        If it has a failing, it's that the rules for when a session times out are kind of inscrutable and result in reconnects when a session is le
      • That's funny, we LOVE our Citrix environments, and so do our clients. We only have a couple of boxes per client we really have to worry about, backups are guaranteed to be centralized since the thin terminals have no local storage, and best of all, terminals are $50 new or $25 used. We generally have only a couple percent fat clients for those weird but necessary apps that all customers seem to have, which we don't feel safe loading on the Citrix farms. Best of all anyone with a web browser can get set up from home
    • by cbreaker ( 561297 ) on Monday April 05, 2004 @07:20PM (#8774615) Journal
      A web browser is a little more than a dumb terminal, but it is just a terminal nonetheless.

      Doing everything over dumb web browsers is okay and all, but it's not very user-friendly for a lot of applications. In order to bring web apps up to the usability of a traditional application, you're still dealing with versioning problems on the clients, because the browsers will have to become a lot smarter. Java can overcome many of these problems if you can write your apps in it. But again, what version of Java have you got on your clients? Where is the code executed?

      In a perfect world, there's one server and all the clients can run the apps without worries about versions. Unfortunately, we don't live in one.

      If you run a tight shop, and don't allow people to install screensavers that will bring you to level 0 (incidentally, that same individual could do a lot more damage if untrusted and left to run amok in your mainframe), you can actually put together a decent system using distributed servers versus a mainframe.

      In my opinion, it's all the way you manage the system. You can quite easily run a terrible shop whether you run big iron or PC servers.
    • by exp(pi*sqrt(163)) ( 613870 ) on Monday April 05, 2004 @07:42PM (#8774800) Journal
      if you consider a browser a dumb terminal
      Now things may have changed since I was a lad. But when I worked at IBM many years ago we used 3278 terminals. They practically are web browsers, invented decades before Mosaic. The form-based approach 3278s use is much more powerful than the character-at-a-time nonsense like the VT100 and its successors. One great advantage is that things like text editors were still quite usable when the mainframe was being hammered.
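
      The economics of that are easy to sketch. A toy Python calculation (field contents invented) of the host round trips each style costs:

          # Host round trips to fill a 3-field form, two interaction styles.
          fields = {"name": "SMITH", "acct": "1234567", "amount": "19.95"}

          # VT100-style character mode: every keystroke goes to the host,
          # which echoes it back.
          char_mode_trips = sum(len(v) for v in fields.values())

          # 3270-style block mode: the terminal handles editing locally and
          # ships the whole modified screen in one exchange on Enter.
          block_mode_trips = 1

          print(char_mode_trips, "round trips vs.", block_mode_trips)

      Seventeen interrupts versus one - which is why a hammered mainframe could still feel responsive in an editor.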
  • biased quote? (Score:5, Insightful)

    by dj245 ( 732906 ) on Monday April 05, 2004 @06:52PM (#8774337) Homepage
    "The SFGate article also reveals: "Doug Balog, an IBM vice president, noted that 70 percent of the world's data are still housed in mainframe computers."

    Is it just me, or is that a bit of a biased quote? It's kind of like Steve Jobs saying that "Apples are the fastest computers on the face of the planet", or Bill Gates saying that "Windows is the most secure OS in the world". These statements may or may not be true. Studies may be done to determine the validity of the claims, but I would argue that ultimately most of the world's data is tied up in Girls Gone Wild DVDs. The point is that the makers of the claims have a personal stake in them, making them slightly more apt to be taken with the obligatory salty grain.

  • by Anonymous Coward on Monday April 05, 2004 @06:52PM (#8774340)
    "Doug Balog, an IBM vice president, noted that 70 percent of the world's data are still housed in mainframe computers."
    ..and Google stores the other 30 percent.
  • by craXORjack ( 726120 ) on Monday April 05, 2004 @06:54PM (#8774350)
    "Doug Balog, an IBM vice president, noted that 70 percent of the world's data are still housed in mainframe computers."

    The other 30% is porn and cookies.

  • by agslashdot ( 574098 ) <sundararaman DOT ... AT gmail DOT com> on Monday April 05, 2004 @06:56PM (#8774372)
    At my first startup, one of my first multi-person, multi-year Java projects was a mainframe screen scraper (TN3270 using AWT - example [developer.com]). I was fresh out of college & totally unaware that mainframes still ruled the planet. Those two years & the huge revenues it brought led the startup to be acquired and made a lot of people really rich (minus moi, of course :(
    Lots of money to be made in desktop-mainframe connectivity.
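
    For the curious: the first thing such a scraper does is the Telnet terminal-type negotiation, where it claims to be a 3278. A minimal Python sketch (host name hypothetical; the hard part, parsing the EBCDIC 3270 data stream that follows, is left out):

        import socket

        IAC, DO, WILL, SB, SE = 255, 253, 251, 250, 240
        TERMINAL_TYPE, SEND, IS = 24, 1, 0

        # Hypothetical host; point this at a real TN3270 service.
        s = socket.create_connection(("mainframe.example.com", 23))

        # Server: IAC DO TERMINAL-TYPE -> we agree to state our terminal type.
        if s.recv(3) == bytes([IAC, DO, TERMINAL_TYPE]):
            s.sendall(bytes([IAC, WILL, TERMINAL_TYPE]))

        # Server: IAC SB TERMINAL-TYPE SEND IAC SE -> we claim to be a 3278-2.
        if s.recv(6) == bytes([IAC, SB, TERMINAL_TYPE, SEND, IAC, SE]):
            s.sendall(bytes([IAC, SB, TERMINAL_TYPE, IS]) + b"IBM-3278-2"
                      + bytes([IAC, SE]))

        # From here on the server talks the 3270 data stream (EBCDIC text,
        # buffer addresses, field attributes) -- the part a scraper decodes.
        print(s.recv(1024))
        s.close()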
  • Linux not mentioned? (Score:4, Interesting)

    by Alien Being ( 18488 ) on Monday April 05, 2004 @06:56PM (#8774382)
    This [techtarget.com] claims that as of the end of 2002, 15% of the mainframes IBM was selling would be running Linux.

    Has that number dropped off?
    • by Detritus ( 11846 ) on Monday April 05, 2004 @07:10PM (#8774515) Homepage
      I think they are talking about mainframes that run Linux as a guest operating system on a virtual machine. The real operating system is VM. VM allows you to create a large number of virtual machines, each of which can run Linux or another operating system.
    • by Jahf ( 21968 ) on Monday April 05, 2004 @07:14PM (#8774552) Journal
      While Linux has advanced in many places, most people who were interested in it on mainframes quickly realized that it didn't fit so well there.

      Major differences were required in the kernel to support a scalable Linux at that level, which meant source-code compatibility wasn't always reliable. So even though it was Linux, you still had to have a core team trained up on the intricacies of the mainframe system and its programming, and it is still costly (you may need 5 people to maintain the same number of machines that a mainframe can handle with just 1 operator, but the salary of that 1 mainframe specialist may be close to 5 times the cost of the average web-farm maintainer, who is often just a kid in college happy to make triple minimum wage).

      Additionally, many of the early Linux mainframe deals were for hosting services where the mainframe functioned as a place to store many, many, many Linux virtual machines, the end effect being that it didn't reduce overall system maintenance much except at the hardware level. The markets where that many Linux virtual machines are needed are often served fine by smaller hardware in bulk that can be updated regularly over time.

      It's not dead, but it definitely didn't live up to expectations that IBM set.

      Linux is still better suited to the mid-size and smaller hardware world. That may change, but IBM expected it to change very fast. Plus, 15% of new mainframes is not that large a number. Most mainframe sales now are to existing mainframe users; it is not a growth market.

    • by PCM2 ( 4486 ) on Monday April 05, 2004 @07:55PM (#8774930) Homepage
      Note that the quote references the number of mainframes IBM is selling. Most of the mainframes currently in use were sold years and years ago.

      That said, I've been talking to IBM about Linux on the mainframe recently and while I don't have an actual figure handy, I wouldn't be surprised if the number your source cited were true, and in fact there may be even more movement in the Linux-on-mainframe area than that figure suggests.

      IBM is marketing Linux on the mainframe primarily to existing mainframe customers who want to further leverage their investments there. Remember that mainframes tend to be very modular and upgradeable ... you need not replace the thing to see performance gains or new functionality. You can just buy some new parts.

      So IBM is selling a version of Linux that will run under zVM, its mainframe virtualization technology, as well as hardware modules that are basically PowerPC G5 units you can add to the base hardware for the explicit purpose of running Linux. (I don't think you necessarily need the add-on modules to run Linux, I just know that they're available.)

      This doesn't really have any benefit at all if you're running a compute cluster or any other application where the Linux boxes are running at high utilization all the time. The main purpose for this is consolidation of lightweight servers. Let's say you have a farm of a hundred Linux Web servers that mostly sit around idle, and the heaviest lifting they need to do is to hand off transactions for processing in the database on the zSeries mainframe. IBM suggests that you instead roll all those servers into virtual machines on the mainframe itself.

      Note that we're usually talking about a mainframe that's already in production use, here. You don't need to wipe your mainframe and start over with Linux. You can run Linux instances and z/OS instances at the same time. You gain the following advantages:

      1. You can now use the same staff to maintain those Linux "boxes" that you were already using to maintain the mainframe
      2. VM makes it pretty easy to provision new virtual servers as needed, and keep their configurations consistent
      3. You get the benefit of increased I/O -- the Linux instances think they're communicating over TCP/IP to some remote database, but really all the I/O happens using the in-memory channels on the mainframe
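
      The nice part about #3 is that the application code doesn't change at all. A hypothetical sketch (host name and port invented) - the client below is ordinary TCP code, and whether the route crosses a machine room or stays in memory is invisible to it:

          import socket

          # Hypothetical address. To a Linux guest this looks like any TCP
          # peer; on the mainframe the route can be an in-memory HiperSockets
          # channel rather than a physical NIC -- no application change needed.
          DB_HOST, DB_PORT = "db.example.internal", 50000  # 50000: a common DB2 port

          conn = socket.create_connection((DB_HOST, DB_PORT))
          print("connected via", conn.getpeername())
          conn.close()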
      Are these advantages compelling enough to make a lot of companies run out and spend the money on a mainframe? Probably not, especially with today's economy so focused on short-term gains instead of long-term ROI. But if you've already spent the money it could be pretty attractive.

      From my understanding, IBM doesn't really have a whole horde of customers yet, but I bet a lot of mainframe customers are evaluating the option.

      More information on this, as well as mainframe topics in general, in last week's InfoWorld: here [infoworld.com], here [infoworld.com], and the full PDF special report on mainframes here [infoworld.com].

      • by Dammital ( 220641 ) on Monday April 05, 2004 @10:05PM (#8775829)
        So IBM is selling a version of Linux that will run under zVM, its mainframe virtualization technology, as well as hardware modules that are basically PowerPC G5 units you can add to the base hardware for the explicit purpose of running Linux
        IBM does not sell any Linux distribution. They provide documentation for running one of your choice (e.g. SuSE or RHEL) and offer support for a fee.

        The S/390 port of Linux will run natively in a zSeries logical partition (or LPAR -- a built-in virtual machine facility). You can define between 15 and 30 LPARs in your complex, regardless of the number of physical processors present. I run twin z/OS images and one SuSE Linux server on my single-processor system, without the benefit of z/VM.

        There is no "PowerPC G5 unit", though you may be referring to a so-called "IFL" processor. This is a CPU that is only licensed to run z/VM or Linux. Since z/OS is charged on a per-CPU basis, you can save on software costs if you purchase additional IFLs instead of full-function processors. (This is only a licensing trick; both types of processor still run S/390 code.)

  • by Shivetya ( 243324 ) on Monday April 05, 2004 @06:57PM (#8774395) Homepage Journal
    Mainframes and minis will be around a long time. Getting PC-based systems up to their level of reliability, ease of use, and maintainability would turn the PC-based system into a MINI.

    I have 75 iSeries (AS/400) machines that I oversee. You want to know how much time I spend per week checking up on them? Only an hour or so. I receive reports from the machines when they have problems. If one has a fault it is usually hardware, and rarely does the downtime pass a few hours.

    Meanwhile the network group (read: uses PC-based technologies) is always fixing something and has 5 people dedicated to it, compared to two for the iSeries boxes. That doesn't count the PC-support group, which supports desktops...

    We have 3 mainframes as well, some of the code from these machines has been in use since the early 70s. Some of the code migrated to the iSeries with little but header changes.

    But the best part: the iSeries has been on 64-bit PowerPC natively for 10+ years. We didn't have to recompile or change 99% of our code to do it. How long has the PC-based world been struggling to get there?

  • by sstammer ( 235235 ) on Monday April 05, 2004 @07:02PM (#8774440)
    I guess this depends on how you define "data". The Economist [economist.com] recently described a Berkeley report that 3.5 to 5.5 *Exabytes* of data were produced in 2002. If you believe the unlikely proposition that Blue Glue is holding 70% of that new data, then you have to wonder why IBM only made $4.2B in selling mainframes to store and process that data.
  • Obso1337 (Score:5, Funny)

    by isomeme ( 177414 ) <cdberry@gmail.com> on Monday April 05, 2004 @07:02PM (#8774444) Journal
    mainframe n. An obsolete device still used by thousands of obsolete companies serving billions of obsolete customers and making huge obsolete profits for their obsolete shareholders. And this year's run twice as fast as last year's.

    - The Devil's IT Dictionary [isham-research.com]
  • SPF/PDF (Score:5, Interesting)

    by wardk ( 3037 ) on Monday April 05, 2004 @07:03PM (#8774451) Journal
    I used to hate the SPF/PDF interface, but after a decade of being forced (by employers) to use the utter shit that is MS Windows, it's now just fond memories of something that WORKED. Also, REXX did (and still does) rock.

    and long after no one cares who Bill Gates was, there will still be Big Blue Iron.

    oh yeah, BSD Lives!
    • Re:SPF/PDF (Score:4, Interesting)

      by kpharmer ( 452893 ) * on Monday April 05, 2004 @09:23PM (#8775542)
      you know, in some ways ISPF was way ahead of its time.

      I remember back around '90 - 14 years ago - that in Dialog Manager I could create a complete database table-update UI with all CRUD functionality (create/read/update/delete) in about 150 lines of simple REXX code. That's with zero reuse.

      Once I encapsulated that, it only required about 40 lines of actual code.

      I've since worked in a lot of Java shops and am accustomed to seeing a thousand lines of code for the same purpose - and everything is written from scratch.

      Well, MVS & ISPF were pretty far behind in some ways, but Rexx & Dialog Manager are still ahead of java & j2ee in some ways. And that's from 15 years ago.
  • Can anyone explain the difference between a mainframe and a supercomputer?
    • by bennomatic ( 691188 ) on Monday April 05, 2004 @07:13PM (#8774541) Homepage
      There are a handful of differences, though many of the definitions overlap.

      The simplest way to think of these two classifications is that
      - "Supercomputer" refers to processing speed and is defined differently in different contexts (i.e. Apple calling its G4 400 a supercomputer because of an outdated US Customs document).
      - "Mainframe" refers to large systems that many users are going to use at the same time, typically via dumb terminal interfaces. Most importantly, mainframes have IO architectures which blow any desktop/workstation out of the water. A good mainframe can be talking to 500 terminals while printing 1000 different bank statements to 100 different high-speed line printers without even breaking a sweat.

      Hope this helps. Any other fun definitions to add?

    • A mainframe is designed for IO; a supercomputer is designed for number crunching. A mainframe doesn't have much more CPU power than a good fast workstation, but the CPU isn't the main part of a mainframe. In fact, a lot of the time data goes in and out of a mainframe without ever going through the CPU. The IO devices have incredible bandwidth to each other, and deal with data without the help of the CPU. A supercomputer just processes a data set.
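
      The closest everyday analogue on a PC/Unix box is zero-copy IO, where the kernel shuttles data between devices without it ever entering your program. A loose Python sketch of the idea (file names hypothetical; Linux-specific, and a mainframe channel goes further by offloading the work onto dedicated IO processors):

          import os

          # Copy a file without the bytes passing through this program's
          # buffers: os.sendfile asks the kernel to move them directly.
          src = os.open("input.dat", os.O_RDONLY)  # hypothetical files
          dst = os.open("output.dat", os.O_WRONLY | os.O_CREAT, 0o644)

          offset, remaining = 0, os.fstat(src).st_size
          while remaining > 0:
              sent = os.sendfile(dst, src, offset, remaining)
              offset += sent
              remaining -= sent

          os.close(src)
          os.close(dst)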
    • by Alien Being ( 18488 ) on Monday April 05, 2004 @07:15PM (#8774568)
      Mainframes:
      General purpose machine.
      Tons of IO bandwidth.
      Substantial processing power.
      Highly redundant and fault tolerant.
      Flexible and scalable architecture.
      Their OSes are very secure and support thousands of users.

      Supercomputer:
      Specialized scientific machine.
      Tons of memory and/or interprocessor bandwidth.
      Loads of processing power, especially vectors.
      IO speed may not be important.
      Redundancy and fault tolerance not as critical as with mainframe.
      Architectures tend to change more frequently.
      OSes not geared for business use.
    • Coming up with a reliable definition for "mainframe" is difficult enough; most people resort to defining them by the OS they run or the vendor that produces them. Short answer: Not all mainframes are supercomputers. Supercomputing generally refers to high-performance computing for lots and lots of number crunching (e.g. scientific applications). A lot of mainframes just hold databases, and focus instead on reliability and availability.
    • by Anonymous Coward
      If the speed is measured in gigaflops, or it looks fancy and new, it's a supercomputer. If it can interface with teletypes, chain printers, reel to reel tape drives, or punchcard readers, it's a mainframe.

      Supercomputers are all about speed. Large size is optional, but it must be able to do at least a billion floating point ops per second.

      Mainframes are always huge, and are all about reliability. They run great, because the current ones were designed in the 1970s, and have had nothing but bug fixes since t
      • by green pizza ( 159161 ) on Monday April 05, 2004 @07:41PM (#8774798) Homepage
        If the speed is measured in gigaflops, or it looks fancy and new, it's a supercomputer. If it can interface with teletypes, chain printers, reel to reel tape drives, or punchcard readers, it's a mainframe...
        Mainframes are always huge, and are all about reliability. They run great, because the current ones were designed in the 1970s, and have had nothing but bug fixes since then.

        A modern IBM zSeries mainframe may have an overall design from the 1970s, but its individual components (CPUs, I/O controllers, etc.), as well as the throughput of its buses, are very modern. A recent mainframe could easily benchmark in the multiple-gigaflops range of raw performance, but that isn't the point. Mainframes are all about moving important data reliably (and, if possible, fairly fast). A credit card company isn't going to trust a Cray, and a scientist isn't going to do his simulations in COBOL on an IBM S/390.
  • 70%? (Score:5, Funny)

    by Nutt ( 106868 ) on Monday April 05, 2004 @07:05PM (#8774469)
    "..noted that 70 percent of the world's data are still housed in mainframe computers."

    They obviously haven't seen my pron collection!
  • Flying Mainframes (Score:4, Informative)

    by computechnica ( 171054 ) <PCGURU@noSpaM.COMPUTECHNICA.com> on Monday April 05, 2004 @07:10PM (#8774523) Homepage Journal
    The most widely used flying command-and-control platform is the AWACS, designed by IBM and Boeing back in the '70s. The USAF, NATO, JDF, and Saudi fleets are all based on the same dual IBM 360 platform (named 4 Pi). These mainframes have all been upgraded in memory and converted from tape drives to hard drives. We still develop the software in JOVIAL and assembler. Info [fas.org]
  • by HitScan ( 180399 ) on Monday April 05, 2004 @07:10PM (#8774524)
    4.2 billion dollars? Did they only sell 6 last year? ;)
  • by segfault7375 ( 135849 ) on Monday April 05, 2004 @07:12PM (#8774536)

    The IBM mainframe computer celebrates its 40th birthday this week with a sold-out party at the Computer History Museum

    Yeah, I'll bet that's going to be a real barn burner :)
  • by fm6 ( 162816 ) on Monday April 05, 2004 @07:12PM (#8774538) Homepage Journal
    This is the 40th anniversary of a mainframe: the System/360 [wikipedia.org]. The 360 was a darned important machine (amongst other things, it was the first computer with byte-addressable memory), but it was hardly the very first mainframe. True computers had been around for about 25 years -- and technically speaking, all computers were mainframes before integrated circuitry made minicomputers and microcomputers feasible.
  • I feel ancient..... (Score:3, Interesting)

    by cbdavis ( 114685 ) on Monday April 05, 2004 @07:14PM (#8774557)
    I saw my first computer in 1966 - an IBM 360/44 (a mod 40 without the MVCL instruction). FORTRAN was the language of choice. I knew where my career was headed. Here I am almost 40 years later.
    Tired of computing and hoping for a less-stressful retirement.

  • Interesting... (Score:5, Interesting)

    by JoeLinux ( 20366 ) <joelinux@gma[ ]com ['il.' in gap]> on Monday April 05, 2004 @07:14PM (#8774558)
    In Woodland Hills, CA, there is a mainframe that contains all the medical records of every event that has ever taken place in the state. (I used to work IT there, and I've seen it...farkin' impressive piece of machinery.)

    They TRIED to convert it to a more conventional system, but they couldn't, because no database on earth could handle the sheer number of records.

    Impressive, no?
    • >They TRIED to convert it to a more conventional system

      That's a very odd definition of conventional... :)

      Mainframes are as conventional as you can get.

      They're also old-school beasts with more raw data-processing power than you can dream of on a pansy PC or ordinary Unix-like system.
    • Re:Interesting... (Score:3, Interesting)

      by kpharmer ( 452893 ) *
      > Impressive, no?

      I'm sure it's a nice solution - but the reason for it probably has more to do with reliability than performance.

      On big Unix hardware these days (I'm talking large IBM/Sun/HP clusters), databases typically sling *billions* of rows around. The cost per transaction is far lower than on a mainframe, and in my experience they are far faster and more scalable.

      However, they simply aren't as reliable either.
  • How many IBM Mainframes does it take to execute a job??

    ..

    four.. Three to hold it down and one to rip its header off.

  • by Tablizer ( 95088 ) on Monday April 05, 2004 @07:23PM (#8774649) Journal
    The "problem" with mainframes is not so much that they are old, but that most of the applications didn't use relational databases. If the applications used relational databases, then one could much more easily slowly replace COBOL applications with a more pleasant language of implementation in piecemeal.

    Mainframes are Turing-complete, so any software can be made to run on them if the tools are built. Thus, things like limited-length file names can be transitioned to longer names, much as Windows allowed one to move to long file names. A mainframe could make an ideal web server because of its security and multiprocessing capabilities. If that is the case, then why is it not done often?

    Companies seem to have trouble doing this because of data-sharing issues. They must keep using the old data while the conversion to newer conventions takes place. But that would mean having Java and PHP apps access data stored in the likes of IMS (navigational) databases, and it would mean having to keep using IMS even after the conversion. (There are IMS-to-relational translation techniques, but they are hokey for the most part, and it is tough to get decent normalization because of the different philosophies.)
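
    To see why the philosophies clash, compare the two access styles. A toy Python sketch (schema invented for illustration): the navigational style walks a fixed parent-child hierarchy the way an IMS program walks segments, while the relational style just declares what it wants and leaves the access path to the database.

        import sqlite3

        # Navigational (IMS-like): data is a hierarchy of segments, and the
        # program must traverse parent -> child paths it knows in advance.
        customers = {
            "C1": {"name": "Acme",
                   "orders": {"O1": {"total": 40}, "O2": {"total": 60}}},
        }
        total = sum(o["total"] for o in customers["C1"]["orders"].values())
        print("navigational:", total)

        # Relational: the same question is a declarative query; the access
        # path is the database's problem, not the program's.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE orders (cust TEXT, total INTEGER)")
        db.executemany("INSERT INTO orders VALUES (?, ?)", [("C1", 40), ("C1", 60)])
        print("relational:", db.execute(
            "SELECT SUM(total) FROM orders WHERE cust = 'C1'").fetchone()[0])

    Flattening the first shape into the second while both are live is exactly the conversion problem described above.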

    Thus, the "problem" with mainframes is not the hardware, but the database conversion. The live data cannot easily be in two kinds of databases at once.
    • by cdn-programmer ( 468978 ) <<ten.cigolarret> <ta> <rret>> on Monday April 05, 2004 @07:44PM (#8774823)
      I do not agree with this at all!

      Alternatives to COBOL have existed since the '60s. PL/I is an excellent alternative. It supports literally everything that is any good in COBOL and gets rid of most of the COBOL crapola. The biggest reason people have not switched is probably that they don't know any better and go with the idea that if it ain't broke, don't fix it.

      As to relational databases, well - they are NOT a good alternative for many tasks that run quite well on mainframes. The fundamental design objective of a relational database is to expose any and all data to applications. In fact, this is diametrically opposed to what we really need.

      Most data ends up archived at some point, and from that point on we need read-only access. This is not what relational database systems try to accomplish.

      Another thing the wanna-be replacement computers do not have is the partitioned data set (PDS). We could probably build such a beast into Linux using loopback mounts or a variation thereof, but it is going to take a lot of work, for reasons I'll describe next.

      A PDS is tied to a set of applications and to a group of users. When you do a loopback mount of a file the system exposes the contents of the file to every user and application in the system. Thus every file in the directory becomes subject to tampering, either inadvertent or deliberate.

      Meanwhile, the contents of a PDS can be relied upon in much the same way as the contents of a tarball can be relied upon.

      What this all boils down to is that the mainframe provides capabilities that are not found in alternative systems.

    • by kpharmer ( 452893 ) * on Monday April 05, 2004 @09:44PM (#8775683)
      > The "problem" with mainframes is not so much that they are old, but that most of the applications didn't use relational databases.

      DB2 was the second commercial database out there (following Oracle by a year, around 1983). It's been on mainframes since the very beginning. I first started working on relational databases in 1986, and it was about 10 years before I finally began to meet a reasonable number of Unix or Windows developers with database experience. And oh yeah - I've gone through *hundreds* of resumes while hiring back in the '90s, so I'm not hallucinating here.

      > If the applications used relational databases, then one could much more easily slowly replace
      > COBOL applications with a more pleasant language of implementation in piecemeal.

      Hmmm, on an IBM mainframe you've got REXX (like a simple functional version of Python), C, Java, PL/I, etc. All reasonable languages. And they've been there for a while - I wrote C on MVS back around 1992, and REXX back around 1990. They all talk to DB2. And by the way, you can develop good systems with JCL & COBOL; it is a little challenging, but not impossible. And I've seen COBOL systems that were far easier to manage and use than their modern counterparts. Of course, much of this has to do with the skill of the developers.

      > A mainframe could make an ideal web server because of its security and multi-processing
      > capabilities. If this is the case, then why is it not done often?

      Reason #1: many mainframe shops are still running the software legacy of the early '90s. They don't have a Linux LPAR, and old-timey protocols aren't ideal for this (they often require expensive middleware). But for those organizations that set things up right, there are no real technical limitations.

      Reason #2: simple economics. Web servers are the simplest applications out there - and it's awfully easy to deal with scalability & reliability through redundancy. They're probably the worst candidate for rehosting on a mainframe. Database servers, on the other hand, are good candidates.

      However, in my experience the best candidates for hosting on a mainframe are database standby servers. You can run a nearly infinite number of the things for nothing, since almost all are idle, and DB2/Oracle/Sybase/PostgreSQL/MySQL/etc. all run just fine on SuSE (or whatever) Linux on VM.
  • by Mr. Piddle ( 567882 ) on Monday April 05, 2004 @07:41PM (#8774790)
    posting sales of $4.2 billion

    So, IBM sold three mainframes. What's the big deal here?
  • by Nice2Cats ( 557310 ) on Monday April 05, 2004 @07:43PM (#8774815)
    Somebody has to mention the Mock Mainframe Linux Howto [tldp.org], which suggests you change your system following the mainframe philosophy so that you have one big computer and lots of little terminals for small groups of people.

    (I especially like the Willow Rosenberg quote).

  • Back and Forth (Score:4, Interesting)

    by Trolling4Dollars ( 627073 ) on Monday April 05, 2004 @08:20PM (#8775162) Journal
    Once they're gone, they'll be back. From personal experience, I've seen that centralized systems always work better than multiple PCs spread all over the place in terms of reliability. So I don't think you'll see mainframes go away that quickly, AND you'll eventually see them come back. There are just too many benefits, the main one being efficient use of power. I expect that what we'll wind up seeing in the future is a "centralized" system where the OS and the applications and the data are all one entity and the entire network is one big computer.

    Think about it... Back when people used to actually, literally wire programs into old-time computers... all that stuff still happens in that box by your feet or on your desk. Think about how many levels and how much duplication of task there is in a PC:

    You have the microcode at the processor level, which is really an analog to programs. But they aren't programs for you, the user. They are programs for the CPU's infrastructure. Then the RAM... It's all over the freakin' place. It's in the CPU, on the motherboard in various places, in your DIMMs, your video card, many peripherals, etc... Then you've got the BIOS, which is like higher-level software compared to the microcode. It's a set of single-purpose applications again. But not for you... it's for your hardware. And it interfaces with the OS at some point, which (in many cases these days) takes over for the BIOS, adding yet another layer of software.

    This time, the software that is the OS is partially for you and partially for your hardware. Strictly speaking, kernel-wise, it's pretty much a bridge between user-space apps (shell) and the machine. Then you have your final layer of applications, which ARE for you. But will it end there? No... you've got the network protocol stacks. This is the top layer of the multilayered cake that leads to the network.

    But think about it. It's ALL THE SAME THING. Over and over again at different levels with slightly different purposes. So... at some point in time, all these PCs are going to be embedded devices, or wearables, or implants, or entities providing even more layers. But when you peel the onion, you're still going to see THE SAME THINGS. Over and over and over again. And on top of all of that, you are going to see the shifts back and forth from centralized to decentralized and back again. It's part of some cosmic imperative, because if you think even deeper you see it mimicked in politics, communications technology (think old-time TV vs. satellite vs. over-the-air digital vs. WLAN-based PVRs), and even the automobile vs. mass transportation.

    It's some kind of cosmic rhythm that pulses through the millennia like an ethereal rave...
  • Correction (Score:5, Funny)

    by M.C. Hampster ( 541262 ) <M...C...TheHampster@@@gmail...com> on Monday April 05, 2004 @08:41PM (#8775297) Journal

    "Doug Balog, an IBM vice president, noted that 70 percent of the world's data are still housed in mainframe computers."

    should read:

    "Doug Balog, an IBM vice president, noted that 70 percent of the world's data are still inaccessible and locked up in mainframe computers."
  • by Animats ( 122034 ) on Tuesday April 06, 2004 @02:59AM (#8777422) Homepage
    One of IBM's more enduring products, even though they keep trying to get rid of it, is CICS. CICS, the "Customer Information Control System", is 35 years old this year.

    CICS is a neat idea that deserves a new look. It's a "transaction processing OS". Think of it as an OS whose purpose in life is to run CGI programs efficiently. In its simplest form, each incoming transaction starts up a new program which reads the transaction, connects to the database, processes the transaction, and exits, typically within a fraction of a second. The operating system is optimized for starting and running those transactions.

    CGI processing under Linux is inefficient, and hacks like mod_perl are needed so that a new process isn't created for each transaction. One could do better. Transaction programs under CICS are started, run up to the point that they need input, and stopped. When a transaction comes in, a copy of the stopped transaction program is forked off, used to run the transaction, and terminated. So there's no way for data to leak between transactions. All transaction programs run in a jail, allowed to talk only to the database and to reply to their incoming message.
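
    You can approximate that model on Unix in a few lines. A minimal Python sketch (port and reply invented): the parent does all the expensive setup once, then forks a fresh copy per transaction, so every transaction starts from identical clean state and nothing leaks to the next.

        import os
        import socket

        # One-time setup: everything created here is the "pre-initialized"
        # state each transaction inherits via fork (think: loaded code,
        # parsed config, an open database connection).
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("127.0.0.1", 8400))  # hypothetical port
        listener.listen(16)

        while True:
            conn, _ = listener.accept()
            if os.fork() == 0:             # child: one transaction, then die
                listener.close()
                request = conn.recv(4096)  # read the transaction
                conn.sendall(b"OK\n")      # process and reply (stub)
                conn.close()
                os._exit(0)                # exit; nothing carries over
            conn.close()                   # parent forgets the connection
            os.waitpid(-1, os.WNOHANG)     # reap any finished children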

    With better OS support for transactions, web servers could have a cleaner, faster interface for their transactions.

  • Not all of the mainframes in current use (or which are currently being marketed) are from IBM, or are even based on IBM's mainframe architecture.

    At least two of the top four airlines in the US are still heavily using Unisys mainframes, for example. Those are based on the Sperry UNIVAC 1100-series boxes of the 1960s and '70s (a 36-bit architecture which is word-addressable, not byte-addressable) and an OS called OS 2200, and many of them are still running application software that was originally designed and written during that era (though it is constantly being modified in-house, of course).

    As others here have said, mainframes are simply not the old coal-fired boxes that they are sometimes portrayed to be, certainly not on the hardware side of things. What they really are is a centralized server whose design is specialized around very high levels of reliability/recoverability and high levels of data throughput combined with the ability to serve applications to thousands of users with very low levels of system and communications overhead for each user action.

    That makes them exceedingly efficient at what they do, not just large and expensive. :-)

    Also, while most of them tend to have some "stone age" elements on the applications software side, keep in mind that most of the older software tends to be found at the API level, not in the core of the OSes which support that API.

    While application code on those boxes might be very old indeed, or at least based on very old software interfaces, the hardware and software platforms which form the guts of those mainframe boxes have been moving forward over the past few decades just as quickly in many areas as they have been in the desktop and smaller server world.

    Part of the reason that such systems still exist is certainly tied to various economic factors like the difficulty of porting applications and such (when one has several million lines of code which is tightly tied to one's business rules, one doesn't rewrite that software arbitrarily).

    However, some companies still use mainframes for another reason: they have a few applications which simply cannot fail if the company is to operate effectively. In some cases, even a small outage can cause cascading effects throughout the company and cost the company millions of dollars. Or more.

    My own experience is with major airlines, and they are one of the largest users of such systems in key areas, but financial entities such as NASDAQ have been using similar large systems for years because they need a very high level of reliability and recoverability.

    I really think it's a shame that more people are not exposed to these types of systems in college so they can get some sense of what those machines are actually designed for (and what the hardware and software in those boxes is actually capable of).

    While Unix, Windows, and Mac systems are ubiquitous these days, they simply do not define all existing computing architectures by themselves, nor can they effectively or efficiently handle all types of computing tasks. Not yet, anyway...
