The Mainframe Is Dead! Long Live the Mainframe!

HughPickens.com writes: The death of the mainframe has been predicted many times over the years, but it has prevailed because it has been overhauled time and again. Now Steve Lohr reports that IBM has just released the z13, a new mainframe engineered to cope with the huge volume of data and transactions generated by people using smartphones and tablets. "This is a mainframe for the mobile digital economy," says Tom Rosamilia, a senior vice president at IBM. "It's a computer for the bow wave of mobile transactions coming our way." IBM claims the z13 is the first system able to process 2.5 billion transactions a day, and it has a host of technical improvements over its predecessor, including three times the memory, faster processing, and greater data-handling capability. IBM spent $1 billion to develop the z13, and that research generated 500 new patents, including some for encryption intended to improve the security of mobile computing. Much of the new technology is designed for real-time analysis in business. For example, the mainframe can run automated fraud prevention while a purchase is being made on a smartphone. Another example would be providing shoppers with personalized offers while they are in a store, by tracking their locations and tapping data on their preferences, mainly from their previous buying patterns at that retailer.

IBM brings out a new mainframe about every three years, and the success of this one is critical to the company's business. Mainframes alone account for only about 3 percent of IBM's sales, but when mainframe-related software, services, and storage are included, the business as a whole contributes 25 percent of IBM's revenue and 35 percent of its operating profit. Ronald J. Peri, chief executive of Radixx International, was an early advocate in the 1980s of moving off mainframes and onto networks of personal computers. Today Peri is shifting the back-end computing engine in the Radixx data center from a cluster of industry-standard servers to a new IBM mainframe, and he estimates the total cost of ownership, including hardware, software, and labor, will be 50 percent less with a mainframe. "We kind of rediscovered the mainframe," says Peri.
  • Tao (Score:5, Insightful)

    by phantomfive ( 622387 ) on Thursday January 15, 2015 @04:11AM (#48817757) Journal
    From the Tao of Programming: [mit.edu]

    There was once a programmer who wrote software for personal computers. "Look at how well off I am here," he said to a mainframe programmer who came to visit. "I have my own operating system and file storage device. I do not have to share my resources with anyone. The software is self-consistent and easy-to-use. Why do you not quit your present job and join me here?"

    The mainframe programmer then began to describe his system to his friend, saying, "The mainframe sits like an ancient Sage meditating in the midst of the Data Center. Its disk drives lie end-to-end like a great ocean of machinery. The software is as multifaceted as a diamond, and as convoluted as a primeval jungle. The programs, each unique, move through the system like a swift-flowing river. That is why I am happy where I am."

    The personal computer programmer, upon hearing this, fell silent. But the two programmers remained friends until the end of their days.

    • Re:Tao (Score:5, Funny)

      by Anonymous Coward on Thursday January 15, 2015 @04:25AM (#48817787)

      The tl;dr version:

      PC programmer : "My job is super easy!"
      Mainframe programmer : "Yes. Yes it is."

    • by Shakrai ( 717556 ) on Thursday January 15, 2015 @04:57AM (#48817887) Journal

      .... the more they stay the same. :)

      I keep telling my friends that "cloud computing" is not a new concept. We used to call them "dumb terminals." Not a precise analogy, of course, but close enough for our purposes. You just know that's going to come full circle in another decade or so.

      • by meta-monkey ( 321000 ) on Thursday January 15, 2015 @10:36AM (#48819151) Journal

        I think more people will start running their own small servers. Cheap storage, always-on internet, dynamic DNS, better software. It's what I do. I have a NAS and OwnCloud and sync all my mobile stuff to that. It's all the benefits of having your data always accessible without the drawbacks of turning over your files to a 3rd party.
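
        A minimal Python sketch of the dynamic-DNS piece of a setup like that. The update endpoint is a hypothetical placeholder -- real providers (DuckDNS, No-IP, etc.) each have their own update API -- so treat this as the shape of the task, not a drop-in script:

            # Detect the current public IP and push it to the DNS provider.
            # IP_ECHO is a real service; UPDATE_URL is a made-up placeholder
            # for whatever update endpoint your provider actually exposes.
            import urllib.request

            IP_ECHO = "https://api.ipify.org"              # returns your public IP as plain text
            UPDATE_URL = "https://dyn.example.com/update"  # hypothetical provider endpoint

            def update_dns(hostname: str, token: str) -> None:
                ip = urllib.request.urlopen(IP_ECHO, timeout=5).read().decode().strip()
                url = f"{UPDATE_URL}?hostname={hostname}&ip={ip}&token={token}"
                urllib.request.urlopen(url, timeout=5)  # provider records the new A record

            # Typically run from cron every few minutes, e.g.:
            # update_dns("home.example.org", "secret-token")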

        • Your cloud is not the same cloud we're talking about.
          There is a slight difference between 'Cloud Storage' and 'Cloud Computing'.

      • Close enough if you had enough money for a parallel sysplex? :-p I don't doubt that there is a lot of ancient stuff just asking to be rediscovered, but let's not pretend it's meaningfully identical to the notion of redundancy in software replacing expensive hardware.
    • Re:Tao (Score:5, Funny)

      by jandersen ( 462034 ) on Thursday January 15, 2015 @07:23AM (#48818285)

      - and another one, somewhat abridged:

      A Windows admin, a UNIX admin and a mainframe admin went to the toilet at the same time:

      - the Windows guy finished first; he washed his hands, wiped his fingers on a huge wad of paper towels, mostly unused, and threw them on the floor

      - the UNIX guy washed his hands and carefully dried them with one paper towel, which he then deposited in the bin

      - The mainframe guy just headed for the door, remarking "I learned long ago not to piss on my fingers".

      • Re:Tao (Score:5, Funny)

        by Anonymous Coward on Thursday January 15, 2015 @07:44AM (#48818351)

        Joke is not realistic due to excessive social interaction.

      • Re:Tao (Score:5, Funny)

        by 16Chapel ( 998683 ) on Thursday January 15, 2015 @08:08AM (#48818417)
        Mac admins don't require toilets any more, they removed the port a few generations back.
        • Re: (Score:3, Funny)

          by nucrash ( 549705 )

          Mac Admins shoved their heads up that port so that everything ugly about them was not exposed to the rest of the world.

      • Re: (Score:2, Funny)

        by drinkypoo ( 153816 )

        - The mainframe guy just headed for the door, remarking "I learned long ago not to piss on my fingers".

        But sadly, he could not escape the doorway, having somehow grown in size while trying to take a crap, and he remained there for all eternity, fixed in place by his massive bulk.

        Most of the jobs formerly done by mainframes are now done by clusters of PCs, like a team of small employees swarming around getting stuff done while that guy is still stuck in the bathroom.

        • Re: (Score:3, Informative)

          by Anonymous Coward

          Funny,
          My company's (6,000-employee) mainframe has 2 admins - that's all.
          vs.
          The Windows Server Team (UCS, AIX, standalone servers) has... 10? [they come and go every few years]
          The storage team has 2, neither here more than 3 years
          The security team has 5

          1 mainframe w/ 2 ethernet ports
          vs.
          100 physical servers, 500 VMs, 3 UCS environments (plus all the networking infrastructure - Nexus 5Ks & FEXes to connect it all)
          Isilon, Pure, EMC, DataDomain, ...

          2 vs. 17 staff
          2 NICs vs. hundreds of ports in a datacenter

          • Re:Tao (Score:4, Informative)

            by Anonymous Coward on Thursday January 15, 2015 @12:08PM (#48820275)

            Doesn't it depend more on what you do with those servers and mainframes than on how large your company is? I've worked places where the mainframe was used to run decades-old code that only had rare changes, and otherwise kept going doing mostly the same thing, with minor hardware issues over the years and occasional big deals to make minor API changes. The regular servers, on the other hand, were always involved in new software, new web services, updates to both looks and functionality exposed to clients, new internal tools, tests of new tools that never became part of standard service, etc.

            I've also been places where a single admin took care of all of both the windows and linux servers, as they were just used for generic office support, with people just needing shared resources and desktop computers that could manage basic terminals, text editors and IDEs. However, since the mainframes involved software undergoing active development, and testing on different systems, there was a whole team of admins keeping things going and dealing with subtle deployment issues, etc.

      • You forgot one minor detail - the Windows guy was taking a dump.

      • by pnutjam ( 523990 )
        The windows guy finished first because he just took a dump on the floor in the middle of the restroom, expecting everyone else to work around his pile of s#(^.
    • by gatkinso ( 15975 )

      Then he took a selfie.

  • Are mainframes and PaaS/SaaS really all that different?
    Aren't PaaS/SaaS just the next step in mainframes?
    • by Njovich ( 553857 ) on Thursday January 15, 2015 @04:56AM (#48817881)

      From a business point of view they can be similar.

      From the perspective of the mainframe guys, the whole point of a mainframe is that it is a single machine handling all of your transactions. Basically, it is simpler to deal with all kinds of transaction problems when you are not using a vastly distributed system with thousands of nodes. Typically PaaS/SaaS are large distributed systems.

      To reliably and consistently handle a very large stream of very important transactions where you basically need 100% reliability, they are a real option. The business case for a mainframe is something like: it would cost some bank $200 million per year to build a failure-proof distributed system, versus $100 million to do it with a mainframe. Outside of these types of systems, it is hard to think of any use for a mainframe, given the cost and complexity.
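
      (A rough Python illustration of that single-machine simplicity, with sqlite3 standing in for any local transactional store: on one node a transfer is one local ACID transaction, whereas splitting the accounts across nodes forces two-phase commit, retries, and partition handling onto the application.)

          # On a single machine, atomicity is a single local transaction.
          import sqlite3

          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
          conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])

          try:
              with conn:  # one ACID transaction: both updates happen, or neither does
                  conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
                  conn.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 2")
          except sqlite3.Error:
              pass  # the 'with' block already rolled back the partial update

          print(conn.execute("SELECT * FROM accounts ORDER BY id").fetchall())
          # -> [(1, 50), (2, 50)]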

    • by lucm ( 889690 ) on Thursday January 15, 2015 @04:59AM (#48817891)

      No. PaaS is scale-out, while a mainframe is scale-up. A scale-out architecture is good at processing a lot of different requests, but it does not offer very good results for high-frequency complex operations, because by nature distributing workloads over a large network is costly. Anything similar to Newton's method is a good example of a workload that doesn't translate well to a scale-out architecture.

      I'm not saying that many mainframe applications couldn't be replaced by a cloud computing solution, but there are situations where latency and expensive orchestration are not acceptable.
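
      To make the Newton's method example concrete, a minimal Python sketch (solving x^2 = 2): each iterate depends on the previous one, so the loop is inherently sequential -- spreading it across nodes buys nothing, and inter-node latency would dominate every step, which is why a fast single machine (scale-up) wins on this shape of workload.

          # Newton's method for sqrt(2) via f(x) = x^2 - 2, f'(x) = 2x.
          def newton_sqrt2(x0=1.0, tol=1e-12, max_iter=50):
              x = x0
              for _ in range(max_iter):
                  x_next = x - (x * x - 2.0) / (2.0 * x)  # x - f(x)/f'(x)
                  if abs(x_next - x) < tol:               # converged
                      return x_next
                  x = x_next  # the next step cannot start without this result
              return x

          print(newton_sqrt2())  # ~1.4142135623730951 after a handful of iterations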

      • by bluefoxlucid ( 723572 ) on Thursday January 15, 2015 @10:26AM (#48819061) Homepage Journal

        I've seen situations where trying to replace a mainframe with a server ended in bitter failure and hundreds of thousands of dollars of expense. We're talking batch processing millions of records on the Mainframe in a few minutes, while a server managed 30,000 in a day. Sometimes, the mainframe just has better hardware.

        Mainframes are designed to take hardware and software upgrades without interrupting software processes. If we ever implemented Linux's user space APIs on Minix 3 (e.g. kevent, iptables), we could run udev, dbus, and other Linux-specific subsystems on a microkernel; it would be similar to a mainframe, in that you could upgrade the core OS without rebooting, yet dissimilar, in that it wouldn't be a virtualized cluster like OpenMOSIX in which the applications move onto another running OS when you want to reboot one VM.

        Security policies on the mainframe are different from a PC's, too. High security means each application is so isolated as to effectively run in its own VM, from a practical standpoint. From a technical standpoint, the OS is just so good at confining applications to what they're allowed to do (and those privileges are so well-defined) that it achieves isolation similar to running in separate VMs. This drastically reduces downtime. Some effort has gone into Linux on the GrSecurity side to apply kernel write-execute separation; and, again, Minix 3 or a similar OS could create strict memory policies to prevent drivers from accessing kernel RAM not related to the driver and the process invoking it; this plus process groups and containers (as in Linux) and mandatory access control policies would come close to, if not reach parity with, a mainframe.

        I have enough understanding to know what must be done to create something, but not how. If I knew how, I'd have long ago added services to Minix 3 to run Linux desktop subsystems for systemd, udev, and dbus; created a policy manager which can define application access policies by contexts, user, and the user's container policy (e.g. Pidgin can access the user's configured $HOME/.pidgin/ and $HOME/download/pidgin/ for read-write, etc.); and modified some of the interfaces to store data relevant only to specific processes in separate pages, and only map those pages in the appropriate context, so that a bug writing all over memory would have limited-scope damage even in kernel (this is hardly ever an issue in Minix to start with). But nay.

      • Ah, my impression was that a mainframe runs multiple user sessions processing various different workloads at a time.
    • Yes. They are.

  • by Trepidity ( 597 ) <[gro.hsikcah] [ta] [todhsals-muiriled]> on Thursday January 15, 2015 @04:25AM (#48817793)

    The IBM pricing really is quite high (there are a ton of licensing fees for the hardware, maintenance, and software). But the systems work reliably. You get a giant system that can run a whole lot of VMs, with fast and reliable interconnects, transparent hardware failover (e.g. CPUs inside most mainframes come in redundant pairs), etc. To get a similar setup on commodity hardware you need some kind of "cloud" orchestration environment, like OpenStack, which can deal with VM management and migration, network storage, communication topology, etc. The advantage of an x86-64/OpenStack cluster solution is that the hardware+licensing costs are loads cheaper, and you don't have IBM levels of vendor lock-in. The disadvantage is that it doesn't really work as reliably; you're not going to get five nines of uptime on any significantly sized OpenStack deployment, and it will require an army of devops people to babysit it. The application complexity also tends to be higher, because failures are handled at the application level rather than at the system level: all your services need to be able to deal with non-transparent failover, split-brain scenarios, etc. Also, the I/O interconnects between parts of the system (even if you're on 10GigE) are much worse than mainframe interconnects.
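
    A small Python sketch of the application-level burden described above: on a commodity cluster the client has to handle failover itself, e.g. by retrying an idempotent request against replicas. The endpoints and the Idempotency-Key header are illustrative assumptions, not any particular product's API:

        import uuid
        import urllib.request

        REPLICAS = ["http://node-a:8080", "http://node-b:8080", "http://node-c:8080"]

        def post_with_failover(path: str, body: bytes) -> bytes:
            # One key for all attempts, so a retry after an ambiguous failure
            # cannot double-apply the transaction on the server side.
            key = str(uuid.uuid4())
            last_err = None
            for base in REPLICAS:
                req = urllib.request.Request(base + path, data=body, method="POST")
                req.add_header("Idempotency-Key", key)
                try:
                    with urllib.request.urlopen(req, timeout=2) as resp:
                        return resp.read()
                except OSError as err:  # refused, timed out, unreachable...
                    last_err = err      # non-transparent failover: try the next node
            raise RuntimeError("all replicas failed") from last_err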

    • by lucm ( 889690 ) on Thursday January 15, 2015 @05:08AM (#48817915)

      Besides the price, I'm always on the fence regarding IBM's approach to licensing. On one hand it feels like having an itemized bill with individual licenses and fees for everything down to individual screws gives more control to the buyer (as opposed to a "bundle" where one could feel like he's paying for stuff he doesn't need), but in my experience it's almost impossible to seriously weed out (or even understand) items from the list.

      My best billing experience has been in a small business that was using Dell's financing. No big upfront cost, a simple monthly amount to pay. Need one more server or ten more workstations? No problem, the stuff is delivered and the monthly amount is increased by $200. Awesome.

    • The licensing model on an IBM mainframe differs depending on what OS you run (and which CPU you use). In my experience, the really prohibitively expensive model is when you run z/OS + certain 3rd party packages, because they tend to charge you for number of seats, CPU cycles, etc. I think one of the reasons why IBM went down the Linux and Java routes was exactly that - it appears they can't easily move away from the old licensing model on z/OS, but you can run both Linux and Java really cheaply.

      • by Trepidity ( 597 )

        And what you say about mainframe stability vs other HW - I don't think this is entirely true.

        If you mean just individual servers, I agree, regular HW isn't too bad, and doesn't take a ton of admin power. I was thinking more of the case of replacing a big mainframe with a cluster, which has a whole different kind of administrative overhead. You can generally assume that a mainframe stays internally connected and working: CPU cards don't randomly lose connections to each other, your database and application servers don't lose contact with each other, and so on.

  • by Hognoxious ( 631665 ) on Thursday January 15, 2015 @04:30AM (#48817813) Homepage Journal

    IBM dude: It might look like a mainframe, but it's a high-capacity, legacy-compatible, fault-tolerant application server.

    Me: What's the difference?

    IBM dude: About 200 grand.

    • by lucm ( 889690 ) on Thursday January 15, 2015 @05:20AM (#48817941)

      Have you seen those beasts? They come with earthquake kits (hydraulic suspension, gyros, etc.), waterproof cable connectors (to keep working in a small flood), and nitrogen-rich fire-resistant enclosures. Drives are snapped into a backplane because loose cables are a liability, and IBM even provides an optimal distribution of redundant components inside the case, based on their extensive records of hardware failures experienced by all their large customers over the last 20 years (because of course those machines are not serviced by the customers themselves).

      This kind of big iron is definitely not a pimped pizza box. It is an amazing piece of engineering. Loud, expensive, inflexible, but truly amazing.

      • waterproof cable connectors

        Of course they have waterproof cable connectors . . . because the things are liquid cooled! Do you think that liquid cooled computers were only for games?

        If you get a chance to visit the IBM lab in Böblingen in Germany, they have some mainframes with plexiglass casings. The first thing that customer executives ask when they see them is, "Is that all that there is in them . . . !"

        The next question is about the liquid tubes inside them. And then you need to tell the executives that the Internet

        • Erm? Why the hate?
          Programming on a mainframe is much different from 'ordinary' programming.
          And there are still plenty more mainframes around than just IBM's ...

      • Have you seen those beasts?

        Yes. My first "proper" job was programming on the bastards. We weren't normally allowed in the machine room but we got a tour on our first day.

        This kind of big iron is definitely not a pimped pizza box.

        Not quite sure what that's got to do with anything I wrote, but never mind.

      • by swamp boy ( 151038 ) on Thursday January 15, 2015 @09:55AM (#48818869)

        I disagree that they're inflexible. Capacity On Demand (COD) gives customers the ability to pay for additional capacity (engine/CPU) only while that additional capacity is turned on. A mainframe is typically partitioned into LPARs at bare metal using PR/SM, and you can add/remove/rearrange LPAR configurations. Then there is z/VM -- IBM's crown jewel of mainframe software. This is where most shops run virtualized Linux servers: create, start, stop, and reconfigure guests as needed. You can reallocate storage among guests very easily. You can create software-only (virtual) networks for the guests. IBM's latest version of z/VM supports OpenStack. There are many other features; these are just the main ones that come to mind. I don't call this inflexible.

        • Then there is z/VM -- IBM's crown jewel of mainframe software. This is where most shops run virtualized Linux servers: create, start, stop, and reconfigure guests as needed. You can reallocate storage among guests very easily. You can create software-only (virtual) networks for the guests.

          Sounds like a wicked expensive vCenter cluster.
        • Capacity On Demand, to a business guy, means you can buy additional capacity only when you need it, thus saving money. To a geek, it means that they deliberately cripple the system unless you pay them more money. I'm a geek.

          • by bws111 ( 1216812 )

            And what, exactly, is the problem with that?

          • by iamacat ( 583406 )

            To a finance guy, it means you pay for all your hardware and its power use, but have access to only a subset of it most of the time. Inactive hardware costs exactly the same as active hardware to develop and manufacture. You are paying for a "discount" on spare capacity with jacked-up prices for active capacity.

            • by bws111 ( 1216812 )

              You think idle cores take the same power as active cores? You think software costs the same when running on one machine as it does on a machine with 446 times the performance? You think a finance guy wants to pay for the development costs of a 141-processor machine running at 5GHz when all he needs is a 5-processor machine running at 3GHz? Or maybe you think that if IBM wants to be able to offer machines of different capacities the actual hardware should be different, and by some unfathomable miracle

              • by iamacat ( 583406 )

                Of course! A physical 5-processor machine running at 3GHz would cost peanuts in hardware and power compared to an intentionally crippled 141-processor mainframe. What really happens is you pay (cost - subsidy) for the crippled machine and you or someone else eventually pays (cost + subsidy) for the less crippled machine.

                With software, the equation is better. If someone sells you an edition limited to a certain number of concurrent transactions, there are no manufacturing costs for your copy that have to be paid.

  • by SeaFox ( 739806 ) on Thursday January 15, 2015 @04:34AM (#48817831)

    For example, the mainframe system can allow automated fraud prevention while a purchase is being made on a smartphone.

    Because that's so much different than preventing fraud on a purchase being made from a desktop PC.

    • No kidding. The only thing I can think of is that the interconnects and processing power of the mainframe allow more heuristics to be run on people's purchasing patterns. Odd pattern = fraud possibility.

      Thing is, right now they often consider only the individual's purchasing patterns. What about if a whole lot of people start buying from one company? That's a different pattern to be spotted, and it can still be fraud.
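
      A hedged Python sketch of the kind of per-cardholder heuristic being described: flag a purchase whose amount is an outlier against that card's own history. The threshold and features are illustrative assumptions, not any real issuer's rules:

          from statistics import mean, stdev

          def looks_fraudulent(history, amount, z_threshold=3.0):
              """Flag if amount sits more than z_threshold standard
              deviations above the cardholder's historical mean."""
              if len(history) < 5:        # too little history to judge
                  return False
              mu, sigma = mean(history), stdev(history)
              if sigma == 0:              # identical past purchases
                  return amount > 2 * mu
              return (amount - mu) / sigma > z_threshold

          coffees = [4.50, 5.25, 3.80, 6.10, 4.95, 5.40]  # morning coffees
          print(looks_fraudulent(coffees, 5.75))   # False - normal purchase
          print(looks_fraudulent(coffees, 899.0))  # True  - worth a hold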

    • by bws111 ( 1216812 )

      The difference is in the volume of transactions. You don't find many people purchasing their morning coffee (for instance) with a PC.

    • by JBMcB ( 73720 )

      Actually it is. If you're buying something on a PC you're probably going through a vendor's website, like Amazon or the Apple Store. They are handling most of the transaction cost, CPU-wise, probably offloading some of it to a 3rd party at some point. It's an asynchronous operation - your credit card gets billed within a few minutes, your receipt comes a little bit after that. You have some time to do the credit history checks, credit card purchasing history, whatever else needs to take place for fraud prevention.

      • by tlhIngan ( 30335 )

        When you're paying with something like Apple Pay, it's all instantaneous. Your credit card gets billed immediately. This is how it has always worked with credit cards, but the back-channels are different. The idea is that as more people use phones for transactions, the possibility of fraud goes up, and you need more instantaneous checking for potential fraud.

        Actually, Apple Pay is just a fancy credit card.

        The Apple part is only involved in setting it up - your phone talks to Apple who talks to you

  • they are dying (Score:5, Informative)

    by Anonymous Coward on Thursday January 15, 2015 @05:35AM (#48817987)

    The death of IBM's mainframes is happening. It was never going to be an overnight thing, though. We just replaced our 2 IBM mainframes, which cost us just over $10 million each plus licensing and maintenance costs each year, with around $2 million of Intel-based servers. Yes, each of those boxes is almost a little mainframe in itself, with 80 cores per machine and 4TB of memory, but they run at a fraction of the cost (with more total processing power than the mainframes they replaced) even when providing full cold-standby redundancy. There are 3 other places in town that I know of that also run mainframes: 1 has 6 of them, all on a 10-year phase-out plan; another has 2, which will be gone by the end of 2016; and the last is the only holdout in town, waiting to see how our replacement has gone (so far, 6 months in, they are happy; another 12 months and the mainframes will be completely turned off).

    • Re:they are dying (Score:5, Insightful)

      by Trepidity ( 597 ) <[gro.hsikcah] [ta] [todhsals-muiriled]> on Thursday January 15, 2015 @06:15AM (#48818115)

      Thanks, that's an interesting comment. Especially with x86 servers getting fairly big these days (the 80-core, 4TB-ram monsters you mention), I can see that being plausible for some scenarios. Are all the services you ran previously each able to fit in a single x86 server now? If so that sounds like it'd greatly ease migration. One of the big pain-points of migration from mainframes to x86 clusters has traditionally been that it's hugely expensive to re-architect complex software so that it will run (and run reliably) on a distributed system, if it was originally written to run on a single system. But if the biggest single service you run is small enough to fit in your biggest x86 box, then you don't have to do the distributed-system rewrite.

      • Re:they are dying (Score:5, Interesting)

        by Anonymous Coward on Thursday January 15, 2015 @06:37AM (#48818193)

        I'd be lying if I said it was easy, and the architecture we have for the moment is terrible (and temporary): we basically did a giant recompile of the majority of the COBOL code and have it running on a single server (though it uses at least 70 cores of that capacity for most of the day). Long term it will be recoded, with part of the savings from the mainframe decommissioning used to re-architect it into a more scale-out rather than scale-up design. We had to extract all the batch, nightly analytics, archiving, forensics, and various other processes onto separate machines and recode some processes to be more parallel. A 2-year project all up (plus a great deal of planning before that). It actually runs on 6 of those monster-specced machines; only one of them is ever really stressed, though - the rest are there for redundancy, testing, and keeping a lot of the miscellaneous tasks off the core machine.

        • You're at $2M for hardware that won't run reliably for 20 straight years. Add to that the cost of engineer time doing the rearchitecture--across two years? How many people are involved, and what's their time per week? How often will you replace your servers, versus the mainframes?
          • That's why you buy multiples, with hot and/or cold standby, and you can afford to replace them much more regularly. $2 million provides multiple machines of that spec, not one. Good design and architecture allow for failures; if you plan for the hardware to fail, this isn't a problem, and it can be a much cheaper approach. Not to mention mainframes fail as well: we had 3 hardware failures that brought our mainframe down over the past 5 years.

        • Re:they are dying (Score:5, Insightful)

          by bws111 ( 1216812 ) on Thursday January 15, 2015 @10:40AM (#48819201)

          This is starting to sound more and more like bullshit. Were all of your COBOL programs completely self-contained (highly unlikely)? You didn't use any CICS, IMS, database, or any other middleware? You didn't use any VSAM datasets or any record IO? You didn't have any dependencies on JCL associating 'files' to 'datasets' and specifying how files should be opened and what should happen when they are closed?

    • Re: (Score:2, Insightful)

      What did you pay to replace the software and test the new version?

      Oh, and what is the name of the place? I don't want to have an account there for at least 10 years.
      • Most likely you already have accounts with places that have replaced mainframes and don't even know it; most don't like to publicise it due to bias and ignorance like yours. When we were looking at our mainframe replacement we spoke to banks, government departments, and large insurance companies all over the world who had all made the transition, but to discuss it every last one of them demanded NDAs, especially the banks. Partly it is they didn't want the competition to be aware of what they are doing, but

  • Look to the future and imagine the Internet of Things ... yet all those smart devices rely on the big data-consuming services of their proprietors.
    Why are mainframes so hard to replace, you ask yourself? Versatility is my answer.
    Sure now, how can a machine weighing tonnes be agile and able to adapt to an ever-changing world?
    The answer is within that ever-changing world itself: ask yourself, what is a mainframe exactly?
    I see mainframes as ultra-concentrated bulks of technology, not measured in mips or dry-sto

  • As long as things like Terminal Services and Citrix exist, the "mainframe" as you know it won't ever die.
    • I'm confused, are you saying that because Citrix and Terminal Services exist in the Windows world, mainframes will never die because the alternative of dealing with Citrix/TS is so unpalatable?
  • by AchilleTalon ( 540925 ) on Thursday January 15, 2015 @06:48AM (#48818223) Homepage

    "We kind of rediscovered the mainframe," says Peri.

    Every time IBM announced a new mainframe line of products, they hired an "external" consultant to say exactly the same bullshit. Since 1990, they have announced the rediscovery of the mainframe at least five times. IBM is addicted to the mainframe, given the large chunk of revenue associated with it. So they serve us the same marketing bullshit each time.

  • by CaptainOfSpray ( 1229754 ) on Thursday January 15, 2015 @07:05AM (#48818249)
    I found this in the Overview of the Announcement Letter:

    "The name change serves to signal ... the role of the mainframe in the new digital era of IT."

    Us old farts are envious of the new digital mainframes - we were seriously handicapped back then, working on all those old analog mainframes.

    It isn't that mainframes are eternal, it's that marketing wonks who write this sort of stuff are allowed to breed...
  • IBM releases the Z9M9Z...

  • I had been out of the mainframe scene (or what I used to know of it) for a long while. z/OS was great and everybody loved the mainframe, with the exception of the financial department. It was because of the "Monthly License Charge" that some mainframe models used to have: the software was never licensed to you, it was "rented," so if you didn't pay the MLC you had to disconnect the mainframe. Is the MLC over? Does anybody know?
    • MLC is very much alive, and just as problematic and expensive as ever.

      I love the platform operationally. The licensing sucks in every way.
  • I think "prevailed" is a bit overstating things. Mainframes have "held on," more than prevailed, despite the march of the killer micros.

    • Nonsense, there is plenty of work that those micros can't do, upon which modern civilization absolutely depends. There is no substitute. No one who has done serious financial IT work ever believed the mainframe was dying or going away.

      • Nobody expects a single desktop PC to do the job of a mainframe. It's true, too, that mainframes have their place for certain types of work. Don't claim, though, that a rack full of blades can't be clustered to do a similar job. It happens all the time.

        • No, a rack full of blades can't handle the hundreds of IO channels that this mainframe could, nor does it have 10TB of common RAM.

  • Big government agencies like Social Security and the IRS, with legacy software and large client bases?
    • Most financial or transactional institutions. A list off the top of my head: Visa, Mastercard, AMEX, Wells Fargo, Delta Airlines, Nationwide Insurance. Most regional banks and credit unions also run systems based on the z platform.
  • No matter how reliable and maintainable the box itself is, I wouldn't want my business to lose days or weeks of revenue because the datacenter was swallowed by a sinkhole. Having hundreds of cloud compute instances around the world also helps compensate for network latencies and quickly cut expenses during a business downturn.

    I am sure there is a scale at which it makes sense to have dozens of these boxes rather than many thousands of separate instances. Just not sure the volume is enough for IBM to recoup the investment.

    • > swallowed by a sinkhole

      Or cratered by an asteroid!

    • by bws111 ( 1216812 )

      That is why there is GDPS (Geographically Dispersed Parallel Sysplex). Have you ever heard of a bank or credit card company or reservation system losing 'days or weeks' (or even seconds) of data due to a datacenter outage? Do you think that is because by some miracle there has never been a datacenter outage?

      Honestly, you guys really need to learn what modern mainframes are and what they do before you go spouting off.

      • by iamacat ( 583406 )

        Hence the point that you should not buy those unless you can afford a dozen. If I only need the power of one of those, I would be better off purchasing less powerful/cheaper systems to distribute worldwide.

        • by bws111 ( 1216812 )

          Why do you need 'a dozen'? You need 2 for redundancy. And they don't even need to be the same. The largest machine has 446 times the capacity of the smallest. Your primary datacenter is configured for your main workload. Your backup datacenter can be the smallest-capacity machine with enough CBU engines to cover the workload of the larger machine. CBU engines cost a fraction of a regular engine. When a disaster happens, you activate the CBU engines and the workload is transferred to the backup datacenter.

          • by iamacat ( 583406 )

            You are going to serve your Chinese customers from US datacenters? Hehe. Connectivity between world regions is glitchy and high-latency. Amazon, for example, provides 9 regions for its compute instances, and they don't spend money on those datacenters just for the heck of it.

            Even within a region, you are proposing paying for network and other equipment to handle 100% of peak traffic in each datacenter, while one sits idle most of the time. It would be much cheaper, and provide lower latency, to have 3 datacenters.

  • "Mainframe declared dead, film at 11".

    And within a year or two, IBM announces that they're shipping more mainframes that year than they've ever sold before.

    Datapoint: around 2001 or so, some crazy at IBM, using VM (IBM tech first developed in the seventies), maxed out a good-sized mainframe... running 48,000 *separate* instances of Linux, and it ran happy as a clam with "only" 32,000.

    How many VMs you got on your server?

    *I* have a nice toolbox in my head, with hammers and wrenches and screwdrivers, on how to program on everything from MS DOS to mainframes to Linux. I also know how to admin all but the mainframe, but know something of that. What, Ah say, what do you have to compare, Boy? One ballpeen hammer that only works in Windows?

                        mark, who prefers to use the right tool for the job

  • Who didn't think mainframes were like the epitome of cool back in the 80s/90s? Who doesn't, now, want one of those massive computer-as-art Cray installations, with the comfy couch and the processor coolant trickling across some sculpture that served as eye candy as well as a radiator, and the subtle blinkenlights flashing away, seemingly at random? And then came the Beowulf.
