The Mainframe Is Dead! Long Live the Mainframe! 164
HughPickens.com writes: The death of the mainframe has been predicted many times over the years, but it has prevailed because it has been overhauled time and again. Now Steve Lohr reports that IBM has just released the z13, a new mainframe engineered to cope with the huge volume of data and transactions generated by people using smartphones and tablets. "This is a mainframe for the mobile digital economy," says Tom Rosamilia. "It's a computer for the bow wave of mobile transactions coming our way." IBM claims the z13 mainframe is the first system able to process 2.5 billion transactions a day, and it has a host of technical improvements over its predecessor, including three times the memory, faster processing and greater data-handling capability. IBM spent $1 billion to develop the z13, and that research generated 500 new patents, including some for encryption intended to improve the security of mobile computing. Much of the new technology is designed for real-time analysis in business. For example, the mainframe system can allow automated fraud prevention while a purchase is being made on a smartphone. Another example would be providing shoppers with personalized offers while they are in a store, by tracking their locations and tapping data on their preferences, mainly from their previous buying patterns at that retailer.
IBM brings out a new mainframe about every three years, and the success of this one is critical to the company's business. Mainframes alone account for only about 3 percent of IBM's sales. But when mainframe-related software, services and storage are included, the business as a whole contributes 25 percent of IBM's revenue and 35 percent of its operating profit. Ronald J. Peri, chief executive of Radixx International was an early advocate in the 1980s of moving off mainframes and onto networks of personal computers. Today Peri is shifting the back-end computing engine in the Radixx data center from a cluster of industry-standard servers to a new IBM mainframe and estimates the total cost of ownership including hardware, software and labor will be 50 percent less with a mainframe. "We kind of rediscovered the mainframe," says Peri.
Tao (Score:5, Insightful)
There was once a programmer who wrote software for personal computers. "Look at how well off I am here," he said to a mainframe programmer who came to visit. "I have my own operating system and file storage device. I do not have to share my resources with anyone. The software is self-consistent and easy-to-use. Why do you not quit your present job and join me here?"
The mainframe programmer then began to describe his system to his friend, saying, "The mainframe sits like an ancient Sage meditating in the midst of the Data Center. Its disk drives lie end-to-end like a great ocean of machinery. The software is as multifaceted as a diamond, and as convoluted as a primeval jungle. The programs, each unique, move through the system like a swift-flowing river. That is why I am happy where I am."
The personal computer programmer, upon hearing this, fell silent. But the two programmers remained friends until the end of their days.
Re:Tao (Score:5, Funny)
The tl;dr version:
PC programmer : "My job is super easy!"
Mainframe programmer : "Yes. Yes it is."
The More Things Change.... (Score:5, Insightful)
.... the more they stay the same. :)
I keep telling my friends that "cloud computing" is not a new concept. We used to call them "dumb terminals." Not a precise analogy of course but close enough for our purposes. You just know that's going to come full circle in another decade or so.
Re:The More Things Change.... (Score:4, Interesting)
I think more people will start running their own small servers. Cheap storage, always-on internet, dynamic DNS, better software. It's what I do. I have a NAS and OwnCloud and sync all my mobile stuff to that. It's all the benefits of having your data always accessible without the drawbacks of turning over your files to a 3rd party.
Re: (Score:2)
Your Cloud is not the same cloud we talk about.
There is a slight difference between 'Cloud Storage' and 'Cloud Computing'.
Re: (Score:2)
Re:Tao (Score:5, Funny)
- and another one, somewhat abridged:
A Windows admin, a UNIX admin and a mainframe admin went to the toilet at the same time:
- the Windows guy finished first, washed his hands and wiped his fingers on a huge wad of paper towels, which he threw on the floor, mostly unused
- The UNIX guy washed his hands and carefully dried them with 1 paper towel, which he then deposited in the bin
- The mainframe guy just headed for the door, remarking "I learned long ago not to piss on my fingers".
Re:Tao (Score:5, Funny)
Joke is not realistic due to excessive social interaction.
Re:Tao (Score:5, Funny)
Re: (Score:3, Funny)
Mac Admins shoved their heads up that port so that everything ugly about them was not exposed to the rest of the world.
Re: (Score:2, Funny)
- The mainframe guy just headed for the door, remarking "I learned long ago not to piss on my fingers".
But sadly, he could not escape the doorway, having somehow grown in size while trying to take a crap, and he remained there for all eternity, fixed in place by his massive bulk.
Most of the jobs formerly done by mainframes are now done by clusters of PCs, like a team of small employees swarming around getting stuff done while that guy is still stuck in the bathroom
Re: (Score:3, Informative)
Funny, .. 10 ? [they come and go every few years ]
My company's (6,000-employee) mainframe has 2 admins - that's all.
vs
The Windows Server Team ( UCS, AIX, standalone servers ) has
The storage Team has 2, neither here more than 3 years
The security team has 5
1 mainframe w/2 ethernet ports
vs
100 physical, 500 VMs, 3 UCS environments ( all the networking infrastructure - Nexus 5Ks & Fexes to connect it all )
Isilon, Pure, EMC, DataDomain,
2 vs 17 staff
2 NICs vs hundreds of ports in a dat
Re:Tao (Score:4, Informative)
Doesn't it depend more what you do with those servers and mainframes instead of how large your company is? I've worked places where the mainframe was used to run decades old code that only had rare changes, and otherwise kept going doing mostly the same thing with minor hardware issues over the years and occasional big deals to make minor API changes. The regular servers on the other hand were always involved in new software, new web services, updates to both looks and functionality exposed to clients, new internal tools, tests of new tools that never became part of standard service, etc.
I've also been places where a single admin took care of all of both the windows and linux servers, as they were just used for generic office support, with people just needing shared resources and desktop computers that could manage basic terminals, text editors and IDEs. However, since the mainframes involved software undergoing active development, and testing on different systems, there was a whole team of admins keeping things going and dealing with subtle deployment issues, etc.
Re: (Score:2)
You forgot one minor detail - Windows guy was taking a dump.
Re: (Score:2)
Re: (Score:3)
It's not the piss; that's sterile. It's the zoo of microbes crawling all over your dick
For most men anyhow, they keep their dick in clean cotton underwear, but touch many things with their hands. Typically, your hands get your dick dirty, not the other way around. Didn't Penn and Teller do a "Bullshit!" on this? Your ass has less bacteria than your hands.
It's good to wash your hands on the way out of the bathroom just because it's good to wash your hands occasionally, and hey there's a sink there, how convenient.
Re: (Score:2)
Re: (Score:2)
Then he took a selfie.
Mainframe vs PaaS and SaaS (Score:2)
Aren't PaaS/SaaS just the next step in mainframes?
Re:Mainframe vs PaaS and SaaS (Score:5, Insightful)
From a business point of view they can be similar.
From the perspective of the mainframe guys, the whole point of a mainframe is that it is a single machine handling all of your transactions. Basically, it is simpler to deal with all kinds of transaction problems when you are not using a vastly distributed system with thousands of nodes. Typically PaaS/SaaS are large distributed systems.
To reliably and consistently handle a very large stream of very important transactions where you basically need 100% reliability, they are a real option. The business case for a mainframe is something like: it would cost 200mln per year for some bank to make a failure-proof distributed system, and 100mln to do it with a mainframe. Outside of this type of system, it is hard to think of any use for a mainframe, given the cost and complexity.
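The coordination cost the comment alludes to can be sketched with a toy two-phase commit in Python. This is purely illustrative (the `Node` class and its methods are invented for the example, not any vendor's API): on a distributed system, every transaction must survive the possibility that any node fails mid-flight, which is the failure-handling burden a single machine image avoids.

```python
# Toy two-phase commit: illustrates the extra failure handling a
# distributed system needs that a single-machine system does not.

class Node:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.staged = None

    def prepare(self, txn):
        # Phase 1: each node durably stages the change and votes.
        if not self.healthy:
            return False
        self.staged = txn
        return True

    def commit(self):
        # Phase 2: apply the staged change.
        applied, self.staged = self.staged, None
        return applied

    def abort(self):
        self.staged = None


def two_phase_commit(nodes, txn):
    # If ANY node fails to prepare, every node must roll back --
    # coordination that simply doesn't exist on one box.
    if all(n.prepare(txn) for n in nodes):
        return [n.commit() for n in nodes]
    for n in nodes:
        n.abort()
    return None


nodes = [Node("a"), Node("b"), Node("c", healthy=False)]
print(two_phase_commit(nodes, "debit $100"))  # None: aborted everywhere
```

Real protocols must additionally handle a coordinator crash between the two phases, which is where most of the "200mln per year" engineering effort would actually go.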
Re:Mainframe vs PaaS and SaaS (Score:5, Interesting)
And yet, if you open up a mainframe, you will see that on the inside, it is exactly a vastly distributed system with thousands of nodes.
No it isn't. Even this latest monster doesn't have that many actual processors in it.
The main advantage to a mainframe is its ability to shovel around vast amounts of data very rapidly. IBM has offloaded a lot of the I/O work onto the peripheral data controllers ever since the System/360.
Technologically, mainframes are lagging. This is the first IBM mainframe that has had the ability to run multiple instructions at once on a single core the way Intel chips have done for many years now. The processor clock speed isn't anything outstanding for the day, either.
It really sounds a lot like the beginning of the end. There's a lot of interesting stuff that IBM has done to their big iron systems over the years, but it's pretty much stuff that didn't transport outside their own little world. Within that world, you have all sorts of interconnects, and one should never underestimate the benefits of having One Big Box when it comes to power consumption and real-estate needs, especially since IBM's reputation has always been that just because you have One Big Box doesn't mean Single-Point failure.
But in the end, I think they may simply fade away. There's no cheap way to get into the mainframe business. The closest thing to open source is the Hercules emulator, but the licensing fees for any IBM OS release past 1986 mean that small businesses cannot leverage it. There are all sorts of specialized skills required that are no longer dime-a-dozen. Most software products and systems that run on mainframes have counterparts that run on commodity hardware and OSes - often cheaper, and considering what IBM has done to their workforce, often better supported.
So if you have lots of legacy code to support, or are willing to dedicate a lot of expensive resources to a totally-packaged system, this new box may be wonderful for you. For the computing world at large, it's likely to be hardly noticed.
Re: (Score:2)
The main advantage to a mainframe is its ability to shovel around vast amounts of data very rapidly. IBM has offloaded a lot of the I/O work onto the peripheral data controllers ever since the System/360.
HTH: In this case, the "peripheral data controllers" are the nodes, which lie on a network. HAND.
Re: (Score:2)
HTH: In this case, the "peripheral data controllers" are the nodes, which lie on a network. HAND.
No, I meant Channel Processors, which were stock back when networks were expensive add-ons.
Depending on the mainframe model, a channel processor might be microcode, custom hardware, or even a microprocessor.
Re:Mainframe vs PaaS and SaaS (Score:5, Informative)
And yet, if you open up a mainframe, you will see that on the inside, it is exactly a vastly distributed system with thousands of nodes.
No it isn't. Even this latest monster doesn't have that many actual processors in it.
The main advantage to a mainframe is its ability to shovel around vast amounts of data very rapidly. IBM has offloaded a lot of the I/O work onto the peripheral data controllers ever since the System/360.
Technologically, mainframes are lagging. This is the first IBM mainframe that has had the ability to run multiple instructions at once on a single core the way Intel chips have done for many years now. The processor clock speed isn't anything outstanding for the day, either.
Eh... No.
The Z196 processor (2010) implements superscalarity (5 wide, 3 decode) and out-of-order execution at 5.2GHz.
The Z12 processor (2012) has 7-wide execution and runs at 5.5GHz.
They are top-of-the-line products.
Re:Mainframe vs PaaS and SaaS (Score:4, Funny)
Re: (Score:2)
This is the first IBM mainframe that has had the ability to run multiple instructions at once on a single core the way Intel chips have done for many years now.
Eh... No. The Z196 processor (2010) implements superscalarity (5 wide, 3 decode) and out-of-order execution at 5.2GHz. The Z12 processor (2012) has 7-wide execution and runs at 5.5GHz. They are top-of-the-line products.
Perhaps they were referring to multithreading (which I think is new in the z13, although I read something about IBM having experimented decades ago with a 2-way-threaded variant of the System/360 Model 195) rather than to superscalarity. (Presumably they weren't referring to chip multiprocessing, either, as the z13 isn't the first multi-core z/Architecture chip.)
Re:Mainframe vs PaaS and SaaS (Score:4, Interesting)
The reason that this is the first mainframe with SMT is simple. Prior to the previous generation (z12), most mainframe workload was z/OS, and z/OS has no support for SMT. Starting with z12, a whole lot of mainframes started being used for new workload (Linux). Now it makes sense to add SMT, so they did. It has nothing to do with 'technologically lagging'.
As for clock speed being 'not outstanding', looking around Intel's site for server chips I don't see anything clocked above 3.4GHz. This new mainframe runs at 5GHz (previous generation was 5.5GHz, but the new one is still faster).
The 'cheap way' to get into the mainframe business is Linux, and many companies are doing it.
The reasons customers are running Linux on mainframes is for the same reasons they run anything on mainframes. In many cases, it is just a better value. The 'legacy is the only reason for mainframes' mantra is really old and tired, and is only repeated by people who know very little about mainframes.
Re: Mainframe vs PaaS and SaaS (Score:2)
This article on SMT and the mainframe [ibmsystemsmag.com] mentions that it has the potential to complicate billing...
Re: (Score:2)
The reason that this is the first mainframe with SMT is simple. Prior to the previous generation (z12), most mainframe workload was z/OS, and z/OS has no support for SMT.
Presumably by "support for SMT" you mean "understanding that you don't have n processor cores with their own CPU resources, you have n/T processor cores, each of which can run T streams of code at once sharing some of those resources", so that the scheduler might not treat all entities capable of running streams of code the same.
I don't know whether z13's SMT manifests itself as each core looking like two processors, but I have the impression that other chips that implement T-way SMT look mostly like chips
Re: (Score:2)
The link in the post above yours provides a good discussion. What you say is correct: it appears as 2 processors. However, it does not provide the performance of 2 processors, but has about a 40% increase over 1 processor. This means that neither thread is running at full speed. Much mainframe workload depends on fast uniprocessing, so slowing down a thread by using SMT is not desirable in those situations. Therefore, z/OS would have to specifically allow certain jobs to use SMT.
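The arithmetic behind that trade-off is worth spelling out. Taking the figures cited in this thread at face value (a ~40% throughput gain from 2-way SMT), each individual thread ends up running well below full single-thread speed:

```python
# Back-of-the-envelope SMT math using the ~40% figure cited above.
single_thread = 1.0          # normalized throughput with SMT off
smt_total = 1.4              # total core throughput with 2 SMT threads
per_thread = smt_total / 2   # what each of the 2 threads actually gets

print(f"per-thread speed with SMT: {per_thread:.2f}x")    # 0.70x
print(f"whole-core throughput gain: {smt_total - 1:.0%}")  # 40%
```

Which is exactly why a latency-sensitive z/OS job would want SMT off, while throughput-oriented Linux guests would want it on.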
Re: (Score:2)
The link in the post above yours provides a good discussion. What you say is correct: it appears as 2 processors. However, it does not provide the performance of 2 processors, but has about a 40% increase over 1 processor. This means that neither thread is running at full speed. Much mainframe workload depends on fast uniprocessing, so slowing down a thread by using SMT is not desirable in those situations. Therefore, z/OS would have to specifically allow certain jobs to use SMT.
So that, for example, tasks within a job with SMT enabled could be scheduled with two tasks running on the same core as separate threads, rather than running on different cores?
(In the synchronicity department, the quote that popped up on /. when I went to your posting was "Your mode of life will be changed to EBCDIC.")
Re: (Score:2)
You are ignorant; go read the specs on the chipsets used. Intel's best are inferior.
Re:Mainframe vs PaaS and SaaS (Score:5, Insightful)
No. PaaS is scale-out, while a mainframe is scale-up. A scale-out architecture is good at processing a lot of different requests, but does not offer very good results for high-frequency complex operations, because by nature the distribution of workloads over a large network is costly. Anything similar to Newton's method would be a good example of a workload that doesn't translate well to a scale-out architecture.
I'm not saying that many mainframe applications couldn't be replaced by a cloud computing solution, but there are situations where latency and expensive orchestration are not acceptable.
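To see why Newton's method is a poor fit for scale-out, here is a minimal sketch (the function and starting point are chosen just for illustration): each iterate depends on the previous one, so the iterations form a sequential chain that cannot be farmed out across nodes.

```python
# Newton's method for sqrt(2): solve f(x) = x^2 - 2 = 0 via
# x_{n+1} = x_n - f(x_n)/f'(x_n). Each step NEEDS the previous x,
# so the loop is inherently sequential -- no scale-out parallelism.

def newton_sqrt2(x0=1.0, iters=6):
    x = x0
    for _ in range(iters):
        x = x - (x * x - 2) / (2 * x)  # depends on the prior iterate
    return x

print(newton_sqrt2())  # converges to ~1.4142135623730951
```

Distributing this would mean shipping one number across the network per step, paying full round-trip latency for a handful of floating-point operations, which is the "expensive orchestration" cost the comment describes.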
Re:Mainframe vs PaaS and SaaS (Score:4, Interesting)
I've seen situations where trying to replace a mainframe with a server ended in bitter failure and hundreds of thousands of dollars of expense. We're talking batch processing millions of records on the Mainframe in a few minutes, while a server managed 30,000 in a day. Sometimes, the mainframe just has better hardware.
Mainframes are designed to take hardware and software upgrades without interrupting software processes. If we ever implemented Linux's user space APIs on Minix 3 (e.g. kevent, iptables), we could run udev, dbus, and other Linux-specific subsystems on a microkernel; it would be similar to a mainframe, in that you could upgrade the core OS without rebooting, yet dissimilar, in that it wouldn't be a virtualized cluster like OpenMOSIX in which the applications move onto another running OS when you want to reboot one VM.
Security policies on the mainframe are different from a PC's, too. High security means each application is so isolated as to effectively run in its own VM, from a practical standpoint. From a technical standpoint, the OS is just so good at confining applications to what they're allowed to do (and those privileges are so well-defined) that it achieves similar isolation to running in separate VMs. This drastically reduces downtime. Some effort has gone into Linux on the GrSecurity side to apply kernel write-execute separation; and, again, Minix 3 or a similar OS could create strict memory policies to prevent drivers from accessing kernel RAM not related to the driver and the process invoking it; this plus process groups and containers (as in Linux) and mandatory access control policies would come close to, if not reach parity with, a mainframe.
I have enough understanding to know what must be done to create something, but not how. If I knew how, I'd have long ago added services to Minix 3 to run Linux desktop subsystems for systemd, udev, and dbus; created a policy manager which can define application access policies by contexts, user, and the user's container policy (e.g. Pidgin can access the user's configured $HOME/.pidgin/ and $HOME/download/pidgin/ for read-write, etc.); and modified some of the interfaces to store data relevant only to specific processes in separate pages, and only map those pages in the appropriate context, so that a bug writing all over memory would have limited-scope damage even in kernel (this is hardly ever an issue in Minix to start with). But nay.
Thanks for the pointer to Minix 3! See also FONC (Score:2)
http://www.minix3.org/ [minix3.org]
How workable could it be as a general desktop at this point, like to read email and browse the web? And do some development, whether with Eclipse or something else, for C, Java, and JavaScript?
Does Node.js work on it yet?
http://stackoverflow.com/quest... [stackoverflow.com]
"Thanks! I did try getting NodeJS to work in Minix3 but it simply did not work, worked with a couple of guys and there are too many unresolved dependencies and its just a pain... I will try other microkernels and see if I have better lu
Re: (Score:2)
Re: (Score:3)
YeeS. They Aare.
plausible for some setups (Score:5, Insightful)
The IBM pricing really is quite high (there are a ton of licensing fees for the hardware, maintenance, and software). But the systems work reliably. You get a giant system that can run a whole lot of VMs, with fast and reliable interconnects, transparent hardware failover (e.g. CPUs inside most mainframes come in redundant pairs), etc. To get a similar setup on commodity hardware you need some kind of "cloud" orchestration environment, like OpenStack, which can deal with VM management and migration, network storage, communication topology, etc. The advantage of an x86-64/OpenStack cluster solution is that the hardware+licensing costs are loads cheaper, and you don't have IBM levels of vendor lockin. The disadvantage is that it doesn't really work reliably; you're not going to get 5 9s of uptime on any significantly sized OpenStack deployment, and it will require an army of devops people to babysit it. The application complexity also tends to be higher, because failures are handled at the application level rather than at the system level: all your services need to be able to deal with non-transparent failover, split-brain scenarios, etc. Also the I/O interconnects between parts of the system (even if you're on 10GigE) are much worse than mainframe interconnects.
Re:plausible for some setups (Score:4, Interesting)
Besides the price, I'm always on the fence regarding IBM's approach to licensing. On one hand it feels like having an itemized bill with individual licenses and fees for everything down to individual screws gives more control to the buyer (as opposed to a "bundle" where one could feel like he's paying for stuff he doesn't need), but in my experience it's almost impossible to seriously weed out (or even understand) items from the list.
My best billing experience has been in a small business that was using Dell's financing. No big upfront cost, a simple monthly amount to pay. Need one more server or ten more workstations? No problem, the stuff is delivered and the monthly amount is increased by $200. Awesome.
Re: (Score:3)
The licensing model on an IBM mainframe is different depending on what OS you run (and which CPU you use). In my experience, the really prohibitively expensive model is when you run z/OS + certain 3rd party packages, because they tend to charge you for number of seats, CPU cycles, etc etc etc. I think one of the reasons why IBM went down the linux and Java routes was exactly that - it appears they can't easily move away from the old licensing model on z/OS, but you can run both Linux and Java really cheaply
Re: (Score:3)
And what you say about mainframe stability vs other HW - I don't think this is entirely true.
If you mean just individual servers, I agree, regular HW isn't too bad, and doesn't take a ton of admin power. I was thinking more of the case of replacing a big mainframe with a cluster, which has a whole different kind of administrative overhead. You can generally assume that a mainframe stays internally connected and working: CPU cards don't randomly lose connections to each other, your database and application s
I didn't think they called them that these days (Score:5, Funny)
IBM dude: It might look like a mainframe, but it's a high-capacity, legacy-compatible, fault-tolerant application server.
Me: What's the difference?
IBM dude: About 200 grand.
Re:I didn't think they called them that these days (Score:5, Interesting)
Have you seen those beasts? They come with earthquake kits (hydraulic suspension, gyros, etc.), waterproof cable connectors (to keep working in a small flood) and nitrogen-rich fire-resistant enclosures. Drives are snapped into a backplane because loose cables are a liability, and IBM even provides an optimal distribution of redundant components inside the case based on their extensive records of hardware failures experienced by all their large customers in the last 20 years (because of course those machines are not serviced by the customers themselves).
This kind of big iron is definitely not a pimped pizza box. It is an amazing piece of engineering. Loud, expensive, inflexible, but truly amazing.
Re: (Score:2)
waterproof cable connectors
Of course they have waterproof cable connectors . . . because the things are liquid cooled! Do you think that liquid cooled computers were only for games?
If you get a chance to visit the IBM lab in Böblingen in Germany, they have some mainframes with plexiglass casings. The first thing that customer executives ask when they see them is, "Is that all that there is in them . . . !"
The next question is about the liquid tubes inside them. And then you need to tell the executives that the Internet
Re: (Score:2)
Erm? Why the hate? ...
Programming on a mainframe is much different from 'ordinary' programming.
And there are still plenty more mainframes around than just IBM's.
Re: (Score:2)
Yes. My first "proper" job was programming on the bastards. We weren't normally allowed in the machine room but we got a tour on our first day.
Not quite sure what that's got to do with anything I wrote, but never mind.
Re:I didn't think they called them that these days (Score:4, Interesting)
I disagree that they're inflexible. Capacity On Demand (COD) gives customers the ability to pay for additional capacity (engine/CPU) only while that additional capacity is turned on. A mainframe is typically partitioned into LPARs at bare metal using PR/SM. You can add/remove/rearrange LPAR configurations. Then there is z/VM -- IBM's crown jewel of mainframe software. This is where most shops run virtualized Linux servers. Create, start, stop, reconfigure guests as needed. You can reallocate storage among guests very easily. You can create software-only (virtual) networks for the guests. IBM's latest version of z/VM supports OpenStack. There are many other features; these are just the main ones that come to mind. I don't call this inflexible.
Re: (Score:2)
Sounds like a wicked expensive VCenter cluster.
Re: (Score:2)
My focus had nothing to do with cost -- it was regarding the topic of flexibility.
Re: (Score:2)
Capacity On Demand, to a business guy, means you can buy additional capacity only when you need it, thus saving money. To a geek, it means that they deliberately cripple the system unless you pay them more money. I'm a geek.
Re: (Score:2)
And what, exactly, is the problem with that?
Re: (Score:2)
To a finance guy, it means you pay for all your hardware and its power use, but have access only to a subset of it most of the time. Inactive hardware costs exactly the same as active hardware to develop and manufacture. You are paying for a "discount" on spare capacity with jacked-up prices for active capacity.
Re: (Score:2)
You think idle cores take the same power as active cores? You think software costs the same when running on one machine as it does on a machine with 446 times the performance? You think a finance guy wants to pay for the development costs of a 141-processor machine running at 5GHz when all he needs is a 5-processor machine running at 3GHz? Or maybe you think that if IBM wants to be able to offer machines of different capacities the actual hardware should be different, and by some unfathomable miracle
Re: (Score:2)
Of course! A physical 5 processor machine running at 3Ghz would cost peanuts in hardware and power compared to an intentionally crippled 141 processor mainframe. What really happens is you pay (cost - subsidy) for the crippled machine and you or someone else eventually pays (cost + subsidy) for the less crippled machine.
With software, the equation is better. If someone sells you an edition limited to certain number of concurrent transactions, there are no manufacturing costs for your copy that have to be pa
Re: (Score:2)
It is called the CEC (Central Electronics Complex).
It's like the "on a computer" patents. (Score:5, Insightful)
For example, the mainframe system can allow automated fraud prevention while a purchase is being made on a smartphone.
Because that's so much different than preventing fraud on a purchase being made from a desktop PC.
smartphone vs PC (Score:2)
No kidding. The only thing I can think of is that the interconnects and processing power of the mainframe allow more heuristics to be run on people's purchasing patterns. Odd pattern = fraud possibility.
Thing is, right now they often consider the individual's purchasing patterns. What about if a whole lot of people start buying from one company? Different pattern to be spotted, can still be fraud.
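The per-customer heuristic described here can be sketched in a few lines. This is a deliberately simplified stand-in (the function name, data and the plain z-score rule are invented for illustration; production fraud systems use far richer models): flag a purchase whose amount falls far outside the customer's historical pattern.

```python
# Toy per-customer fraud heuristic: flag amounts more than `threshold`
# standard deviations from the customer's historical mean.
from statistics import mean, stdev

def looks_fraudulent(history, amount, threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu     # no variation seen: anything new is odd
    return abs(amount - mu) / sigma > threshold

coffee_habit = [4.50, 5.25, 4.75, 5.00, 4.95, 5.10]
print(looks_fraudulent(coffee_habit, 5.40))    # False: within pattern
print(looks_fraudulent(coffee_habit, 900.00))  # True: way off pattern
```

The "whole lot of people suddenly buying from one company" case in the comment would need a second, merchant-level model layered on top of this per-customer one.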
Re: (Score:2)
The difference is in the volume of transactions. You don't find many people purchasing their morning coffee (for instance) with a PC.
Re: (Score:3)
Actually it is. If you're buying something on a PC you're probably going through a vendor's website, like Amazon or the Apple Store. They are handling most of the transaction cost, CPU-wise, probably offloading some of it to a 3rd party at some point. It's an asynchronous operation - your credit card gets billed within a few minutes, your receipt comes a little bit after that. You have some time to do the credit history checks, credit card purchasing history, whatever else needs to take place for fraud
Re: (Score:2)
Actually, Apple Pay is just a fancy credit card.
The Apple part is only involved in setting it up - your phone talks to Apple who talks to you
Re: Yes it is different, actually. (Score:2)
What walled garden does Google have?
Re: (Score:2)
What walled garden does Google have?
You pose a genuinely interesting question - where, exactly, is the cutoff between "walled garden" and "open"? Google hasn't done much good in the way of proactively keeping their systems open - even the Nexus phones ship with locked bootloaders. KitKat severely limited the utility of MicroSD cards. Using an Android phone without a Gmail account isn't impossible, but it requires a whole lot of deliberate footwork. Lollipop is integrating some of the Samsung Knox stuff, as well as other security enforcing thi
they are dying (Score:5, Informative)
The death of IBM's mainframes is happening; it was never going to be an overnight thing, though. We just replaced our 2 IBM mainframes, which cost us just over 10 million each plus licensing and maintenance costs each year, with around 2 million of Intel-based servers. Yes, each of those boxes is almost a little mainframe in itself, with 80 cores per machine and 4TB of memory, but they run at a fraction of the cost (with more total processing power than the mainframes they replaced) even when providing full cold-standby redundancy. There are 3 other places in town that I know of that also run mainframes: 1 has 6 of them, with a 10-year plan to phase them out; another has 2, which will be gone by the end of 2016; and the last is the only holdout in town, waiting to see how our replacement has gone (so far, 6 months in, they are happy; another 12 months and the mainframes will be completely turned off).
Re:they are dying (Score:5, Insightful)
Thanks, that's an interesting comment. Especially with x86 servers getting fairly big these days (the 80-core, 4TB-ram monsters you mention), I can see that being plausible for some scenarios. Are all the services you ran previously each able to fit in a single x86 server now? If so that sounds like it'd greatly ease migration. One of the big pain-points of migration from mainframes to x86 clusters has traditionally been that it's hugely expensive to re-architect complex software so that it will run (and run reliably) on a distributed system, if it was originally written to run on a single system. But if the biggest single service you run is small enough to fit in your biggest x86 box, then you don't have to do the distributed-system rewrite.
Re:they are dying (Score:5, Interesting)
I'd be lying if I said it was easy, and the architecture we have for the moment (temporary) is terrible: we basically did a giant recompile of the majority of the COBOL code and have it running on a single server (though it uses at least 70 cores of that capacity for most of the day). Long term it will be recoded, with part of the savings from the mainframe decommissioning used to re-architect it into a more scale-out rather than scale-up design. We had to extract all the batch, nightly analytics, archiving, forensics and various other processes onto separate machines and recode some processes to be more parallel. A 2-year project all up (plus a great deal of planning before that). It actually runs on 6 of those monster-specced machines, though only one of them is ever really stressed; the rest are there for redundancy, testing and keeping a lot of the miscellaneous tasks off the core machine.
Re: (Score:2)
Re: (Score:2)
That's why you buy multiples with hot and/or cold standby, and you can afford to replace them much more regularly. 2 million provides multiple machines of that spec, not one. Good design and architecture allow for failures; if you plan for the hardware to fail, this isn't a problem, and it can be a much cheaper approach. Not to mention mainframes fail as well: we had 3 hardware failures that brought our mainframe down over the past 5 years.
Re:they are dying (Score:5, Insightful)
This is starting to sound more and more like bullshit. Were all of your COBOL programs completely self-contained (highly unlikely)? You didn't use any CICS, IMS, database, or any other middleware? You didn't use any VSAM datasets or any record IO? You didn't have any dependencies on JCL associating 'files' to 'datasets' and specifying how files should be opened and what should happen when they are closed?
Re: (Score:2, Insightful)
Oh, and what is the name of the place? I don't want to have an account there for at least 10 years.
Re: (Score:2)
Most likely you already have accounts with places that have replaced mainframes and don't even know it; most don't like to publicise it due to bias and ignorance like yours. When we were looking at our mainframe replacement, we spoke to banks, government departments and large insurance companies all over the world who had all made the transition, but to discuss it every last one of them demanded NDAs, especially the banks. Partly it is that they didn't want the competition to be aware of what they are doing, but
Gotta love them ... I do. (Score:2)
Look to the future and imagine the internet of things ... yet all those smart devices rely on the big-data-consuming services of their proprietors.
Why are mainframes so hard to replace, you ask? Versatility is my answer.
Sure now, how can a machine weighing tonnes be agile and able to adapt to an ever-changing world?
The answer is within that ever-changing world itself. Ask yourself: what is a mainframe, exactly?
I see mainframes as ultra concentrated bulks of technology, not measured in mips or dry-sto
Re: Gotta love them ... I do. (Score:4, Insightful)
Having to rewrite 4 decades worth of COBOL is also a prohibitive factor.
Re: (Score:3)
You need way more than a COBOL compiler. You probably also need CICS and other middleware. You are not going to have VSAM files, etc.
Re: (Score:2)
You would be amazed at how much stuff has been created to make porting easier; there are even emulators for JCL. There are migration tools for VSAM too.
Nah. (Score:2)
Re: (Score:2)
Rediscovery of the mainframe my ass (Score:5, Interesting)
"We kind of rediscovered the mainframe," says Peri.
Every time IBM announced a new mainframe line of products, they hired an "external" consultant to say exactly the same bullshit. Since 1990, they have announced the rediscovery of the mainframe at least five times. IBM is addicted to the mainframe, given the large chunk of revenue associated with it. So they serve us the same marketing bullshit each time.
Green with Envy (Score:5, Funny)
"The name change serves to signal
Us old farts are envious of the new digital mainframes - we were seriously handicapped back then, working on all those old analog mainframes.
It isn't that mainframes are eternal, it's that marketing wonks who write this sort of stuff are allowed to breed...
Re: (Score:2)
Maybe they mean digit as in finger. Touchscreens and all that.
73 Years Late (Score:2)
IBM releases the Z9M9Z...
Death to the MLC (Score:2)
Re: (Score:2)
I love the platform operationally. The licensing sucks in every way.
"prevailed"? (Score:2)
I think "prevailed" is overstating things a bit. Mainframes have more "held on" than prevailed, despite the march of the killer micros.
Re: (Score:2)
Nonsense; there is plenty of work those micros can't do, upon which modern civilization absolutely depends. There is no substitute. No one who has done serious financial IT work ever believed the mainframe was dying or going away.
Re: (Score:2)
Nobody expects a single desktop PC to do the job of a mainframe. It's true, too, that mainframes have their place for certain types of work. Don't claim, though, that a rack full of blades can't be clustered to do a similar job. It happens all the time.
Re: (Score:2)
No, a rack full of blades can't handle the hundreds of I/O channels, nor does it have the 10TB of common RAM that this mainframe does.
who are the customers? (Score:2)
Re: (Score:2)
Only for organizations that need a dozen of those (Score:2)
No matter how reliable and maintainable the box itself is, I wouldn't want my business to lose days or weeks of revenue because the datacenter was swallowed by a sinkhole. Having hundreds of cloud compute instances around the world also helps compensate for network latencies and quickly cut expenses during a business downturn.
I am sure there is a scale at which it makes sense to have dozens of these boxes rather than many thousands of separate instances. Just not sure if volume is enough for IBM to recoup t
Re: (Score:2)
> swallowed by a sinkhole
Or cratered by an asteroid!
Re: (Score:2)
That is why there is GDPS (Geographically Dispersed Parallel Sysplex). Have you ever heard of a bank or credit card company or reservation system losing 'days or weeks' (or even seconds) of data due to a datacenter outage? Do you think that is because by some miracle there has never been a datacenter outage?
Honestly, you guys really need to learn what modern mainframes are and what they do before you go spouting off.
Re: (Score:2)
Hence the point that you should not buy those unless you can afford a dozen. If I only need the power of one of those, I would be better off purchasing less powerful/cheaper systems to distribute worldwide.
Re: (Score:2)
Why do you need 'a dozen'? You need 2 for redundancy, and they don't even need to be the same. The largest machine has 446 times the capacity of the smallest. Your primary datacenter is configured for your main workload. Your backup datacenter can be the smallest-capacity machine with enough CBU engines to cover the workload of the larger machine. CBU engines cost a fraction of a regular engine. When a disaster happens, you activate the CBU engines and the workload is transferred to the backup datacenter.
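The CBU arrangement described above can be sketched as a toy model (the capacity numbers below are invented for illustration, not real z13 figures):

```python
# Toy model of Capacity Backup (CBU): the backup machine runs with minimal
# active capacity until a disaster is declared, then its dormant CBU
# engines are activated to absorb the primary site's workload.
# All numbers are invented, purely illustrative.
PRIMARY_CAPACITY = 100   # arbitrary capacity units at the main datacenter
BACKUP_BASE = 10         # small machine handling day-to-day backup duties
CBU_ENGINES = 90         # dormant engines, cheap until activated

def backup_capacity(disaster_declared: bool) -> int:
    """Effective capacity of the backup datacenter."""
    return BACKUP_BASE + (CBU_ENGINES if disaster_declared else 0)

print(backup_capacity(False))  # 10  - normal operation
print(backup_capacity(True))   # 100 - enough to carry the primary workload
```

The point of the pricing model is visible even in the toy: you pay full price only for the small always-on capacity, while the disaster headroom sits cheap and idle.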
Re: (Score:2)
You are going to serve your Chinese customers from US datacenters? Hehe. Connectivity between world regions is glitchy and high-latency. Amazon, for example, provides 9 regions for its compute instances, and they don't spend money on those datacenters just for the heck of it.
Even within a region, you are proposing paying for network and other equipment to handle 100% of peak traffic in each datacenter, while one sits idle most of the time. It would be much cheaper, and provide lower latency, to have 3 datacenters
Every 10-15 years, I hear the same thing (Score:3)
"Mainframe declared dead, film at 11".
And within a year or two, IBM announces that they're shipping more mainframes that year than they've ever sold before.
Datapoint: around 2001 or so, some crazy at IBM, using VM (IBM tech first developed in the seventies), maxed out a good-sized mainframe running 48,000 *separate* instances of Linux, and it ran happy as a clam with "only" 32,000.
How many VMs you got on your server?
*I* have a nice toolbox in my head, with hammers and wrenches and screwdrivers, on how to program on everything from MS DOS to mainframes to Linux. I also know how to admin all but the mainframe, but know something of that. What, Ah say, what do you have to compare, Boy? One ballpeen hammer that only works in Windows?
mark, who prefers to use the right tool for the job
Come on, now. (Score:2)
Who didn't think mainframes were like the epitome of cool back in the 80s/90s? Who doesn't, now, want one of those massive computer-as-art Cray installations, with the comfy couch and the processor coolant trickling across some sculpture that served as eye candy as well as a radiator, and the subtle blinkenlights flashing away, seemingly at random? And then came the Beowulf.
Re:2.5 billion transactions a day (Score:5, Interesting)
Mainframes are like really big industrial cars where everything is hugely expensive. They're stupid expensive, but far cheaper than trying to do massive amounts of work with thousands of pickup trucks.
It's like the transporter they use to move the space shuttle with rockets and all ready to go:
http://en.wikipedia.org/wiki/C... [wikipedia.org]
It goes 1MPH, which sounds pretty wuss-tastic in car terms, until you consider how much capacity it has at that speed. It would be basically impossible to accomplish the same thing with any number of VW Beetles without spending years taking apart and reassembling everything each time you wanted to attempt a launch.
That's where mainframes make sense - problems which are really massive, but need to run on one computer. Any problem that can be broken down into smaller chunks can be solved much more efficiently with a network of smaller computers.
As the smaller computers continue to get more and more capable and the technology to break down problems and high speed interconnects become more common, the jobs that run better on a mainframe get more rare and networks of servers become more common.
Mainframes do have one cool thing going for them that is not respected on smaller machines - portability. There's code that's been in use for several decades on mainframes running in a stack of emulators. Each new mainframe gets an emulator that makes it possible to act just like an old mainframe. This means the customer can run their code on the emulator instead of having to tweak the code to work on the new mainframe. For jobs that justify mainframe costs, downtime is very expensive, so minimizing additional conversion effort is huge. Also, it's entirely possible that the last person who knew how some mission-critical code worked died 40+ years ago, and business people aren't big proponents of hiring someone to figure out and rewrite legacy stuff.
Re: (Score:2)
Mainframes do have one cool thing going for them that is not respected on smaller machines - portability. There's code that's been in use for several decades on mainframes running in a stack of emulators. Each new mainframe gets an emulator to make it possible to act just like an an old mainframe.
Actually, since System/360, each new IBM mainframe got a CPU that executed an instruction set that was a superset of the previous mainframe's instruction set, just as, for example, an 80486 executed an instruction set that was a superset of the 80386's. They did have to provide a mode bit for, say, 24-bit addressing vs. 31-bit addressing, but that's about it - there's also a difference between 64-bit mode and the 32-bit modes (24-bit addressing and 31-bit addressing), but that's true of just about every 64-bit architecture.
Re:2.5 billion transactions a day (Score:5, Interesting)
The mainframe people I know, on the rare occasions they refer to transactions, have a slightly different meaning in mind than Windows or Unix people do. The mainframe people more often refer to messages: a whole discrete task, which can often require multiple database transactions, some computational passes, etc. They usually talk about hundreds of thousands of messages per hour, so if it's 2.5 billion mainframe-style "transactions" (messages), it's pretty damn impressive.
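For scale, converting the headline figure into the per-hour and per-second rates mainframe people usually quote (simple arithmetic only; no claim about the conditions under which IBM measured the z13):

```python
# 2.5 billion "transactions" (messages) per day, expressed per hour/second.
per_day = 2_500_000_000
per_hour = per_day / 24          # ~104 million messages per hour
per_second = per_day / 86_400    # ~29,000 messages per second, sustained

print(f"{per_hour:,.0f} per hour")      # 104,166,667 per hour
print(f"{per_second:,.0f} per second")  # 28,935 per second
```

That is three orders of magnitude above the "hundreds of thousands of messages per hour" the parent describes as typical, which is why the claim reads as impressive if it really means messages.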
Re: (Score:2)