Why The Dinosaurs Won't Die
DaveAtFraud writes "Ace's Hardware has a nice introductory article on the animal that will not die: the mainframe. Ever wonder why these things are still around and what makes them different from a PC or UNIX box? The article is IBM-centric, so there's no discussion of, say, the CDC Cyber series, but when most people don't even believe that mainframes exist anymore, what the hay, let's disabuse them of that notion first. Hopefully the author will follow up with the additional promised articles that go into more technical detail, but this is a good place to start. I wonder if they still make card readers, too?" This guide came out last month, but it's worth looking through, even just for the pictures.
Progress measured by size (Score:2, Interesting)
Re:Progress measured by size (Score:2, Funny)
The number of Hitchhiker references in an article is directly proportional to the complexity of the hardware under review.
Pft, overanalysis (Score:5, Insightful)
Nobody in their right mind is going to mess with them until they absolutely can't get strung along anymore, because they know that crashing, say, an HMO's appointment handling system would be what we call a "career limiting" move.
If it ain't broke, don't fix it. If it ain't broke and it's mission critical to the tune of millions of dollars an hour, avoid it like someone carrying the plague, ebola, leprosy, herpes and a bad hangnail.
Re:Pft, overanalysis (Score:2, Insightful)
Ah, yeah, we sane people can say that. But haven't you ever met the kind of department head who comes in and says, "No sir, I don't like it", changes everything, and then departs before the fecal matter hits the impeller? Stuff happens, even when conventional wisdom screams, "No, you fool, leave well enough alone", because change, goals and accomplishments are what advancement-minded people look to as opportunity. And usually it's the peons who get blamed when it doesn't work, not the guy who broke it.
Re:Pft, overanalysis (Score:3, Insightful)
If the company is big enough to use a mainframe, this just isn't going to happen. They're going to need a hugely compelling reason to switch off the mainframe, because it's almost assuredly going to be a multi-million-dollar project spanning months or years. Even CTOs generally have to have a project like that approved by a bunch of different people. How's he going to sell it?
Including cost factor of downtime (Score:3, Interesting)
Locally, one of the larger businesses has an old beastie made of wires and PCB that cannot be shut down. The reason: turning off the apparatus it's connected to would require a lot of work to get it warmed up again, and having that particular apparatus off would probably mean shutting the entire plant for a certain period of time...
In other words, not something you want to mess with unless you've tested, and tested, and tested, and scenarioed, and prayed a few times before frantically moving things over to whatever the new configuration is.
And in such times, isn't it Murphy's law that you end up with an event like "what do you mean you forgot the power cable at the office?!" just before or while going live?
Re:Pft, overanalysis (Score:3, Interesting)
Your idea is right, but your conclusion isn't. A mainframe is stable as hell... The application (an HMO's appointment handling system) running on it will crash and burn like any other application.
So it doesn't answer really why mainframes are still around.
If anything could be said in detriment to mainframes, it could only be at the hardware level (like hotswapping CPUs, and IO devices), but Sun machines can already do that sort of stuff...
Re:Pft, overanalysis (Score:5, Insightful)
Re:Pft, overanalysis (Score:3, Informative)
You say "already" like that's a good thing, but IBM had the capability decades ago and Sun are only really just catching up.
Re:Pft, overanalysis (Score:2)
My limited understanding of COBOL is that there is no dynamic memory allocation. Try writing a C program without any dynamic memory allocation. It may be hard, but I'll bet it won't crash.
Joe
Re:Pft, overanalysis (Score:3, Interesting)
Re:Pft, overanalysis (Score:4, Interesting)
People who write mission critical apps for mainframes program differently. They wear both belt and suspenders in their code. They do precise error condition tracking and recording, and when the app does crash, they make sure that the data was not corrupted so it can restart. They test for months (hell, years) before putting new versions into production. They basically program as if reliability is their number 1 priority - because it is - forsaking speed, code cleverness, memory space, anything that would get in the way of targeting 30 seconds of downtime per year or better. Oh yeah, it makes development slower, too. That's the hardest thing about developing reliable software - the pace is different. Shipping tomorrow, but sacrificing reliability, will kill you in this market. A lot of PC folk don't understand that. Software written for these environments is built like a tank. It may not be pretty and it may not get you there as fast, but it will get you there come Hell or high water. And that's why people still use these systems - not hardware, not software - but combined systems of the two.
This is all well and good... (Score:4, Interesting)
Take Google, for example. Their software flags failed units and brings them offline, and once a week they go pull them out of the racks and replace them. I believe Google builds their own, but for less aggressive businesses you could just buy enough Dells to tolerate as many failures as needed, boxing them up and shipping them back to Dell when they go south. Heck, Dell will likely send you back an upgraded unit anyhow, so you get a rolling upgrade.
Just like the network guys learned the lesson of ensuring end-to-end reliability across an unreliable network using TCP/IP, some companies are realizing that reliable computing can be enabled by clusters of PCs. It's a shame the free software/open source crowd hasn't rallied around this more... supporting this at the OS level could prove very powerful.
For a good example of what I mean, compare Traakan's SAN systems to more traditional products, like from EMC.
Re:This is all well and good... (Score:5, Insightful)
In my experience, this stuff hasn't changed significantly in years -- it's tweaked now and then, but it basically works and as such isn't messed with.
What you have to remember is that entities who are still using mainframes are both (a) very large and (b) very well established. The mainframes tend to be involved with really important tasks that are mission critical (and I mean "mission critical" in a very real sense, not in the 1999 our-webserver-is-down way), like flight reservation systems or bank account tracking systems.
What I'm trying to say is that it's a really bad idea to mess with these systems unless you really have to -- anyone with a couple of years at a suitably large company could tell you that there's nothing to be gained and everything to be lost by messing with them. The hardware and support costs are laughable if you compare them with what just a few minutes of downtime from buggy new software would cost.
Re:This is all well and good... (Score:5, Funny)
You can't expect any good mainframe to have been developed in the year Leonardo da Vinci was born. All mainframes from the middle of the 15th century suck, my word!
On the other hand, if your mainframe was from the end of the 15th century, you could at least expect this genius to do something about it.
Re:This is all well and good... (Score:5, Insightful)
You don't want your bank using the same unreliable hardware. Do you want to wait a week while the maintenance guy comes along to replace the failed node that held the records of your last deposit?
Mainframes are built for customers who simply can't take downtime or data loss. Some businesses can, many can't. If you build a bank off this idea, let me know. I'll be sure to stay away.
Re:This is all well and good... (Score:2)
Two words, Sequenced Transactions (Score:5, Insightful)
So you need to tag every transaction with a unique sequence number. This is really, really difficult when you don't have a single system with an amazing I/O throughput to assign those numbers.
A Google-type solution uses a lot of execution units, each with limited I/O capability. Queries may be parallelised without much interaction. In my example, every transaction must be synchronised. It doesn't matter if the application is spread over a cluster; the nodes must still coordinate to assign the sequence number.
I do agree with your point about adding better cluster management to open source operating systems. However, this is much more difficult than improving a standalone system, because how many people can afford to run a cluster of, say, 4 or more systems just for playing around?
Re:Two words, Sequenced Transactions (Score:4, Informative)
Are you sure?
I seem to be thinking of an identification technique involving numbers. IIRC, it was highly distributed. Each client in the system was given a 32-bit numerical representation which was used as an "address" to communicate with the other clients. These "addresses" could be assigned dynamically by various agents who were authorized to distribute a subset and report which client had which address.
The whole layout was mainly hierarchical, and completely unsynchronized.
In case you haven't caught on yet, I'm talking about the IP protocol. It's a demonstration that handing out numbers can easily be done in a distributed way.
Of course some transactions need to be sequential, like the ones you mentioned. That's why we have semaphores, and why individual records aren't usually distributed! This is basic database design, and there are plenty of good ways of doing it which DON'T require a huge amount of I/O.
There's a good bit of Computer Science theory on the subject, and there has been for about twenty years. Many professional databases designed today can work in a distributed manner, and almost all of them are capable of scaling.
Re:Two words, Sequenced Transactions (Score:5, Informative)
Currently the TSN is assigned through a cluster-wide 'semaphore' maintained by the distributed lock manager. However, one system at any time has the responsibility for logging the transactions (although the job can 'fail over' to any other system). The design means that every state change must be written out of the system, so that if an individual system dies, the others can continue from the same point; no loss of information is permitted unless a major disaster occurs.
Oh, and you can forget databases, as they tend to be rather slow. Recovery-unit-journalled ISAM files were the only approach fast enough.
There may be a lot of CompSci Theory on this subject but there is very little that is relevant when you want a highly reliable system with several thousand transactions per minute.
Oh and this particular system is running the trading at CBOT, EUREX and XETRA.
Re:Two words, Sequenced Transactions (Score:4, Informative)
Relevant theory: (Score:3, Informative)
The goal is to pick fields & tables such that:
1) Locking is minimal
2) Dependencies are minimal
3) Storage size is minimal
4) Records are meaningful
The main technique involves decomposing a database to a minimal architecture based upon all possible elements in the database, and then building it back from the basis to the desired state.
It gives you specific knowledge of the conditions by which transactions may require waiting and a way to characterize that waiting, as well as how to reduce the number of transactions you need for a given task.
Of course, that's just the database design theory that one can apply. There's also the distributed information theories that can be applied. The most primitive approach to this is to use time stamp semaphores, but it can be extended beyond that. There is actually an area of database dependency resolution devoted to making locks. I imagine the "distributed lock manager" you spoke of uses it to minimize the amount of information needed to be locked at any given node.
In both of these cases (distributed info theory and database design theory), the formalism sprang from necessity - people invented creative ways to improve how their mainframe worked, and they used the formalism to describe it. I think it might even be right to say that without using the CompSci theory, you probably won't get a terribly reliable system. You'll get a kludge - it'll work, if you're lucky.
Re:Relevant theory: (Score:3, Interesting)
This means that it isn't possible to split the option over several systems, it must match on one system in case of combination trades. If it happens to be a big day for that product (say Annual Report Time), then volume will be very high. If it is an interesting day for the economy, say election time, then whoops, there goes our performance across all products.
Now if a transaction should fail, it becomes very important (legally so) that all transactions are unwound in the order that they were made.
The distributed lock manager was rather a neat piece of technology that Digital came up with for clustering VMS. It is sufficiently neat that there is a project to try and emulate it for Linux -- interestingly enough, one from IBM. It allows for five different levels of lock to be held on a resource, and each lock can be associated with a value. VMS uses it extensively for its clustered file system (one of the better ones). We use it for hierarchical locking of the order books (each product, CALL/PUT and strike for options, and expiry/delivery date combination). The order books are sorted in price/time priority.
I have built smaller/simpler systems for other markets using databases and PC servers, using modern techniques. However, looking at the monstrosity that I started working on about 12 years ago, I can't think of radical improvements without changing those pesky exchange regulations. I guess the best would be to convert it to Linux and run it in multiple VMs on a z-Series mainframe.
Re:This is all well and good... (Score:4, Informative)
Clusters like google can give you enormous compute capability, and a form of redundancy, but they can't give you the type of error checking and correction done in the mainframes, like the self-checks done by the paired CPUs. (At least not practically.)
A couple of years ago I read an article that pointed out that today's desktop PCs have equal or greater CPU power than a 1970s mainframe. But when you measured I/O capability, the mainframe would still wipe the floor with the PC.
There's little wonder in that. Look at all the I/O channels and processors that the mainframe has. Instead of moving every byte between peripherals with the CPU, the mainframe tells one of its I/O processors: "Move that data for me, and tell me when it's done."
A typical task for a mainframe might be (every night): Read the financial records of my 10 million customers with their average of 3 accounts, 8 mutual funds, etc. Inactivate closed accounts. Activate new accounts. Put in all of the deposits from cash, checks, wire transfers, refunds, etc. Subtract the withdrawals from cash, checks, wire transfers, refunds, etc. Update the number of shares in the accounts. Now apply interest to every account. Find and report all accounts that are overdrawn, below minimum balance, or over limit. Apply penalties. You get the picture. Even if you could do this with a cluster, all you've done is move the point where the massive I/O occurs from the mainframe to a huge, expensive database cluster that services all of the I/O. (It won't be on MySQL either.) Might be simpler than a mainframe. Probably not.
Google uses the large number of systems for more than redundancy; it uses them for caching its database in RAM. They figure that the extra speed from RAM caching reduces the total number of systems that they need. So, perversely enough, they have a lot of machines to save them from having even more machines.
I'm happy letting google/SETI/Folding/etc.. search, crack, whatever.
I want a mainframe handling my bank account and mutual funds.
Re:This is all well and good... (Score:5, Insightful)
The main feature of mainframes is the staggering amount of data they can move. The mainframe is the bulldozer of the computer world. The CPU is terribly slow at certain operations - run X11 on it and have 20 people log in, and say bye-bye to your performance. But the amount of data it can move, and the speed with which it can move that data, is nothing short of amazing. Oh, and let's see you doing processor lock-stepping on a PC-based cluster.
I can't believe you got modded up to +5 for this drivel....
Re:This is all well and good... (Score:2)
The nice thing about technology such as RAID and clustering for the lower-end hardware, is that now we can make our systems as reliable as we need them to be for our particular situation.
Re:This is all well and good... (Score:4, Insightful)
A cluster of PC's isn't even in the same league as a mainframe. PC operating systems aren't designed for that type of thing. Anyone stupid enough to try this is probably also stupid enough to try using Microsoft Cluster Services. And anyone who has seen Microsoft Cluster Services in action knows that it only protects you from hardware failure --- if Windows fails (and we all know that Windows is far less reliable than the hardware it runs on), you get two parallel blue screens. (Don't mod this up as 'Funny' -- I'm dead serious here.)
Linux is reliable but most of the clustering software we have available for Linux is geared more towards parallelizing an application and getting more work done with more machines, than towards N+1 reliability. You need to be able to have processes maintain their state in parallel on multiple machines -- not an easy thing to do.
Re:This is all well and good... (Score:3, Insightful)
Pardon my pessimism, but that is not reliability.
So Google can remove broken units and replace them later. But what happens to the work that was happening on that unit when it broke? Someone's query gets lost, and they have to submit it again. No loss in Google's case.
On the other hand, a bank could not allow even one transaction to be lost to such a failure. In the mainframe discussion they talked about how even a running program, even an individual instruction, on a failed unit could be saved, moved and restarted on another unit. You can't do that on a PC.
A web server can be parallelized easily, but database servers are not so lucky. Sure, Oracle, DB2 and others can be run on multiple machines in parallel, but if one of the units goes down, so do its disks. Disk failover is not as seamless as mainframe channel failover.
True seamless failover, down to the instruction, is something that takes a lot of effort. And there are some places where it is vitally important. Web servers are just not that vital.
IBM centric? (Score:2, Interesting)
Re:IBM centric? (Score:5, Funny)
A Quick and Interesting Read! (Score:5, Interesting)
I really liked this line in the section about modern IBM mainframe reliability:
Each CPU die contains two complete execution pipelines that execute each instruction simultaneously. If the results of the two pipelines are not identical, the CPU state is regressed, and the instruction retried. If the retry again fails, the original CPU state is saved, and a spare CPU is activated and loaded with the saved state data. This CPU now resumes the work that was being performed by the failed chip.
Try that with your dual-Xeon server!
Here's a good primer (Score:5, Informative)
Re:A Quick and Interesting Read! (Score:4, Informative)
Maybe power those mainframes... (Score:3, Insightful)
But I thought that (Score:3, Funny)
Re:But I thought that (Score:2, Funny)
On the other hand, they lasted a lot longer than mainframes will.
Re:But I thought that (Score:3, Funny)
Did anyone notice... (Score:2)
On the left you have the past... and on the right, the present.
Why i think mainframes aint dying (Score:5, Insightful)
You're a big organisation that's been in business for 50+ years. You are in the biz of manufacturing Weezops (or whatever) for the various Gazaah (wtf?!) industries.
10-20 years ago you paid a big buttload of cash for a mainframe.
Today this mainframe is chugging away. Occasionally you need to screw in a vacuum tube, or maybe top up the cooling liquid, and in winter it's a little noisy.
However, your little dino is happily chugging away, calculating whatever you want it to and doing whatever it was that you got it for.
It's working. It's doing what you paid big cash for. You don't need it to make coffee, play videos, participate in distributed.net or send spam. You want it to chug along. And it's doing it.
Why change? Why pay another buttload of cash because someone is telling you, "Whoa, what have you got here? An oversized heater?! Pay another buttload of cash for this new machine that will do everything it's doing PLUS play MP3s for you, make coffee, crack encryption, search for UFOs and connect your grandma to the net!"
I don't think so.
If a machine, no matter how old, is working, and you paid a lot of cash for it, no business will get rid of it for something new just because it's new/flashy.
Just like banks and credit card companies who still use systems like GlobeStar, 8-color text-based account management software written over 10 years ago. Why? Because it does the job. Pull-down menus, icons, angry salad shooting out of CD-ROM drives, live video streaming -- it's all nice and cute, but if you have something that works, does the job the way you want it and how you want it, there's no need to change.
Sorry it's so drawn out and long, but that's the way I see it. Plus, I am sure you enjoyed the sleep.
In the words of a famous comedian, "Those are my ideals; if you don't like them, I have others."
Re:Why i think mainframes aint dying (Score:2)
That position has got to be damned near impossible to staff.
Economic inertia / Enterprise-scale applications (Score:5, Insightful)
The argument for what I call economic inertia is a good one, especially with corporate shareholders these days demanding that management squeeze everything they can out of every dollar and stretch every last penny as far as it will go.
A mainframe that does everything that you need it to do (and more) and works well with your company processes is worth far more to you than the investment of time and resources in an untested, unknown system that may or may not work. Remember that new systems don't go online until after extensive use and testing in parallel with the current one (if it's done correctly). That means duplication of efforts and resources.
Anyone who has worked at a company that builds enterprise-scale applications or mission-critical solutions knows that when the customer has an XYZ mainframe, you'd better have applications that support XYZ or you'll find the contract goes to your competitor who does. It's not an option not to support it.
Unless there is a strong business case for moving to a newer technology, mainframes will be with us for quite a long time.
A hint to the coders out there: the number of people who know and understand these systems is declining. There's a mint to be made if you can deliver services to support them.
Re:Why i think mainframes aint dying (Score:3, Insightful)
As an aside, I find a lot of people are confused about just what a mainframe is. Even at its most complicated, a VAXcluster or a Data General machine was still just a mini. They lacked the hardware redundancy and pure I/O throughput of something like a 390. Mainframes are aimed at the business market, which cares far more about I/O performance. Most of the arithmetic they're doing is still probably packed decimal, for Bob's sake. Vector units and floating point units don't really matter when you're handling inventory and cash transactions.
Coming from a former computer operator... (Score:4, Insightful)
Re:GUI + Mainframe (Score:2, Insightful)
You mean like X11? Yes, XFree86 is fully supported on Linux for S/390. If you truly want a GUI on your server..
Now why on earth would you constrain it to work over HTTP? The design requirements of a "decent GUI" are very different from the design goals of HTTP. Or are you one of those inexplicable people who believe "tunneling over HTTP" == "web-enabled" == "good"?
HTTP was never intended for low latency. It was never intended for persistence. It was never intended for asynchronous server->client updates. (Or even client->server updates.) All of these are necessary for a decent GUI protocol. Some of them have been shoehorned into HTTP as time marches on, but I don't, in general, see the point.
Note, though, that the GUI front-end doesn't have to run on the mainframe. Indeed there are quite a few good reasons to run the front-end on a front-end server instead - or even deploy direct to the end-user. This basic architectural principle is called the "client-server model", and it works pretty well.
Re:GUI + Mainframe (Score:2)
Mainframes are a very expensive source of CPU time. They are very good with IO.
I worked a job where the 6-processor multimillion-dollar mainframe was about half the speed of our workgroup Dell 8-way Xeon for CPU power. We had to share that mainframe with 5000 other employees. Running X apps on it would be a huge waste of resources.
An HTML/HTTP interface could work. It would be very similar to a 3270 terminal and would be very efficient on a mainframe (avoid memory allocation and CPU; just copy data from here to there...).
Joe
Re:GUI + Mainframe (Score:4, Insightful)
I keep hearing this, and it keeps making no sense.
If the application deployment guy and the firewall guy can't agree on whether to open the firewall port, the company has bigger problems. Somebody needs to be in charge.
In summary, using HTTP for the sole purpose of defeating firewalls is an arms race between two branches of IT. Now, arms races between competitors are what capitalism is all about... but arms races within a company are pointless. You're supposed to be on the same team here! Instead you set up a situation where the app developers and the firewall admins both have to use increasingly sophisticated measures to do their jobs.
And don't give me that "but the firewall guy doesn't know how / can't be bothered to open up ports when we ask for them". That's his job. If nobody at the company has the time or skill to operate a firewall, you may as well not even have one.
I have to conclude that the real purpose of this whole fad of overloading HTTP with things that have nothing to do with HTTP is to deploy unauthorised applications - things the company doesn't know about and hasn't approved.
All I have to say is . (Score:2)
Why "dinosaur"? (Score:5, Interesting)
Mainframes aren't dinosaurs, and never were. They are the most advanced, most capable hardware available, and the proving ground for architectural innovations that eventually filter their way down into workstations (like using a crossbar switch instead of a primitive bus). Sun's Dynamic System Domains, considered very advanced by the Unix world, are still many years behind mainframe LPARs, and Sysplex makes SunCluster look like a silly toy. User-mode Linux and Beowulf don't even come close.
Really, you should be asking why obsolete technologies such as the bus are still used in PCs, and why PC technology lags so far behind "real" computers.
Re:Why "dinosaur"? (Score:3, Interesting)
What? That big, expensive thing doesn't even have USB ports? Can I watch DVD movies on it? No? What good is it then?
The submitter of the article had a condescending attitude about mainframes, almost like he was begging the question of whether mainframes should exist anymore.
aka 'real computing' (Score:4, Interesting)
The IT industry has moved on, but these sorts of companies are very stuck in an 'if it ain't broke, don't fix it' attitude (especially banks).
Whatever the reason (technically valid or not), the managers of these dinosaurs can't picture their 100,000 sessions, or whatever it is, running at all - even if their hugely custom software would run at all - on a huge cluster of cheap PC servers (oh look, we're back to a mainframe again!).
I think I'll be getting my power, insurance, phone bills, bank statements and car registration bills generated by one of these old machines for a very, very long time to come.
In SOVIET RUSSIA (Score:3, Funny)
The world simply RUNS on mainframes (Score:2, Interesting)
When an hour of downtime would cost you millions of dollars, no question about it: you get a mainframe.
For the ones who don't read the article, a quick excerpt so you know what kind of availability we are talking about:
"[...] today's [mainframe] systems [are] so reliable that it is extremely rare to hear of any hardware related system outage. There is such an extremely high level of redundancy and error checking in these systems that there are very few scenarios, short of a Vogon Constructor fleet flying through your datacenter, which can cause a system outage. Each CPU die contains two complete execution pipelines that execute each instruction simultaneously. If the results of the two pipelines are not identical, the CPU state is regressed, and the instruction retried. If the retry again fails, the original CPU state is saved, and a spare CPU is activated and loaded with the saved state data. This CPU now resumes the work that was being performed by the failed chip. Memory chips, memory busses, I/O channels, power supplies, etc. all are either redundant in design, or have corresponding spares which can be can be put into use dynamically. Some of these failures may cause some marginal loss in performance, but they will not cause the failure of any unit of work in the system."
We tossed the same thoughts around at work... (Score:5, Insightful)
I'm sure the humblest x86 can now run rings around old PDP-11 and IBM 360 systems, but it's still amazing how fast some parts of those old machines were, including core memory and swap disks.
Re:We tossed the same thoughts around at work... (Score:2)
At work we have a few hundred SPARC-20s (modified, 1 CPU), supporting thousands of calls at a time and keeping track of each packet for billing.
The CPUs might be slow - really slow compared to 3GHz P4s - but they do the job just as well as the day they came out, all those years ago.
They will neve die here is why (Score:5, Insightful)
Re:They will neve die here is why (Score:2)
A GUI is sometimes unavoidable. Sometimes you need the extra flexibility (i.e., to be able to put arbitrary dots on the screen, as opposed to having to pick them up in Tetris-like fashion from the character set (palette?)).
GUI and terminal are complementary (for example, I am better off having 6 terms under a GUI system than having only 1 terminal at a time).
Re:They will neve die here is why (Score:5, Insightful)
A couple of weeks ago I had the unpleasant experience of going to the dentist four times in ten days. (Slashdotters note: this is what happens when you avoid going to the dentist for three years.) However, whilst sitting in the waiting room in terror over the prospect of being assigned the newbie of the two dentists, I observed a curious phenomenon in progress:
I was a little bit surprised when I noticed that this system wasn't made of Web forms -- though the systems on the desk were Wintel PCs, they weren't running Internet Explorer. Nor were they running a GUI front-end to a database, some PowerBuilder or MS Access widget conglomeration. No, the application running on those PCs was ... an IBM 3270 emulator.
"There you go. Now move down to 10:00 ... now F10 that ... and hit F6 to print."
From the dialogue between the two receptionists, I could tell several things about this application. First off, it certainly required and expected a certain amount of training to use. To submit a form to the mainframe (located at a distant data center) required hitting F10, not clicking on a "Submit" button. There was no concession here to being "intuitive" -- the trainee simply had to learn that F10 means "submit form".
Yet this was consistent -- F10 always meant "submit form", at every stage of the workflow. (So much so that the elder had made "F10" into a verb, as you may have noticed above, meaning "to submit form".) No unexpected dialog boxes came up with panicky but unnecessary messages, needing to be clicked away. The application's behavior created a consistent, predictable, learnable workflow. The elder receptionist spoke with complete confidence about the system's behavior, though she was certainly not an "IT person" -- in however many years she had been using it, I suspect it had never failed her once. This was not an application that she expected might crash or do something stupid and eat an appointment. Nor had it been "upgraded" three times in the past year to a version with fancier and completely unrecognizable widgets.
Now, I work in IT. I spend all day with Unix, Windows, and Mac users. I also make a point of observing people's interactions with other data systems -- Windows-based supermarket cash registers, handheld card scanners at conferences, information kiosks at tourist attractions, and so forth. Rarely if ever do I hear the sort of quiet confidence in the computer's behavior which I've observed in end-users of mainframe applications.
This is not "computer as irascible demon, seeking to lash out at its summoner," like Windows. It isn't "computer as consistent and friendly but sometimes fumble-fingered servant," like the Mac OS. And it certainly isn't "computer as Necronomicon," like Unix.
It just works. So of course its users depend on it.
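The consistency the parent describes -- one key, one meaning, on every screen -- is the whole trick, and it can be sketched in a few lines. Everything below (the screen names, the handler table) is invented for illustration; real 3270 applications do this with AID keys and CICS screen maps, not Python.

```python
# A sketch of a 3270-style keymap: function keys carry the same meaning
# on every screen, so users build muscle memory. All names are hypothetical.

# One global keymap shared by every screen in the application.
GLOBAL_KEYS = {
    "F10": "submit_form",   # F10 always submits, on every screen
    "F6":  "print_screen",  # F6 always prints
    "F3":  "exit",          # F3 always backs out
}

def dispatch(screen_name, key):
    """Resolve a key press to an action; the answer never depends on the screen."""
    action = GLOBAL_KEYS.get(key)
    if action is None:
        return f"{screen_name}: unassigned key {key}"
    return f"{screen_name}: {action}"

# The same key does the same thing whether you're booking or billing.
print(dispatch("appointments", "F10"))  # appointments: submit_form
print(dispatch("billing", "F10"))       # billing: submit_form
```

Contrast this with a GUI where every dialog lays out its own buttons: the mapping from intent to action has to be rediscovered on each screen instead of learned once.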
Re:They will neve die here is why (Score:3, Interesting)
BTW, the keystrokes for WordPerfect for DOS were taken partly from old mainframe conventions (I've been told that's why F7 is "Exit" in WP and many other apps).
Re:They will never die, here is why (Score:3, Insightful)
Since mainframers culturally think in terms of building pyramids and the smaller machine cultures strike me as building strip shopping centers, it shouldn't surprise you but there is no reason you couldn't be as consistent with the mammal machines.
Re:They will never die, here is why (Score:2)
Phone IBM. Tell them you want to buy a mainframe. Tell them you want Linux LPARs. Ask them how many concurrent users you can serve with X11. Mainframes do batch processing. Everything else is either a hack or a wrapper for batch processing. The IBM site has an interesting redbook on the subject.
Several years ago.... (Score:3, Interesting)
Productivity went right in the toilet. The users, some of whom had only been in the department for weeks, and others who had been there for years, could not use the new GUI with any degree of efficiency. Long-time coders and CLI users know how important it is to keep the hands on the keyboard. Data entry people, among others, have the same requirement.
Several versions of the new software have gone by, and the GUI has been modified over and over, each time becoming more keyboard friendly. But in the meantime, the department has wasted many dollars in training, re-training, development, and paying overtime on the weekends to catch up on the data entry tasks.
A good terminal application, with well-designed screens and a key-oriented approach was thrown out for the latest, lickable interface. Just because it's old, or ugly, or doesn't drag-n-drop, doesn't make it obsolete.
Won't die huh? (Score:5, Funny)
Re:Won't die huh? (Score:5, Interesting)
And from the article:
The total I/O throughput capacity of the current z900 mainframes is no less than 24GB (that's bytes, not bits) per second. I have not personally had the opportunity to benchmark the performance on these latest systems, but while theoretical numbers can sometimes be misleading, I wouldn't be at all surprised to see a z900 performing as many as 100,000 I/O operations per second.
Immovable object, irresistible force, anyone?
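One back-of-envelope way to appreciate the quoted figures: dividing the aggregate throughput by the guessed operation rate gives the average payload per I/O. This is arithmetic on the article's numbers, not a benchmark.

```python
# Back-of-envelope check on the article's z900 figures (not a benchmark).
total_throughput = 24 * 10**9     # 24 GB/s aggregate I/O, per the article
iops = 100_000                    # the author's guess at I/O operations/sec

bytes_per_io = total_throughput / iops
print(f"{bytes_per_io / 1024:.0f} KiB per I/O on average")  # ~234 KiB
```

In other words, the machine could move nearly a quarter-megabyte on every one of 100,000 operations per second -- which is why the channel subsystem, not the CPU, is the headline feature.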
Hey! (Score:2, Funny)
***ActiveSX files a patent on "Imagine a Parallel Sysplex of those" posts.
Why "Hitchhiker's guide"? (Score:5, Funny)
Four decades of years ago a group of hyperjobless pantemporal employees at IBM got so fed up with the constant calls for tech support from moronic users... that they decided to sit down and solve their problems once and for all.
And to this end they built themselves and the world a stupendous supercomputer encased in a very large steel framed box the size of a small city. It was so amazingly intelligent that as soon as its DSADs had been connected up it started from I think therefore I am and managed to deduce the existence of P2P and the great wiki before anyone managed to turn it off.
On the day of the great turning-on, it said: "What is this great task for which I, the Mainframe, the second greatest computer in the Universe of Time and Space, have been called into existence?"
"The second ? There must be some mistake," said the programmer. "are you not a greater computer than the great Echelon at NSA which can predict acts of terrorism a year ahead in a picosecond?".
"The Echelon" said the Mainframe with unconcealed contempt. "A mere abacus - mention it not."
"What computer is this of which you speak?" he asked.
"The greatest computer in the universe", answered the Mainframe after seven and a half years of contemplation, "is the Beowulf".
A new use for mainframes -- virtual machines (Score:5, Interesting)
This technology seems quite promising for data centers, etc, and will probably ensure the mainframe stays around for a long time to come.
Apples and Oranges (Score:4, Interesting)
Well, maybe, but probably not as much as you think. And in the end, with thousands of PCs you have thousands of little headaches. With the mainframe you have one big headache, which you pay somebody else to have.
Consider the following scenarios:
Scenario 1: User needs upgrade to memory and disk space.
PC solution: Order new disk and memory. Dispatch trained monkey to install them and hope he doesn't screw up.
Mainframe solution: enter a few commands telling the mainframe to grant more resources to the virtual server.
Scenario 2: Group needs a new server.
PC solution: Order new server. Unpack and install. Dispatch trained monkey to install and/or configure the OS. Figure out the best way to patch it into your network and dispatch trained monkey to do so. Integrate it with your network backup scheme and test it to make sure it's working.
Mainframe solution: Select one of several preconfigured disk images (will you need Postgres or MySQL? Apache? SMB?) and tell the mainframe to create a new virtual server using that image.
Scenario 3: Computer user reports hardware failure on his server
PC solution: Dispatch trained hardware monkey to swap parts. Dispatch trained sysadmin monkey to make sure everything is OK.
Mainframe solution: none needed. The hardware doesn't fail. You made provision for power backup, lightning, earthquake and flood protection, backups for your datacenter, and you don't have to keep revisiting the problem.
Re:A new use for mainframes -- virtual machines (Score:3, Insightful)
- how many virtual power supplies are going to need to be replaced?
- how many square feet of real estate are going to be used up by their virtual racks?
- how many miles of virtual cable and/or fiber are going to be laid underneath the clean room floor?
- how much does an idle virtual server cost to operate during off-peak hours? (OK, it's a trick question. When you can dial up new images on-demand, you need never run more virtual servers than are absolutely required.)
- when capacity planning finally admits their estimates were 20% below actual usage, how much will it cost (in both time and money) to dial up another 20 virtual servers to meet the workload?
- how many virtual servers will receive an automatic 'upgrade' when the host box gets a performance boost?
It's funny how people say "Linux is great for legacy hardware" when talking about their $500 486SX, but not for a $500k S/370.
Re:A new use for mainframes -- virtual machines (Score:3, Insightful)
Bizarre idea...
The company that I work for has a massive data center in the Phoenix area. Over 100,000 square feet of space to accommodate thousands of Unix and Windows machines, as well as our mainframe systems. The building houses only 125 of the hundreds of employees involved with the support of the machines; the rest of the workers are in another building a few blocks away. Several miles away, we have a redundant data center -- same size and same number of machines -- sitting idle with only a few employees working there (mostly security guards).
Consider for a moment the huge facilities cost of cooling 200,000 square feet of raised floor during the summer months in Arizona. Or the cost of electricity for thousands of servers. And don't forget the cost of general maintenance of such large buildings. Sounds expensive, doesn't it? Might be a pretty massive cost savings if we could eliminate a significant number of those Unix and Windows machines and move to a data center a quarter the size.
Even if we ignore the facilities, there is money to save elsewhere, by looking at the actual usage patterns of our hardware. Every night, the mainframes sling terabytes of data in massive batch jobs. But during the day they mostly sit idle. The Windows and Unix boxes show the exact opposite usage: busy days, idle nights. Why use the mainframe for Linux stuff? Why the hell not! Who cares how much it cost up front, that's a sunk cost. Which is more expensive and inefficient: use the hardware for Linux emulation; or let it sit idle throughout the day?
The number of enterprise Linux applications is minuscule in comparison to those available on Solaris, HP-UX, and AIX, so you're likely to be developing them in-house. Why bother? If you're spending that sort of cash on the hardware, I'm sure you can afford some decent software.
For a small 50 - 100 person company this may be the case. But look at a company like Merrill-Lynch: tens of thousands of employees with an annual IT budget that is measured in hundreds of millions of dollars. And they have embraced Linux.
When a vendor walks in the door at M-L, looking at a $15 million a year licensing deal, they listen to the customer's needs. And if the customer wants the product to run on Linux, then you can bet your ass they will make it happen. Vendors that don't offer a Linux version are fast becoming the exception.
True Shocking Mainframe Stories (Score:5, Funny)
So a couple months ago I went to apply for a new library card (haven't used the system in like 10 years). When I turned in my application, the Librarian ran my info through the system and informed me that I had an eight dollar overdue book fine outstanding from 1987. Ouch. Place was pretty crowded, too, she could've said it in a quieter tone of voice...
Re:True Shocking Mainframe Stories (Score:2, Funny)
Think that is painful, wait until they bill you for the interest and the cost of carrying that info for all these years.
Another MF story: I worked temporarily at this gov place that had a mainframe. I once overheard the mainframe manager complain that revenues for computer time were down when they upgraded the machine because it could do more per slice of time. He actually decided to add a multiplier to the billed CPU time so that the revenue was the same. IOW, the clients (internal) were not going to get any savings from the newer technology. Sneaky.
How do non-mainframes track computer usage for billing, BTW?
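The "revenue-neutral multiplier" trick described above is easy to model. The function, rates, and numbers here are all invented for illustration; real mainframe chargeback systems pull CPU figures from the OS accounting records (SMF on MVS; on Unix, process accounting via `sa`/`acct` serves a similar role).

```python
# Sketch of CPU-time chargeback with a hidden speed multiplier.
# All rates and figures below are hypothetical.

def bill(cpu_seconds, rate_per_sec, multiplier=1.0):
    """Charge for CPU time; the multiplier hides hardware speedups."""
    return cpu_seconds * multiplier * rate_per_sec

# Old machine: a job takes 100 CPU-seconds at $0.50/s.
old_charge = bill(100, 0.50)

# New machine is 2x faster, so the same job uses 50 CPU-seconds.
# A 2.0 multiplier keeps the bill identical -- no savings passed on.
new_charge = bill(50, 0.50, multiplier=2.0)

print(old_charge, new_charge)  # 50.0 50.0
```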
What will kill the dinosaurs this time? (Score:5, Funny)
Perhaps a punch card virus... Then again, perhaps it will be when the smartest people in the world succumb to the growing ideal of technology for technology's sake.
Mainframe power - the reality (Score:2, Interesting)
The signs had numbers like 20, 43, sometimes as high as 60. The employees were especially proud of the 60s, explaining that each one cost more than 1 million dollars.
At first I assumed I must not have understood. I asked whether MIPS really stood for millions of instructions per second. They said yes. Then I asked what kind of instructions they meant: things like add, load, etc? Yes.
Finally I pointed out that my (at that time) $4000 dual 200 MHz Pentium Pro was rated at much more than 60 MIPS. I don't think they quite comprehended this.
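The raw cost-per-MIPS gap in that anecdote is stark, though (as a reply below points out) MIPS is a poor cross-architecture yardstick. The crude "1 MIPS per MHz" figure for the Pentium Pro is an assumption for illustration only.

```python
# Rough cost-per-MIPS comparison from the anecdote above (late-1990s prices).
mainframe_cost, mainframe_mips = 1_000_000, 60   # a "60" sign, >$1M each
pc_cost, pc_mips = 4_000, 2 * 200                # dual 200 MHz Pentium Pro,
                                                 # crudely assuming ~1 MIPS/MHz

print(f"mainframe: ${mainframe_cost / mainframe_mips:,.0f} per MIPS")
print(f"PC:        ${pc_cost / pc_mips:,.0f} per MIPS")
```

By this (unfair) measure the PC is three orders of magnitude cheaper per instruction -- which is exactly why the comparison misleads: the mainframe's money goes into I/O and reliability, not raw instruction rate.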
By now every travel reservation system is ditching mainframes as fast as they possibly can and replacing them with racks of PCs or medium-end Unix workstations. By spending 1/50th as much money they get orders of magnitude more useful computation: those nice low-fare-searches you see on Orbitz and Expedia run on PCs, not mainframes. I've been in all the other travel reservation systems complexes since my 1998 visit and more and more you find little stacks of cheap "low end" machines doing the heavy lifting.
The reliability claims for mainframes are very deceptive. Yes, the computers stay up. But the software has bugs just like any software, and data lines go down, and the mainframes start dropping transactions left and right when they're overloaded. DASDs are multiported but top out at some low number, just as any multiported device does, so mainframe-based databases often can't be extended beyond some point because the database drives simply can't be connected to any more machines. In the PC world we'd buy more machines and drives and maybe live with a little data incoherency, but in the mainframe world eventually things just die, because the hardware was built for everything but cheapness and power.
The general mainframe design is essentially targeted at the application profile of a static-page webserver. Simple programs, quick data access and throughput, no computation. They are utterly unsuitable for any computationally demanding task.
Re:Mainframe power - the reality (Score:2, Informative)
This is simply not true. I work at a company that uses 390 mainframes and TPF [ibm.com]
to handle travel reservations for airlines. When you use Orbitz or Expedia you are using a pretty front end that gets all of its data from the mainframe.
There have been some systems that offload stuff from the mainframe. Notably, Orbitz stores fares because it can apply its own search algorithms and find fares for more esoteric travel itineraries than can be done on the mainframe, and it can do fare searches faster and cheaper. Where does Orbitz get its fare data? From the mainframe, where it is still generated and updated. Orbitz simply caches that data and updates its cache on a regular basis. From everything I've seen, more new applications and sub-systems have been hooked to the mainframe for data than have been moved off it.
Re:Mainframe power - the reality (Score:5, Insightful)
MIPS doesn't stand for "million instructions per second." It stands for "Meaningless Indicator of Processor Speed." IBM never liked publishing benchmarks for mainframes because they don't tell the whole story.
Mainframes don't run one application. They run thousands at the same time. I/O requests, CPU, and device contention are just a few of the many factors in a machine's speed. Just look at your PC: if you get the fastest dual Pentium, that only tells you the CPU speed. Put in a slow hard drive and a 2 MB video card, and even the fastest PC will seem slow. Mainframes are the same way, so IBM has always been reluctant to publish numbers, because businesses would scream.
As for the software being buggy you are exactly right. The difference is that some of that software has had 20-30 years to work out the bugs.
And finally, yes, you are correct in saying that computationally demanding tasks using floating point multiplication and division don't perform well on the mainframe. Most businesses don't need to compute PI, so it was never a priority to IBM. Floating point addition & subtraction are very very fast if you write your application correctly.
The really sad thing that holds processor speed back on the mainframes is the software licensing. On a mainframe, the faster the machine, the more your software costs. This made it possible for smaller companies to buy a little mainframe, while the big customers pay the most. It also means you never buy a bigger machine than you need, because the software license costs keep climbing and no business wastes money.
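That pricing dynamic -- software billed by machine capacity rather than by actual use -- can be sketched in a couple of lines. The rate is invented; real mainframe licensing uses negotiated capacity tiers per product, but the shape of the incentive is the same.

```python
# Sketch of capacity-based software pricing (rate is hypothetical; real
# mainframe licenses use negotiated MIPS/capacity tiers per product).

def annual_license(machine_mips, rate_per_mips=1_000):
    """License cost scales with machine capacity, not with actual use."""
    return machine_mips * rate_per_mips

# Doubling the hardware doubles every software bill running on it -- which
# is why shops size machines to the workload and not one MIPS larger.
print(annual_license(100))   # 100000
print(annual_license(200))   # 200000
```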
Re:Mainframe power - the reality (Score:5, Insightful)
Of course, if you want to be "realistic" you'll have to use 128 gigabit Ethernet interfaces, since the maximum realized bandwidth on a full-duplex circuit is around 1.5 Gbps.
Oh... what's that? Your bus can't even handle the full bandwidth of a single Gigabit ethernet interface? Well, then I suppose your I/O is going to royally suck in comparison.
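The "128 interfaces" figure falls out of matching the z900's quoted 24 GB/s aggregate I/O against what a full-duplex gigabit link actually delivers:

```python
# Where "128 interfaces" comes from: matching the z900's quoted 24 GB/s
# aggregate I/O with realistically loaded full-duplex gigabit Ethernet.
mainframe_io_gbit = 24 * 8      # 24 GB/s expressed in Gbit/s = 192 Gbit/s
realized_per_link = 1.5         # Gbit/s realized per full-duplex GigE link

links_needed = mainframe_io_gbit / realized_per_link
print(f"{links_needed:.0f} gigabit links")  # 128 gigabit links
```

And as the parent notes, the PC-side problem isn't just the NIC count: a PCI-era bus can't even drain one gigabit interface at full rate.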
Oh, and let's not even get on the topic of reliability... PCs just aren't. I'm a PC guy (I shudder at the thought of having to deal with mainframes), but I know their limitations. And while you're dead wrong about travel reservation systems running on PC clusters (they don't - the entire backend system is still on mainframes), whoop de doo if it was run on PCs. This isn't something where a node going down would cause major problems.
If a node goes down on the air traffic control system, however, you can damn well bet there's problems. Big ones. Weighing several hundred tons, moving at a few hundred miles an hour, and disinclined to stay aloft while you take a few hours to get the system back up.
maybe live with a little data incoherency
Yes... a little data incoherency is no big deal. I'm sure the power grid will work just fine with a "little" incoherency. You don't mind a power plant (be it coal, nuke, whatever) having a massive cascade failure every couple years, right?
I have absolutely no desire to ever work on mainframes -- the software in place is largely old and crufty, but by god it works. The hardware isn't old crap either -- you can buy new machines that will run the old software perfectly. And have capabilities that us PC weenies can't even comprehend. You realize that virtually every advance in the PC industry was tested and proven in the mainframe world first, right?
good grief (Score:2, Informative)
legacy apps are not the reason mainframes hang around. legacy apps last because of the incredible ease of centralized management on mainframes.
gone are the days of the dumb mainframe terminal, also. modern mainframes offer advanced graphics and windowed desktops. more often than not, the modern mainframe terminal is a low end pc with attached host print emulation.
increased miniaturization only makes for better mainframes. modern mainframes are just well put together microprocessor clusters.
mainframes make killer webservers: cheaper, faster, more reliable, smaller footprint, and easier to maintain than huge farms of pc servers.
please.
Until very recently....... (Score:2)
I reckon that Linux will start to replace these mainframes in the future.... Linux is becoming a standard for server OSes.... IBM's line of iron is already running it [ibm.com]
Tony.
IBM and the Rest Of The World (Score:4, Funny)
I remember it used to be a cliche that "No-one ever got fired for buying IBM". Trouble is, I knew one IT manager in London who did get fired for doing just that at a Burroughs site.
I think you are all missing the point (Score:5, Funny)
But for those of you that still don't get it, here is a guide for the layperson:
It might be a mainframe if...
If you could kill someone by tipping it over on them, it might be a mainframe.
If the only "mouse" it has is the one living inside it, it might be a mainframe.
If you need earth-moving equipment to relocate it, it might be a mainframe.
If you've ever lost an oscilloscope inside of it, it might be a mainframe.
If it's big enough to be used as an apartment, it might be a mainframe.
If it has ever had a card-punch designed for it, it might be a mainframe.
If it weighs more than an RV, it might be a mainframe.
If lights in the neighborhood dim when it's powered up, it might be a mainframe.
If it arrived in its own moving van, it might be a mainframe.
If its disk platters are big enough to cook pizzas on, it might be a mainframe.
If Michael Jordan would need his entire annual salary to buy one, it might be a mainframe.
If keeping all of the manuals together creates a fire hazard, it might be a mainframe.
If it's so large that a dropped pen will slowly orbit it, it might be a mainframe.
If it's ever been mistaken for a refrigerator (or if the disk drive has ever been mistaken for a washing machine), it might be a mainframe.
If anyone has ever frozen to death in the room where it's kept, it might be a mainframe.
If it has a power supply that's bigger than your car, it might be a mainframe.
If it has its own postal code, it might be a mainframe.
If the operators considered the addition of COBOL to be an upgrade, it might be a mainframe.
If it was designed before you were born, it might be a mainframe.
If its main power cable is thicker than your neck, it might be a mainframe.
If the designers have since died from old age, it might be a mainframe.
Re:I think you are all missing the point (Score:5, Interesting)
That's a lot.
Your CPU-RAM bus on your PC has less throughput (DDR-266 SDRAM is ca. 2.1 GB/s), and your CPU-HD path (via DMA to RAM) is a not-very-funny joke compared to it.
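The ca. 2.1 GB/s figure for DDR-266 is just the bus width times the effective transfer rate:

```python
# Where the "ca. 2.1 GB/s" figure for DDR-266 comes from.
bus_width_bytes = 8        # 64-bit memory bus
effective_mhz = 266        # DDR-266: 133 MHz clock, two transfers per cycle

bandwidth = bus_width_bytes * effective_mhz * 10**6  # bytes per second
print(f"{bandwidth / 10**9:.1f} GB/s")  # 2.1 GB/s
```

So a single PC's entire memory bus carries less than a tenth of the z900's quoted aggregate I/O bandwidth.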
A cluster for similar throughput would hit the lightbulb problem (admin-monkeys running round swapping out burnt out PeeCees left-right-and-centre).
MAINFRAMES SHOVEL SO MUCH DATA IT'S NOT FUNNY.
And now Linux can run on them.
Be afraid.
"Mainframe" (Score:3, Informative)
I have documentary evidence from the dawn of microcomputing to prove it. It was the Main Frame of the computer, to which one attached Peripherals. Microcomputers just had very small Main Frames.
And this is why they will die... (Score:4, Insightful)
Re:And this is why they will die... (Score:2, Insightful)
I have Java programmers who whine for us to get a Linux LPAR, but when I try to talk to them about things such as filesystems, or anything else which is fairly universal in the world of computers, they are clueless, which shows they don't even know their beloved Linux (I love Linux, by the way).
So, is it the frozen mindset of the programmers which is to blame, or the cads who are teaching them?
And, c'mon... COBOL is EASY. Java has a much steeper initial learning curve.
And COBOL is faster.
I'm thick as a whale omelette.
Re:And this is why they will die... (Score:4, Insightful)
While I get to play with Oracle, Apache, Java, etc., the group I work with is only 10 people, whereas not 10 feet away from us is one of the many groups of mainframe-only developers.
They have their 3270 emulators, program in COBOL, do some JCL, and there are a couple of hundred of them. Quite a number of them are under 30 (although there are also quite a few over 50).
A lot of these mainframers here are on contract from a few main agencies. These people are full-time employees of the agencies - places like EDS.
They're not dying out, because if they lose one, then EDS finds another monkey, trains it for a few months on JCL and COBOL, and then puts it out on contract rates.
There seems to be a never-ending supply of these monkeys, who exchange their lives for a boring, stable, if not well-paid, job.
--
Still being bought... (Score:5, Interesting)
People are still buying the new mainframes and AS/400s (which should be lumped in), especially now that they run Java and new technologies.
Why? Because of the support staff you require to run one. Is Unix harder than Windows 2000? Are the people cheaper? With these beasts it's a moot question, because YOU WON'T EMPLOY A SYSTEMS ADMIN for your server. You will outsource all of that to IBM, and they will make sure it works.
My favourite on this is being in a place with around 20 mainframes and AS/400s that had been asked to consider standardising on Windows going forward. The IT manager's challenge to the sales guy was "How often does your stuff fail?", to which the sales guy asked, "Well, when was the last time you had an expensive maintenance job on these servers?"
The reply was that 4 years previously an IBM engineer had called to arrange a time to visit to replace a disk from the server which might fail soon. 2 years before that one had phoned to arrange a time to replace a processor board which was not performing correctly.
2 incidents on 20 machines in 10 years.
They elected not to move to Windows for infrastructure.
Then along came Java and suddenly you can buy these ultra-reliable boxes to run all of your newest and brightest applications.
Unix might whup windows, but OS/390 is Lennox Lewis standing at the back of the room with Ali smiling while they watch the little boys fight.
Re:Still being bought... (Score:3, Insightful)
And note that IBM called the system managers, not the other way around. The hardware notified IBM that maintenance was needed.
hardware reliability doesn't matter... (Score:4, Insightful)
Hardware design always has been (and probably always will be) WAY out in front of software design, and yet people are all too willing to spend the odd extra million on hardware while putting as little effort into software as possible.
In most companies they are clutching obsolete applications like life preservers, when in reality they are anchors.
I'm guessing you're a software developer :) (Score:4, Insightful)
God knows you're right! When I worked at very-large-retailer-to-be-unnamed in the IT department I was floored by how much crappy software they had built on top of their hardware. I can't remember how many times I thought, "Why not just use CVS?" or "Why do we have to use this thing?"
First, if you replace something that's working, even if it's working extremely inefficiently, it might break. The perception of something breaking is about one trillion times worse to the PHBs and the execs than the perception of something working extremely inefficiently, especially in a retail management mindset.
Second, especially if you have legions of data-entry people trained to use the extremely inefficient software, then the cost to replace and retrain is higher in the short term than the cost of staying with the extremely inefficient system. PHBs and execs, especially in a retail mindset, can't think about long-term cost savings in IT because IT is already a "cost center," not a "profit center."
In short, two reasons for bone-headed software in the enterprise: perception and cost. Mainly perception.
Mainframes VS web servers. (Score:2, Insightful)
While some companies have poured cash down the drain in order to use the latest buzzword technology, smart companies use mainframes with COBOL/CICS/DB2. Train your people once and only once.
What do webservers provide over this combination aside from pretty graphics? Not much. HTML-based apps are the rich man's CICS. Granted, it isn't a glamorous career, but it is a VERY effective technology that is rock solid. Programmers that do PC work can't imagine working on the mainframe. But it is very efficient.
The tech world has come full circle. Client/server was hot for a while, but it's very hard to keep the clients up to date in a large organization, and it requires bandwidth of the GODS to transfer all the data around. Oh, let's go to web services. Okay. Now we are back to the mainframe model. The centralized server model is basically this: (web servers) = (mainframe).
The key characteristic of mainframes (Score:3, Interesting)
BTW, LPAR is just VM running in firmware. It allowed IBM to sell the advantages of VM (testing) to MVS customers who didn't want to "run" another operating system.
Outdated - maybe (Score:2)
Don't you just love mainframe emulators as well?
Nothing Wrong with COBOL and Mainframes (Score:3, Interesting)
A comfort zone is important to large, monolithic organizations. What works, works. Why change the old and reliable for something new and untried?
Some of my best friends make their living writing COBOL for mainframes; attempts by their agencies or companies to move to "new" technologies have been costly in both time and resources. If a green bar report provides all the information an accountant needs, why rewrite the system to use fancy HTML output that adds nothing but pretty colors? If anything, many web based systems reduce the amount of information available to make room for lots of unproductive frippery.
I spent the first 10 years of my professional career in COBOL on mainframes and minis -- CDC Cybers, VAX clusters, Honeywells -- doing some pretty boring stuff. I moved into PC programming 15 years ago, and I prefer it for a number of reasons -- but I'm not blind to the realities of the bleeding edge and the stupidity of modern PC software design.
Mainframe applications tend to accomplish very basic tasks in a simple way; even 10 million line COBOL apps are pretty straightforward. The focus is on reliability and accuracy, not buzzwords. PC developers have an almost pathological lust for the bleeding edge -- which gives us pretty but buggy applications.
On the PC, amid an embarrassment of riches, with more languages and tools than we can enumerate, we constantly throw out the old to chase the new. Windows would be as reliable as a mainframe OS if Microsoft spent more time on QA and less on figuring out how to make curved corners on plastic-looking window borders.
The people are dinosaurs, too (Score:3, Insightful)
Recently we added the ability for the students to pay their bills online via the web, taking a bold step into 1998, albeit four years late. In fact, we mainly did it because another university in this state (the bigger one) did it, and we didn't want to look like we were behind. The software to do this literally just adds more layers to the mainframe process. That was easier than moving to a new system. While the seasoned web pro got to use ASP.NET and C#, I'm sitting here at the age of 25 writing COBOL from scratch to be able to post the transactions he captures. That the process is disconnected and difficult to keep in sync, no one seems to mind.
They say that we're getting a new, web-based system, "in about six months". I'm still not sure if this means no more mainframe, but apparently the project has been six months away for about two years now.
My coworkers fall into three categories - people younger than me who are still in school and are getting the heck out of here when they graduate, people my age who are married (like me) but they have kids and are completely stuck here, and people who are much older than me. One of my coworkers is literally a grandmother who codes COBOL and hates computers.
And that's really the big problem. I'm sure COBOL and Natural (a pseudo-scripting language for the ADABAS databases we use) are fine languages, but you'd never know it by the way they're used here. I received no training once I got here - I was literally thrown in with a vague promise of further training, only to have the promiser go on to a better job. I was able to swim and get promoted within fifteen months.
People here aren't concerned with keeping their skillset up to date, they're more concerned with getting their kids to little league practice. The guy across the room from me is trying like hell to get a better job, but he's 56, divorced, in hellacious debt, he knows one thing (COBOL), and he steadfastly refuses to learn anything else. He's like the guy with a hammer who sees everything as a nail. He regularly gets turned down for jobs he's perfect for in favor of young, know nothing punks (like me).
A few months back (for some reason) they gave us VB.NET training. While everyone in the room looked terrified of object-oriented programming, I was making shit dance across the screen and rewriting everything in C# for kicks. That we're an 80% conservative university that's terrified of change doesn't help things either. My coworkers are mostly more concerned with keeping the new stuff out so that they don't have to learn anything new before they retire.
Now, I'm not saying that Mainframes are evil or that people's natural desire to stay the same is dragging anything down, but part of the reason Mainframes are still around is due to a complete reluctance to upgrade. Sure, at some point it will become inevitable, but most of my co-workers are ready and willing to put that off until after they retire.
And I'm not saying that everything should always be rewritten in the "flavor of the month" language to run on the "hardware platform of the moment"; that's not practical either. I mainly think we're seeing the results of a generation and a mentality that started at the low end of the Moore's Law curve and attacked it like any other job. People here don't see programming as a passion, but as the thing they do until they go home (not unlike people who sell radio air time or something trivial like that).
As for me, I'm getting out of here as soon as I can.
Re:Ever wonder... (Score:2, Insightful)
He probably got tired of posting the only original stories on Slashdot, only to be rewarded with "STFU Jon" and other such bitchiness. I thought he had some interesting points, even if his opinions were usually different from mine.
And in response to your impending queries: No, I am not Jon Katz. I'm just posting anonymously because I know my opinion here doesn't match everyone else's exactly and I'd rather not take the karma hit. Oh, yeah, and I'm wandering off topic too.
You bet that legacy plays a role. (Score:5, Interesting)
I used to do client/server programming at a health care provider that employed over 20,000 people. The few apps that used Oracle were completely insignificant - EVERYTHING was on the AS/400. And they had a lot of AS/400's. In fact, they were buying MORE AS/400's. They were even planning on spending millions of dollars on a few very large AS/400's to replace several of the smaller AS/400's.
Why in the world would they still be using something so ancient? Legacy, man. "Back in the day", they started using AS/400's, and since everything was running on them, they just kept getting more and more of them. I'm sure that they're not the only ones that keep pumping millions of dollars per year into "Big Blue"'s coffers just because the idea of switching over is too daunting.
Of course, at the company I presently work for, we've done all of our CGI programming in Perl. We haven't found any reason to switch to anything else, and likely never will - but even if we did, we still probably wouldn't. It's taken YEARS of our entire programming team working like feral weasels to produce the code we have. Just picking it up and migrating would take at least as long. If you look at the number of programmers, taking 4 years of their time to reprogram everything would cost us nearly a million dollars. The scary part? A million bucks is NOTHING compared to the market share we'd lose if we just took 4 years off from improving our product.
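The back-of-the-envelope math behind that "nearly a million dollars" can be made concrete. The post gives neither the team size nor the salaries, so the figures below are purely illustrative assumptions, not numbers from the original:

```python
# Hypothetical figures -- the post names neither team size nor pay.
programmers = 4
cost_per_programmer_per_year = 60_000  # assumed fully-loaded cost, USD
years = 4

# Total labor cost of a ground-up rewrite under those assumptions.
rewrite_cost = programmers * cost_per_programmer_per_year * years
print(rewrite_cost)  # 960000 -- "nearly a million dollars"
```

Any similar combination (say, five programmers at $50k) lands in the same ballpark, which is the poster's point: even the labor cost is dwarfed by four years of lost product improvement.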
Yeah, legacy has a lot more power than most people realize.
steve
Re:You bet that legacy plays a role. (Score:4, Informative)
AS/400 has been fully 64-bit for six years. AS/400's database has had working cost-based optimization forever (something Oracle still struggles with). AS/400 had mainframe-like LPAR before the mainframe did. AS/400 can scale to 24 CPUs and so much RAM and disk it would make your head swim. It's got dedicated I/O processors for handling disk, and in many cases it can out-benchmark a mainframe in sheer I/O capability. It's got a native Java runtime that maintains native executables without destroying the bytecode.
You are uninformed. The AS/400s sitting right beneath your nose are the most advanced servers in your company, hands down. Legacy certainly has power, but the AS/400 is no more ancient than Stonehenge is new.
Re:gah (Score:2, Troll)
Re:Card Readers (Score:3, Funny)
You set up a VM guest with the CP operating system, download your linux kernel, parmfile, and ramdisk images, chop them into 80 byte blocks (the old Hollerith card had 80 columns), feed them into a virtual card punch, then into a virtual reader, and ipl the reader. I about fell out of my chair.
It also made me laugh when my mainframe-guy lab partner complained about vi being archaic.
"Dude. You just chopped my kernel into 80 byte blocks and fed it into a virtual card reader... don't talk to me about archaic."
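The 80-byte chopping step described above - splitting a kernel image into fixed-length records sized to the 80-column Hollerith card before feeding it to z/VM's virtual punch - can be sketched in Python. The helper name and the NUL padding byte are illustrative assumptions, not part of z/VM's actual tooling:

```python
def to_card_deck(data: bytes, card_len: int = 80) -> list[bytes]:
    """Split a binary image into fixed-length 'card' records.

    Each record is exactly card_len bytes, mirroring the 80-column
    Hollerith card; the final record is padded with NUL bytes
    (padding choice is an assumption for this sketch).
    """
    deck = []
    for offset in range(0, len(data), card_len):
        card = data[offset:offset + card_len]
        deck.append(card.ljust(card_len, b"\x00"))
    return deck

# A 170-byte "kernel" becomes three 80-byte cards, the last one padded.
deck = to_card_deck(b"\x01" * 170)
```

The real workflow then spools kernel, parmfile, and ramdisk decks to the virtual reader and IPLs from it; the point of the sketch is just the fixed-record framing the card-image interface imposes.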