The Eternal Mainframe

theodp writes "In his latest essay, Rudolf Winestock argues that the movement to replace the mainframe has re-invented the mainframe, as well as the reason why people wanted to get rid of mainframes in the first place. 'The modern server farm looks like those first computer rooms,' Winestock writes. 'Row after row of metal frames (excuse me—racks) bearing computer modules in a room that's packed with cables and extra ventilation ducts. Just like mainframes. Server farms have multiple redundant CPUs, memory, disks, and network connections. Just like mainframes. The rooms that house these server farms are typically not open even to many people in the same organization, but only to dedicated operations teams. Just like mainframes.' And with terabytes of data sitting in servers begging to be monetized by business and scrutinized by government, Winestock warns that the New Boss is worse than the Old Boss. So, what does this mean for the future of fully functional, general purpose, standalone computers? 'Offline computer use frustrates the march of progress,' says Winestock. 'If offline use becomes uncommon, then the great and the good will ask: "What are [you] hiding? Are you making kiddie porn? Laundering money? Spreading hate? Do you want the terrorists to win?"'"

  • by mbone ( 558574 ) on Sunday April 21, 2013 @08:58AM (#43508939)

    He is wrong, on pretty much every level, even the visual.

  • Re:Deep (Score:5, Interesting)

    by Trepidity ( 597 ) <delirium-slashdot@@@hackish...org> on Sunday April 21, 2013 @09:23AM (#43509037)

    I agree these are all differences for a regular pile of VMs in a server room, but if you look at some of the more developed server farms, they do have a lot of the mainframe-like features, at least on the software side. Google, for example, has pretty full-featured job control layered on top of their server farm.

  • Giving up the dream (Score:3, Interesting)

    by Anonymous Coward on Sunday April 21, 2013 @09:35AM (#43509065)

    There was a time when we expected computers to become so easy that everyone could use them. We've given up that dream. Now it's all "managed" again. There are admins and users again, and the admins (or their bosses) decide what the users can do and how. Computing is no longer done with a device you own but a service that someone else provides to you. Yes, you still pay for a device, but that's merely an advanced terminal.

    I blame the users. If they bothered to learn even a little about how things work, they wouldn't give up their freedom so easily. The complacency is staggering. Even people whose job depends on being able to efficiently work with computers often perform repetitive tasks manually instead of learning how to use more of the program they're working with. Of course, with users like that, who refuse to learn how to use what capabilities are already at their disposal, there's a market for the simplest automation performed as a service.

  • by tarpitcod ( 822436 ) on Sunday April 21, 2013 @09:39AM (#43509085)

    Back in the earlier days of micros it was loads of fun. BYTE was a great read. People wrote their own stuff on their own hardware. There were really fascinating choices in CPUs: initially people were using 2650s, 8080s, 6502s, 6800s, LSI-11s, 1802s, and 9900s.

    I can't remember the last time someone actually asked something as outrageous as 'What architecture would be ideal?' Nowadays it's 'What software layer (implicitly running on x86 Linux boxes) should we use?'

    The performance numbers people talk about are terrible too. Kids who just graduated think 100K interrupts per second is 'good!' on a multi-GHz multicore processor. They have no context and don't understand how absolutely crappy that is, when even an 8031 running at 11 MHz with a /12 clock could pull off more than 20K interrupts per second with an ISR written in an HLL!
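
    To put rough numbers on that comparison, here is a back-of-the-envelope sketch; the 8031 figures come from the post above, while the 3 GHz modern clock is just an assumed round value, not a measurement.

        # Back-of-the-envelope interrupt budgets (8031 figures from the post above;
        # the 3 GHz modern clock is an assumed round number).

        mcs_8031 = 11_000_000 / 12            # 8031 machine cycles/sec with the /12 clock (~917K)
        budget_8031 = mcs_8031 / 20_000       # machine cycles available per interrupt at 20K/sec (~46)

        clk_modern = 3_000_000_000            # assumed 3 GHz core
        budget_modern = clk_modern / 100_000  # cycles available per interrupt at 100K/sec (30,000)

        print(f"8031:   ~{budget_8031:.0f} machine cycles per interrupt")
        print(f"modern: ~{budget_modern:.0f} cycles per interrupt "
              f"({budget_modern / budget_8031:.0f}x the cycle budget for a far lower rate per MHz)")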

  • by Bill_the_Engineer ( 772575 ) on Sunday April 21, 2013 @10:37AM (#43509307)

    Someone in the industry realizes that computing is really iterative and what's old will eventually become new again.

    I believe the origin of these periodic realizations is as follows (I intentionally say "jargon" rather than "technique", since the need to coin a new term never seems proportional to the actual change in implementation):

    1. A fresh college graduate gets hired at an I.T. farm, armed with a new set of computing jargon that impresses human resources.
    2. He applies his version of how things should work to the current workplace, and things progress well.
    3. Over the next few years the department grows and new hires are brought in to help meet demand.
    4. The new hires start preaching their own version of computing jargon, which was created by academia to publish a paper.
    5. The once-fresh graduate comes to the realization that the new computing jargon is practically a set of synonyms for the previous generation's jargon.
    6. The new hires proceed to step #1, and the circle of I.T. begins anew.

    The neat thing about this iterative process is that the difference in implementation of the jargon between generation N and generation N-1 is small enough not to seem like much of a change. However, the difference in implementation between the current generation and the people hired 5 to 10 cycles earlier can be, and usually is, dramatic.

    I entered the field when distributed computing and storage over local networks were being created and evangelized. Scientific computing had to be performed at universities, and anything serious had to be done by renting time on a supercomputer reached over the internet. Medium-sized businesses had to rent time on mainframes to run payroll, or hire one of the firms specializing in payroll that still exist today. Small businesses had no access to computing at all until personal computers and single-user applications came into use. Because the newer businesses were more familiar with distributed computing than with centralized computing, they scaled personal computers up to meet their new demands. That ability to scale computing power up let a company grow its computing infrastructure as needed, which was not possible with mainframes. Eventually such a company grows to the point that it needs its data and applications centralized, and it uses data centers to handle the load.

    If you step back and look solely at the physical structure (e.g. the data center, the clerical offices), it resembles the centralized computing of 50 years ago. However, if you look at the actual flow of data and computation, you'll see that it's a hybrid of centralized and distributed computing that wasn't imagined 20 years ago. It's more fractal in nature: your computing at any given moment can be centralized to your terminal, your home, your office, your department, your company, or even globally (e.g. Google, GitHub).

    I declare this to be known as BTE's law. ;)

  • Re:Deep (Score:5, Interesting)

    by Ken Hall ( 40554 ) on Sunday April 21, 2013 @10:50AM (#43509365)

    I work with mainframes for a living. Specifically, I work with Linux on IBM zSeries mainframes for a bank. The idea is to provide the software depth of Linux with the reliability of the zSeries hardware.

    We get a fair amount of resistance from the Lintel bigots, mostly those who still think of the mainframe in 1980s terms. The current generation of mainframe packs a LOT of horsepower, particularly I/O capacity, into a relatively small box. It connects to the same storage and network as the Lintel servers do, but can one of those do 256 simultaneous DMA transfers? We don't sell the platform as a solution for everything, but we've done the TCO math, and we're not that different from an Intel server farm once you factor in the external costs (a rough sketch of that kind of arithmetic follows this comment).

    I periodically give a class to the Linux admins on the mainframe in general, Linux on z, and the differences between that and Linux on Intel. If you didn't know where to look, it would take you a while to figure out you're not on Intel anymore. Most of the attendees are surprised at what the current boxes are like.

    This is not your father's mainframe.
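
    For readers wondering what "factoring in the external costs" looks like, here is a minimal sketch of that kind of TCO arithmetic; every dollar, power, and headcount figure below is a made-up placeholder for illustration, not a number from either platform.

        # Hypothetical TCO sketch; all dollar/power/headcount figures are placeholders.

        def tco(hardware, sw_per_yr, power_kw, admins, floor_per_yr, years=5,
                kwh_cost=0.12, admin_cost=120_000):
            """Total cost of ownership over `years`, including the 'external' costs
            (power, floor space, admin headcount) that per-box comparisons ignore."""
            power = power_kw * 24 * 365 * kwh_cost * years
            return (hardware + sw_per_yr * years + power
                    + admins * admin_cost * years + floor_per_yr * years)

        # One consolidated z box vs. a farm of commodity servers (placeholder numbers).
        mainframe = tco(hardware=2_000_000, sw_per_yr=300_000, power_kw=20,
                        admins=2, floor_per_yr=10_000)
        farm = tco(hardware=600_000, sw_per_yr=150_000, power_kw=120,
                   admins=6, floor_per_yr=60_000)
        print(f"mainframe: ${mainframe:,.0f}   farm: ${farm:,.0f}")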

  • Re:Deep (Score:4, Interesting)

    by ArsonSmith ( 13997 ) on Sunday April 21, 2013 @11:15AM (#43509513) Journal

    I build server farms specifically to suck data out of mainframes and process it, precisely because of the cost difference. It costs nearly 100x as much, and still takes 10x longer, to crunch, index, and search 8 PB of data on a mainframe as it does on a comparatively free Hadoop cluster. The TCO was laughably different.

  • by meta-monkey ( 321000 ) on Sunday April 21, 2013 @11:44AM (#43509693) Journal

    I think you're confusing Big Data with big, data-reliant companies.

    Banks are OLTP: they require perfect accuracy, [large number]-nines uptime, and fast response, dealing with one record at a time.

    Big Data is OLAP: it can sacrifice some speed, accuracy, and uptime in order to operate over millions and millions of records.
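
    A toy way to see the difference between the two access patterns, using SQLite from Python; the schema and data here are invented purely for illustration.

        # Toy contrast of OLTP vs. OLAP access patterns (schema and data invented).
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
        db.executemany("INSERT INTO accounts VALUES (?, ?)",
                       [(i, 100.0) for i in range(100_000)])

        # OLTP: touch one record, inside a transaction, and it must be exactly right.
        with db:
            db.execute("UPDATE accounts SET balance = balance - 25 WHERE id = ?", (42,))

        # OLAP: scan every record for an aggregate; a little staleness or delay is tolerable.
        (total,) = db.execute("SELECT SUM(balance) FROM accounts").fetchone()
        print(f"sum over all accounts: {total:,.2f}")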

  • by tarpitcod ( 822436 ) on Sunday April 21, 2013 @12:19PM (#43509921)

    Try finding out yourself. Ask the new kids some simple questions:
    What's the memory bandwidth of that x86 desktop or laptop, roughly? Special points if they break out the caches.
    How many Dhrystone MIPS (very roughly) does that CPU have?
    What's the ratio of main system memory bandwidth to MIPS?
    What's the ratio of main system memory bandwidth to the bandwidth of their I/O storage?

    They just never get exposed to this stuff. They have no frame of reference. Now ask them to compare those numbers even to a regular 286-era ISA-bus PC. I'll even give you some numbers:

    286/16: ~4K Dhrystones/sec (a couple of Dhrystone MIPS) on a good day
    Disk (40 MB IDE on ISA): ~400 KB/sec
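
    To see roughly where those ratios land, here is a quick sketch; the 286-era disk figure comes from the post above, and the rest (286 memory bandwidth and DMIPS, modern DDR4/SSD-class numbers) are assumed round values for illustration only.

        # Ballpark answers to the questions above. The 286 disk figure is from the
        # post; every other number is an assumed round value for illustration.

        def ratios(name, mem_bw_mb_s, dmips, disk_mb_s):
            print(f"{name:12s} mem BW / DMIPS ~ {mem_bw_mb_s / dmips:9.2f} MB/s per MIPS,  "
                  f"mem BW / disk BW ~ {mem_bw_mb_s / disk_mb_s:7.0f}x")

        ratios("286/16 ISA", mem_bw_mb_s=8,      dmips=2,      disk_mb_s=0.4)  # assumed mem BW and DMIPS
        ratios("modern x86", mem_bw_mb_s=25_000, dmips=50_000, disk_mb_s=500)  # assumed DDR4 / SATA SSD class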

  • by Anonymous Coward on Sunday April 21, 2013 @05:01PM (#43511673)

    "Google, for example, has pretty full-featured job control layered on top of their server farm."

    Google has never cared about errors.

    Actually, Google cares about errors so much that it has invented some nice fault-tolerance techniques. For example, in Hadoop and in Google's original equivalent (MapReduce), machines are allowed to simply die in the middle of a calculation. All of that is transparent to the high-level code running on the system; the program executes normally, perhaps with some delays, but otherwise uninterrupted.
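
    A minimal sketch of that idea follows; this is not Hadoop's or Google's actual API, just a toy showing a task dying mid-calculation and being re-run without the calling job ever noticing.

        # Toy sketch of transparent task re-execution; not the real MapReduce/Hadoop API.

        def run_with_retries(task, chunk, max_attempts=3):
            """Re-run a failed task; the calling job only ever sees a delay, never the failure."""
            for attempt in range(max_attempts):
                try:
                    return task(chunk, attempt)
                except RuntimeError:
                    continue  # pretend the scheduler re-dispatched the chunk to a healthy node
            raise RuntimeError("all replicas failed")

        def word_count(chunk, attempt):
            if attempt == 0:                 # simulate the first node dying mid-task
                raise RuntimeError("node lost")
            return sum(len(line.split()) for line in chunk)

        chunks = [["the quick brown fox"], ["jumps over", "the lazy dog"]]
        print(sum(run_with_retries(word_count, c) for c in chunks))  # prints 9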
