Cloud Google Hardware

The Eternal Mainframe 225

theodp writes "In his latest essay, Rudolf Winestock argues that the movement to replace the mainframe has re-invented the mainframe, as well as the reason why people wanted to get rid of mainframes in the first place. 'The modern server farm looks like those first computer rooms,' Winestock writes. 'Row after row of metal frames (excuse me—racks) bearing computer modules in a room that's packed with cables and extra ventilation ducts. Just like mainframes. Server farms have multiple redundant CPUs, memory, disks, and network connections. Just like mainframes. The rooms that house these server farms are typically not open even to many people in the same organization, but only to dedicated operations teams. Just like mainframes.' And with terabytes of data sitting in servers begging to be monetized by business and scrutinized by government, Winestock warns that the New Boss is worse than the Old Boss. So, what does this mean for the future of fully functional, general purpose, standalone computers? 'Offline computer use frustrates the march of progress,' says Winestock. 'If offline use becomes uncommon, then the great and the good will ask: "What are [you] hiding? Are you making kiddie porn? Laundering money? Spreading hate? Do you want the terrorists to win?"'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Deep (Score:3, Insightful)

    by roman_mir ( 125474 ) on Sunday April 21, 2013 @07:40AM (#43508903) Homepage Journal

    Wow, so deep. Computer is the Internet, Internet is the computer.

    Mainframes are specialised equipment; server farms are mostly generic computers with redundancy added. The real difference is cost: today's server farms would cost many times more if they were built from specialised mainframes. Beyond that there is no real difference; they are really there for the same purpose.

    • Re:Deep (Score:5, Informative)

      by tarpitcod ( 822436 ) on Sunday April 21, 2013 @07:54AM (#43508935)

      Right, and there are some big differences:

      Mainframe CPUs tend to have far more error detection and correction, with safeguards against errors in data movement and computation inside the CPU itself. Mainframes also tend to offer robust job control; by the time you add job control at the level mainframes provide, your network of workstations/servers starts getting complicated.
      Mainframes tend to offer decent encryption and security.

      Can you do all these things on a pile of VMs? Sure. Is it cheaper - maybe. Is it fun to manage - not particularly.

      As for the point about giving everyone access to all your stuff: let's see the author prove his point by posting all his personal details - address, age, credit card numbers, SSN, medical records, tax returns - and let's see how that works out for him.

      • Re:Deep (Score:5, Interesting)

        by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Sunday April 21, 2013 @08:23AM (#43509037)

        I agree these are all differences for a regular pile of VMs in a server room, but if you look at some of the more developed server farms, they do have a lot of the mainframe-like features, at least on the software side. Google, for example, has pretty full-featured job control layered on top of their server farm.

        • by Anonymous Coward on Sunday April 21, 2013 @10:15AM (#43509511)

          "Google, for example, has pretty full-featured job control layered on top of their server farm."

          Google has never cared about errors.

          Who gives a damn if what absolutely positively SHOULD have been the very first result is instead the fourth or the fifth result, or if it appears on page two of the results, or if it somehow magically disappears into the ether because commodity server #XJ42 in rack #43HB on aisle #521JJ in column #447F in building #QQZ1 in server farm #H61M happened to have crashed just as the query response was being assembled?

          Especially if the query involved "Justin Bieber", "Lindsay Lohan", or "Natalie Portman Hot Grits".

          IBM, on the other hand, has always cared about errors - has always, in fact, been FANATICAL about errors.

          If you send a query to an IBM mainframe, then you're expecting umpteen-sigmas of confidence that the mainframe will actually be up and running, that you'll get an actual response, and that the response, when it finally arrives, will be 100% CORRECT.

          Especially when the response is something along the lines of "DANGER: CHILD KNOWN TO BE ALLERGIC TO AMOXICILLIN. ALLERGIC RESPONSE INCLUDES ANAPHYLACTIC SHOCK. PRESCRIPTION REQUEST THEREFORE INVALID AND REFUSED."

          • As far as I know - Google isn't running prescription services on their search engine servers. Do you know something that I don't?

            Are you suggesting that all medical services are run on IBM servers? I find that a bit hard to believe. Those hospitals and doctor's offices that I've been in seem to run Windows on Dell machines, almost exclusively. I've seen a couple of computers with something that looked like it might be Solaris, but I didn't actually have access to the machines, and couldn't investigate.

            I

          • by santiago ( 42242 ) on Sunday April 21, 2013 @02:55PM (#43511281)

            You have no idea what you're talking about. Dropping the occasional search result is fine, but what about failing to record billing for the ad system, dropping mails you were supposed to receive in your Gmail account, or failing to save the doc you were editing? Google does a lot more than serve search results, and most of that needs to work every single time.

            The fact of the matter is that even the most expensive hardware eventually fails, so your software needs to be able to deal with that and fall back to working units. Once you've written your software to handle hardware failures, you can run on really cheap hardware. And it turns out that buying a lot of really cheap computers, some of which are broken at any given time, gets you way more computing power than buying a few really robust machines.
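            A minimal sketch in Python of that fall-back-to-working-units pattern; the replica names and the 20% failure rate are made up for illustration:

            import random

            # Hypothetical replica set: any single machine may be down, so the
            # client retries the request against the remaining units instead of
            # depending on any one box staying up.
            class ReplicaDown(Exception):
                pass

            def query_replica(replica, request):
                # Stand-in for a real RPC call; a real client would also use timeouts.
                if random.random() < 0.2:  # pretend ~20% of machines are broken
                    raise ReplicaDown(replica)
                return f"{replica} handled {request!r}"

            def reliable_query(replicas, request):
                last_error = None
                for replica in random.sample(replicas, len(replicas)):
                    try:
                        return query_replica(replica, request)
                    except ReplicaDown as err:
                        last_error = err  # fall back to the next working unit
                raise RuntimeError("all replicas failed") from last_error

            print(reliable_query(["srv-1", "srv-2", "srv-3"], "save doc"))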

      • Re:Deep (Score:5, Interesting)

        by Ken Hall ( 40554 ) on Sunday April 21, 2013 @09:50AM (#43509365)

        I work with mainframes for a living. Specifically, I work with Linux on IBM zSeries mainframes for a bank. The idea is to provide the software depth of Linux with the reliability of the zSeries hardware.

        We get a fair amount of resistance from the Lintel bigots, mostly those who still think of the mainframe in 1980's terms. The current generation of mainframe packs a LOT of horsepower, particularly I/O capacity, in a relatively small box. It connects to the same storage and network as the Lintel servers do, but can one of those do 256 simultaneous DMA transfers? We don't sell the platform as a solution for everything, but we've done the TCO math and we're not that different from an Intel server farm once you factor in the external costs.

        I periodically give a class to the Linux admins on the mainframe in general, Linux on z, and the differences between that and Linux on Intel. If you didn't know where to look, it would take you a while to figure out you're not on Intel anymore. Most of the attendees are surprised at what the current boxes are like.

        This is not your father's mainframe.

        • Re:Deep (Score:4, Interesting)

          by ArsonSmith ( 13997 ) on Sunday April 21, 2013 @10:15AM (#43509513) Journal

          I build server farms specifically to suck data out of mainframes and process it, precisely because of the cost difference. It is nearly 100x the cost, and it still takes 10x longer, to crunch, index, and search 8PB of data on a mainframe than it does on a comparatively free Hadoop cluster. The TCO was laughably different.

          • You haven't tried the IBM kool-aid yet. Those people whose jobs currently rely on mainframe expertise are very happy with them. They do have better error-checking, but everything else is at least an order of magnitude out of whack with commodity hardware price/performance, and in many cases several orders. You can reduce some of the costs on their zSeries by buying specialised processors for DB2, Java, and Linux (~100K a pop) so you don't have to pay for MIPS usage, but the costs are still astronomical for t

            • by Lennie ( 16154 )

              AWS compute at least needs to run customer VMs; don't you think those customers would like to be able to run their existing x86 (er, amd64) applications?

              Google or maybe even Facebook would be a much better example: they have their own applications, with source code, which they can compile for the platform of their choice.

              People currently seem more interested in ARM processors than in mainframes.

            • by Dcnjoe60 ( 682885 ) on Sunday April 21, 2013 @11:54AM (#43510205)

              You haven't tried the IBM kool-aid yet. Those people whose jobs currently rely on mainframe expertise are very happy with them. They do have better error-checking, but everything else is at least an order of magnitude out of whack with commodity hardware price/performance, and in many cases several orders. You can reduce some of the costs on their zSeries by buying specialised processors for DB2, Java, and Linux (~100K a pop) so you don't have to pay for MIPS usage, but the costs are still astronomical for the performance. If it was cost effective, don't you think Amazon would be running its cloud services on them?

              The last TCO I was involved with actually showed that the mainframe was the more cost effective approach for the use case at hand.

              As for Amazon, that is hard to say. If, when they first started, they had known how successful they were going to be and how quickly they would grow, maybe they would have gone with a mainframe solution. That's the nice thing about TCO analysis: it eliminates, or should eliminate, any platform bias the decision makers have. Then again, it also depends on really knowing what the future growth patterns and expected use cases are, or it is just more GIGO.

          • Undoubtedly IBM is working to "solve" that problem: by amending their contracts so they can charge for every byte transmitted into and out of their box.

        • by CAIMLAS ( 41445 )

          It doesn't matter if the Lintel/Wintel/etc. machines can't do 256 simultaneous DMA transfers.

          Chances are your vaunted NetApp or whatever Enterprise SAN storage can't do half that, either. :)

          Cool feature and all, and yes, the hardware is impressive as hell. But that's not the problem, this is:

          * Support cost (equipment, staffing, research, etc.)
          * Vendor dependence
          * Overall equipment cost

          Run the numbers however you like to justify it. Nobody knows mainframes anymore, not anyone with non-legacy-support resumes.

    • Comment removed based on user account deletion
      • Re:Deep (Score:4, Insightful)

        by swalve ( 1980968 ) on Sunday April 21, 2013 @08:58AM (#43509153)
        That stuff is also in hardware, which is only beginning to happen in the commodity PC world.

        For a certain type of workload, at a certain level of necessary uptime, mainframes start becoming cost effective. Fun things like IBM installing as many CPUs as you want but only charging for them when you use them; this can be very cost effective for businesses with seasonal volume shifts. At some point, paying IBM $1000 an hour for their support is cheaper than paying 20 creeps with greasy hair to change hard drives, stack servers into a rack, and fuck up the rollout of new VMs. It's kind of like trucks versus trains: each has its place, but neither is very good at emulating the upsides of the other.
      • You can get the same precision and fault tolerance from commodity hardware by running multiple jobs in parallel, but it's rarely required.
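        A toy illustration in Python of that parallel-jobs idea, assuming the job is deterministic and a bad machine shows up as a wrong or missing answer; the 3x run count is exactly the cost that makes it "rarely required":

        from collections import Counter

        # Run the same job several times (ideally on different machines) and
        # take the majority answer, so one flaky unit can't silently corrupt
        # the result.
        def run_with_redundancy(job, runs=3):
            tally = Counter(job() for _ in range(runs))
            answer, votes = tally.most_common(1)[0]
            if votes <= runs // 2:
                raise RuntimeError("no majority; rerun or escalate")
            return answer

        print(run_with_redundancy(lambda: sum(range(1000000))))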

        • It's rarely required, until it is.

          • by DarkOx ( 621550 )

            That, and it's rarely done because frankly the typical LOB software developer does not know how to implement such things. Let's face it: even the cheapest hardware is so good that most of the time they don't need to. It's also true that they should not have to. By the time someone is writing x = x * y; in Java or even C, then beyond being sensitive to the data type (will it overflow? is a float going to have its precision truncated?), they ought to be able to depend on that working as expected.
            The right place to deal

            • What's worse is that they don't understand floating point.

              They don't understand that in floating point you can totally have a situation where:

              (a + b) + c != (b + c) + a
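              For instance, in Python, with the classic 0.1/0.2/0.3 case:

              a, b, c = 0.1, 0.2, 0.3

              # Floating-point addition is not associative: the same three
              # addends, summed in a different order, round differently.
              print(a + b + c)               # 0.6000000000000001
              print(b + c + a)               # 0.6
              print(a + b + c == b + c + a)  # False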

        • You can get the same precision and fault tolerance from commodity hardware by running multiple jobs in parallel, but it's rarely required.

          It also rarely makes sense. If the parallel instances are running the same software, they will likely both make the same error, since 99.9% of reliability issues are in the software, not the hardware. If you spend a million dollars on more robust hardware and a million dollars on extra software testing (unit, integration, and especially usability), the latter is orders of magnitude more likely to prevent a problem.

          • You can get the same precision and fault tolerance from commodity hardware by running multiple jobs in parallel, but it's rarely required.

            It also rarely makes sense. If the parallel instances are running the same software, they will likely both make the same error, since 99.9% of reliability issues are in the software, not the hardware. If you spend a million dollars on more robust hardware and a million dollars on extra software testing (unit, integration, and especially usability), the latter is orders of magnitude more likely to prevent a problem.

            That would only be true if you never changed the software once you had spent your million dollars and tested it. How likely is that? In reality, there is and always will be bad code out there. So you can spend extra dollars on extra testing every time you write or change code, or you can test less and rely on more costly but fault-tolerant hardware.

            Face it, there is a reason that most financial institutions still use mainframes. There is also no doubt that their boards of directors want to maximize the

    • From a technical perspective, big difference. From a business perspective, not so much. The business side doesn't much care how the technology is built. What matters is that mainframes and server farms are a black box in a company-controlled office, built with company-controlled hardware, where vast amounts of data are stored and processed. Centralisation and specialisation.

      • Don't forget the low-cost dumb terminals – I'm sorry: "thin clients" – which are incapable of doing anything at all independently of the centrally-administered silicon. The computing environment I work in today is architecturally very similar to the one I started working in back in the mid-1980s.
        • Ever check the cost of those 'low-cost' IBM terminals?

        • Don't forget the low-cost dumb terminals – I'm sorry: "thin clients" – which are incapable of doing anything at all independently of the centrally-administered silicon. The computing environment I work in today is architecturally very similar to the one I started working in back in the mid-1980s.

          How true is that! Today's computers have so much computational power and for the most part they are being used as dumb terminals. What a waste. The PC was supposed to free us from the confines of a data center that had control of our data. Pre-internet, that looked like it was happening. Now, though, instead of advancing, we've regressed and it is 1984 all over again, except today it is a browser instead of TN3270.

    • Mainframes aren't so "specialized". Maybe you are confusing mainframes with supercomputers, which tend to be much more specialized and focused on scientific and research usage.

      I worked on IBM big iron back in the day, and a "mainframe" can run Linux partitions as well as other mainframe OSes. Unix boxes aren't so generic either. A Unix box running Linux is different from a Unix box running HP-UX or Solaris and requires somewhat different sysadmin skills. There are other issues with shared library linkin

      • Is running Linux or Solaris really all that different?

        My resume has a long list of Unix-type operating systems on it. With all of them, I see the common features, each with its eccentricities. The same can be said of Linux distributions alone.

        Set an IP on an interface. Some want /etc/sysconfig/network-scripts/ifcfg-* . Some want an entry in /etc/rc.d/rc.inet1.conf. Some want it written directly to /etc/rc.d/rc.inet1.

        I was writing a script to get information from a couple
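        Speaking of scripts, here is a sketch in Python of the dispatch such a script ends up doing, probing for the conventions named above before touching anything (the paths are the ones from this comment; the labels are made up):

        import glob
        import os

        def network_config_style():
            # Identify which convention this box follows; actually writing
            # the address in the matching format is left out here.
            if glob.glob("/etc/sysconfig/network-scripts/ifcfg-*"):
                return "ifcfg files"
            if os.path.exists("/etc/rc.d/rc.inet1.conf"):
                return "rc.inet1.conf entries"
            if os.path.exists("/etc/rc.d/rc.inet1"):
                return "edit rc.inet1 directly"
            return "unknown"

        print(network_config_style())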

    • by mikael ( 484 )

      That's the difference - the traditional mainframe was a one-vendor product - racks, disk drives, CPUs, network boards, cables, terminals, everything available from the one supplier at "special" corporate rates, provided that you gave them the exclusive maintenance contract. Want printed system manuals? We'll charge you for that. Want more than eight user accounts? That costs extra too. Need a compiler for OS development work? That'll cost extra. Want the pre-compiled development APIs to write applications

      • by tepples ( 727027 )

        Need a compiler for OS development work? That'll cost extra. Want the pre-compiled development API's to write applications? That'll cost more too.

        That's little different from Apple, who charges $99 to $299 per year for the right to run self-compiled applications on your own iDevice.

  • by Anonymous Coward on Sunday April 21, 2013 @07:42AM (#43508911)

    Just like the mainframe. [wikipedia.org]

  • Privacy (Score:5, Insightful)

    by MLBs ( 2637825 ) on Sunday April 21, 2013 @07:44AM (#43508913)
    It's the usual argument. If you have something to hide, you're probably a bad person.
    That "may" be true if the authorities are not abusing their power, or trying to gain more power than the people want them to have.
    As soon as you have even a potentially oppressive regime, privacy becomes essential.
    • That "may" be true if the authorities are not abusing their power

      Oh, they are. It's just more subtle, I think, than in past attempts. The rise of "social media" like Facebook has indoctrinated an entire generation with the idea that sharing is good and healthy while hiding (meaning: privacy) is bad and unhealthy; at best you get laughed at for wanting privacy, at worst you get dirty looks and are attacked or accused of outrageous things. The sad part is they don't realize what it is they've given up until it's too late to do anything about it. I'd have a hard time believing that there are not people

    • by Voline ( 207517 )

      I think you're misreading the article. Winestock is not making the "if you have something to hide ..." argument, he's anticipating it. His argument is that the computer industry, and perhaps computing as a technical endeavor, tends in the direction of centralization of computing power and grunt work, which then leads to centralization of data. Both governments and business – even cool, supposedly "revolutionary" businesses – like it this way. So, don't look to the high tech companies [digitaltrends.com] for help p

  • by mbone ( 558574 ) on Sunday April 21, 2013 @07:58AM (#43508939)

    He is wrong, on pretty much every level, even the visual.

  • by h2oliu ( 38090 ) on Sunday April 21, 2013 @08:06AM (#43508975)

    One of the points I found the most insightful is that the geeks don't like to take the time to make things work anymore. I remember a colleague saying that there was no better way to kill a hobby than to get it as a job.

    The days of tweaking the OS and hardware as a common practice among the majority of geeks are gone. The field is too broad now. You have to pick which stack, and where on it, you want to hack.

    • by tarpitcod ( 822436 ) on Sunday April 21, 2013 @08:39AM (#43509085)

      Back in the earlier days of micros it was loads of fun. BYTE was a great read. People wrote their own stuff on their own hardware. There were really fascinating choices in CPUs. Initially there were people using 2650s, 8080s, 6502s, 6800s, LSI-11s, 1802s, 9900s.

      I can't remember the last time someone actually asked something outrageous like 'What architecture would be ideal?' Nowadays it's 'What software layer (implicitly running on x86 Linux boxes) should we use?'

      The performance numbers people talk about are terrible too. Kids who just graduated think 100K interrupts per second is 'good!' on a multi-GHz multicore processor. They just have no context and don't understand how absolutely crappy that is, and that even on an 8031 running at 11 MHz with a /12 clock we could pull off > 20K interrupts per second in an ISR written in an HLL!
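      Back-of-envelope in Python, using the 8031 figures above and an assumed 3 GHz for the modern core:

      # 8031: 11 MHz crystal, /12 machine clock, 20K interrupts/sec (from the comment).
      cycles_8031 = 11e6 / 12              # ~917K machine cycles per second
      print("8031: ~%.0f cycles per interrupt" % (cycles_8031 / 20e3))

      # Modern core: 3 GHz is an assumption; 100K interrupts/sec as claimed.
      print("modern core: ~%.0f cycles per interrupt" % (3e9 / 100e3))

      Roughly 46 machine cycles per interrupt on the 8031 versus about 30,000 on the modern core: that's the missing context.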

      • That sounds too simple; there must be a reason for it other than that the new youngsters suck at programming compared to the older generation.

        • by tarpitcod ( 822436 ) on Sunday April 21, 2013 @11:19AM (#43509921)

          Try finding out yourself. Ask the new kids some simple questions:

          What's the memory bandwidth of that x86 desktop or laptop, roughly? Special points if they break out cache.
          How many dhrystone MIPS (very roughly) does that uP have?
          What's the ratio of the main system memory bandwidth to MIPS?
          What's the ratio of the main system memory bandwidth to the I/O storage bandwidth they have?

          They just never get exposed to this stuff. They just have no reference. Now ask them to compare those even to a regular 286-era ISA-bus PC. I'll even give you some numbers, with a rough comparison sketched below:

          286/16: ~4K dhrystones/sec on a good day
          Disk (40 MB IDE on ISA): ~400K/sec
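          In Python, the same ratios for both machines; the 286 numbers are the ones above plus an assumed ~16 MB/s memory bandwidth, and the modern-desktop figures (dual-channel DDR3-ish memory, a SATA SSD, a fast multicore) are rough assumptions, not measurements:

          machines = {
              "286/16 ISA PC":  dict(mem_bw=16e6, dhry_per_s=4e3,  disk_bw=400e3),
              "modern desktop": dict(mem_bw=21e9, dhry_per_s=2e11, disk_bw=500e6),
          }

          for name, m in machines.items():
              print(name)
              print("  memory bytes/sec per dhrystone/sec: %.3f"
                    % (m["mem_bw"] / m["dhry_per_s"]))
              print("  memory bandwidth vs. disk bandwidth: %.0fx"
                    % (m["mem_bw"] / m["disk_bw"]))

          On those assumptions, CPU throughput has outrun memory bandwidth by several orders of magnitude, while the memory-to-disk ratio has barely moved; that is the kind of reference the kids never get.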

          • And their answer to every one of your questions is: WAY more than I even need! And this is a USED box!

          • As one of those damn kids, let me try:
            Let's see... it's a dual-channel DDR3 memory controller, so that's 128 bits (16 bytes) per transfer. DDR, so two transfers per clock of the 666MHz bus, for 1333 MT/s; that works out to 21,328 MB/s (I'll call it 21GB/s for short). Cache I'd have to look up, but I think L1 and L2 caches are synchronous to the CPU clock, while L3 runs at half-clock. L1 reads in 64-byte cache lines; I'm not sure about the others.

            I personally have never needed to use dhrystone - I'm one of those

      • by mikael ( 484 ) on Sunday April 21, 2013 @10:39AM (#43509661)

        And if you know where to look, you can find the whole collection of magazines scanned and available online (http://atariage.com/forums/topic/167235-byte-magazine/)
        The best issues were when they had geek cartoons or photographs of real hardware on the front cover. The real change came when everything went all pastel-shaded with the little bod characters in suits. I guess that coincided with the shift from hardware projects to software API programming on personal computers.

        • It's great to see these things still around. They are really fun to read. I'm actually a bit of an Atari 8-bit fan, with some 8-bits I still use occasionally for fun.

    • The days of tweaking the OS and hardware as a common practice among the majority of geeks are gone. The field is too broad now. You have to pick which stack, and where on it, you want to hack.

      I tried to git clone the android-x86 repo per the instructions and it just never completed and kept dying, so I wound up with a 21 GB .git directory and nothing else. The people who have the bandwidth are buying new hardware and expect it to just work. The people with the old hardware don't have the bandwidth. Back "in the day" your whole OS and all the sources would fit on a stack of floppies or on one CD with room left over, and you could reasonably download a new OS via a POTS modem. Today, I literally cann

    • by CAIMLAS ( 41445 )

      The reasons people don't 'hack' their stuff anymore:

      * They're working 60+ hour weeks and don't have the time
      * The people who used to are now adults, with responsibilities outside of work, and don't have the time
      * Kids these days aren't really all that interested, unless we're talking about mobile handsets (aka smartphones), which DO get 'hacked' a lot.
      * There's usually no point in making small scale changes. Shit is fast enough now; you don't see a 20% increase in performance by 'tweaking' th

  • by i ( 8254 ) on Sunday April 21, 2013 @08:12AM (#43508991)

    ..that have very big amounts of data, complex data structures and can't afford any errors (especially data corruption) caused by hardware limitations.

    Banks is an example.

    • Big data is more readily done with racks of commodity hardware. You get orders of magnitude better performance for the money. Do you see any of the big web companies moving to mainframes? If there were cost or performance improvements in it, they'd have done it in a second.

    • by fermion ( 181285 )
      I have seen large complex data sets on racks of cheap microcomputers in places where I work. We see this at Google, for example. What characterizes these data sets is that they are easily replicated, or there is little liability if they are lost. Think about data loss at Google, and then think about a bank misplacing a deposit. Do we think that Google keeps many of its algorithms secret for no reason? No, they do it so they are not held accountable.

      For servers facing the internet, load balancers, like tho

    • And apparently are is quite dead... :/

  • We all need a good look at it. Does it look ridiculous to everybody? Good. Now let's move on to things that might actually happen.

    Much of the computing power will be offloaded to server farms, and most people will use lightweight, low-power portable devices? Yeah, probably.
    Server farms will get bigger and more powerful? Definitely.
    That model will fit for every business and organization and individual user? No way. Won't happen.

    Please keep in mind that my 3 year old Android phone is more powerful than any P

  • by div_2n ( 525075 ) on Sunday April 21, 2013 @08:17AM (#43509005)

    I suppose if you stand back about 3 miles, never bother to understand the underlying architecture and how it scales, and ignore the flexibility of server farms as opposed to the very confining box (with very minor flexibility) that mainframes put you in, then yeah -- they're exactly the same.

    It's easy to draw parallels between general functionality, but you have to reduce it to "a series of tubes" type descriptions to get there.

    • The world's most important business has always been done on mainframes; most of your money is information in a network of mainframes.

      • It's purely because that's what was available when the systems were originally built, and it's still hugely expensive to replace those systems. Many banks have converted and are enjoying cost savings, but they needed to bite the bullet and move off difficult-to-maintain COBOL systems. Besides the cost, banks are also averse to risk, and change causes risk.

        • by cellocgw ( 617879 ) <cellocgw&gmail,com> on Sunday April 21, 2013 @09:24AM (#43509251) Journal

          Besides the cost, banks are also averse to risk, and change causes risk.

          Wait a minute: did you somehow sleep through 2008? Banks love risk, so long as it's someone else's money they're churning.

        • No, you have some misconceptions. There is no such need, and no cost savings in moving away from mainframes. You assume a mainframe must be running COBOL.

          Mainframes run modern software. They run it extremely cost-effectively for the throughput they deliver, more so than any other platform. They run it with extreme reliability and uptime. They run modern DBMSes, they run enterprise Java and all other modern languages, they can and do run Linux and Linux business apps, and they can run x86 software on x86 blades.

    • by Anonymous Coward

      The general thinking in comparing the two is that both systems are the ones running the show, storing the data, and being accessed by dumb clients that only serve as terminals.

      Obviously server farms and mainframes are very different from a back-end technology standpoint, but from the viewpoint of the user they are identical in every single way. You log in with your user-specific credentials, you do your work using the server's processing power, and you save your work on the server's storage medium. Your cli

    • by Lennie ( 16154 )

      While I agree with you on many points, it is possible server rooms are going to look very different in the coming years:

      http://www.datacenterknowledge.com/archives/2013/01/22/silicon-photonics-the-data-center-at-light-speed/ [datacenterknowledge.com]
      http://www.wired.com/gadgetlab/2010/07/silicon-photonics-50-gbps/ [wired.com]
      http://www.opencompute.org/ocp-summit-iv-videos/ [opencompute.org]

    • I didn't get the same meaning as you did. My reading is that mainframes and server farms are the same in that they centralize information, giving corporations access to and control of much of your personal data. That's something we had begun to move away from with the rise of the personal computer, but the move to the cloud is heading back in the other direction. I don't think that's going to stop, because the cloud and server farms provide the user some great benefits, but it's worthwhile to keep i
  • No because (Score:5, Insightful)

    by Anonymous Coward on Sunday April 21, 2013 @08:24AM (#43509039)

    Are you making kiddie porn? Laundering money? Spreading hate? Do you want the terrorists to win?

    Because I don't want every goddamn marketer out there trying to sell me their shit. I don't want to have to deal with some horseshit like this [forbes.com] because businesses feel entitled to stick their noses into my business.

    No, you are NOT offering me "convenience" - you are prying.

    As it is, I CAN create a dossier that would make an East German Stasi agent cream his pants just by hitting the credit bureaus, Google, ChoicePoint, ISPs, cell phone companies, and every other business entity out there that has this need to collect consumer data.

    Something to hide?

    Well, just ask the atheist, gay or lesbian, peace protestor or Muslim who has their identity known what happens to them.

    The uncle of the Marathon bombers who had his face plastered all over the place is headed for some serious shit. You just know that folks are going to vandalize his house, harass him, and give him a lot of shit just because he's related to those kids and a Muslim.

    People are hateful, ignorant, cruel, shallow and just stupid - until proven otherwise. Therefore, it is imperative to keep one's secrets.

  • Giving up the dream (Score:3, Interesting)

    by Anonymous Coward on Sunday April 21, 2013 @08:35AM (#43509065)

    There was a time when we expected computers to become so easy that everyone could use them. We've given up that dream. Now it's all "managed" again. There are admins and users again, and the admins (or their bosses) decide what the users can do and how. Computing is no longer done with a device you own but a service that someone else provides to you. Yes, you still pay for a device, but that's merely an advanced terminal.

    I blame the users. If they bothered to learn even a little about how things work, they wouldn't give up their freedom so easily. The complacency is staggering. Even people whose job depends on being able to efficiently work with computers often perform repetitive tasks manually instead of learning how to use more of the program they're working with. Of course, with users like that, who refuse to learn how to use what capabilities are already at their disposal, there's a market for the simplest automation performed as a service.

    • I blame the users. If they bothered to learn even a little about how things work, they wouldn't give up their freedom so easily. The complacency is staggering. Even people whose job depends on being able to efficiently work with computers often perform repetitive tasks manually instead of learning how to use more of the program they're working with. Of course, with users like that, who refuse to learn how to use what capabilities are already at their disposal, there's a market for the simplest automation performed as a service.

      OK, so the Eternal Mainframe meets the Eternal September?

    • by CAIMLAS ( 41445 )

      We've given up on it because that dream was unrealistic. It wasn't even so much a dream as it was a marketing campaign from Apple and Microsoft, and it was long before the concept of a global computer network accessible from every device was even a glimmer in the conceivers' eyes.

      You also seem to be missing the point that pretty much everyone has a smartphone and/or a computer these days, and that they use them to do things which were wildly impossible when that dream was commonplace. That dream

  • by bryan1945 ( 301828 ) on Sunday April 21, 2013 @09:15AM (#43509215) Journal

    Not networked, networked, not, networked, on and on. Each cycle begets a new cycle. Now it's just called "the cloud."

    • Nonsense, mainframes have been networked for decades. They can do "cloud computing". They've never gone away; they run all modern languages and DBMSes, and can even run Linux. Now they even have expansion chassis that can take x86 blades for software that can't run on Z.

    • Well, there's a reason for that. MS has this weird idea of making the other guy fight on MS's turf... no matter how expensive it is.

      Case in point: Netscape got its ass handed to it when MS took it on. Why? Because MS owned the last mile and could afford to give away a browser for free. That hurt Netscape. Then Netscape started losing control on the server side of things. Boom. How did MS win? By having Netscape fight on MS's turf: Windows.

      Case in point: MS wants to replace Google with Bing, Firefox / Chro

      • Uh, no. We, the coders and consultants, are making bank implementing the stupid decisions passed down by management.

        "What's that boss? You wanna cloudify our web-a-spaces? And right after we ported all our offerings to 3 different mobile platforms. So you saw this cloud thing article in the inflight magazine, huh? No problem boss, just keep them pay checks coming..."

  • No shit. Every time I hear someone say he plans on building a private cloud on his computer, I ask myself why he doesn't just buy a mainframe.

    I mean, not every server farm or server room can be compared to a mainframe. But these days, when companies have VMware clusters and whatever clouds, it is impossible not to draw the comparison, since functionally (and sometimes structurally) they are pretty much like mainframes.

    • No, not functionally like mainframes at all. They are a network-clustered bunch of x86 PCs with shared storage, that's all. Each blade is bottlenecked by its few SAN links; each can only ever give a VM at most its full CPU core count and nothing more (you can't run a single VM across multiple blades for a performance improvement); and if a blade fails without warning, HA will take a while to spin the VM up again on another blade.

    • Take someone's description of their cloud computing service and compare it to the concept of the computing utility that the MULTICS people talked about: it's pretty damn similar.

      Some idiots will claim it's not - that they can get to their data! Which is total garbage, because while technically they might be able to get to their data, it sucks to empty a swimming pool through a straw, and that's what the bandwidth of an internet connection is like when there's a tonne of data in the cloud.

      So then they will say 'ru

  • by Bill_the_Engineer ( 772575 ) on Sunday April 21, 2013 @09:37AM (#43509307)

    Someone in the industry realizes that computing is really iterative and what's old will eventually become new again.

    I believe the origin of these periodic realizations is as follows:
    (I intentionally used "jargon" instead of "technique", since the need to create a new term doesn't seem proportional to the actual change in implementation)

    1. A college fresh-out gets hired at an I.T. farm, armed with a new set of computing jargon that impresses human resources.
    2. He applies his version of how things should work to the current workplace, and things progress well.
    3. Over the next few years the department grows, and new hires are brought in to help meet demand.
    4. The new hires start preaching their version of computing jargon, created by academia to publish papers.
    5. The once-fresh-out comes to the realization that the new computing jargon consists of near-synonyms for the previous generation's jargon.
    6. The new hire proceeds to step #1, and the circle of I.T. begins anew.

    The neat thing about this iterative process is that the differences in implementation of the jargon between generation N and N-1 are small enough not to seem like much. However, the difference in implementation between the current generation and the people hired 5 to 10 cycles prior can be, and usually is, dramatic.

    I entered the field when distributed computing and storage with localized networks were being created and evangelized. Scientific computing had to be performed at universities, and anything serious had to be done by renting time on a supercomputer connected via the internet. Medium-sized businesses had to rent time on mainframes to perform payroll, or hire firms specializing in payroll, which still exist today. Small businesses had no access to computing until personal computers and single-user applications came into use. Because the newer businesses were more familiar with distributed computing than with centralized computing, they scaled personal computers up to meet the new demands. This ability to scale computing power up allows a company to grow its computing infrastructure as needed. This was not possible with mainframes. Eventually the company grows to the point that it needs to have its data and applications centralized and uses data centers to handle the load.

    If you step back and look solely at the physical structure (e.g. data center, clerical offices), it resembles the centralized computing of 50 years ago. However, if you look at the actual data and computing flow, you'll see that it's a hybrid of central and distributed computing that was not imagined in the past 20 years. It's more fractal in nature. Your computing at any given moment can be centralized to your terminal, your home, your office, your department, your company, or even be global (e.g. Google, Github).

    I declare this to be known as BTE's law. ;)

    • This ability to scale computing power [...] was not possible with mainframes

      Are you having a laugh?

      • In the context of the money required: small businesses normally couldn't afford one mainframe, much less more than one.
      • You are correct that I should be more explicit about the relationship between scalability and expense. Smaller computing platforms can be scaled up in smaller steps requiring less money up front than mainframes.
        • I may have to be more explicit by stating that the computing unit size of small computing platforms allows scalability in power from the end-user up. This scalability is accessible to more people and therefore encourages a change in the field of computing.

          The unit size of large computing platforms (i.e. mainframes) does not encourage such scalability. Multiple mainframes in a data center may increase data processing power at that central location, but that doesn't encourage a shift of computing power

    • IT needs some kind of apprenticeship system, or at least more tech schools where you learn from people who have done real work rather than from people working on their academic papers, and where there is more hands-on learning as well.

      • I meant this progression to be a "good thing". One problem with apprenticeships is that you reinforce the established way of doing things. Bringing people in from the outside, especially those who learned from others who read or write academic papers, allows new concepts to be integrated with established practice.
        • But CS is not IT, and the people who write academic papers are the type of people in IT who have been in academia for most of their lives and have little to no hands-on IT work.

          And we don't need more people in IT loaded with academic smarts but little IT knowledge and little hands-on experience.

          • I'm not so sure that is entirely true. I do not believe data centers (at least the good ones) are completely devoid of CS people. IT people who maintain an infrastructure do not work in a vacuum. They are influenced either directly by CS people within their organization, or indirectly by the computing appliances or applications that they maintain.

            I'm afraid a lot of low-level techs become scapegoats for high-level techs who should have known better. I thought we already have tech schools that train low l

  • So, what does this mean for the future of fully functional, general purpose, standalone computers? 'Offline computer use frustrates the march of progress,' says Winestock. 'If offline use becomes uncommon, then the great and the good will ask: "What are [you] hiding? Are you making kiddie porn? Laundering money? Spreading hate? Do you want the terrorists to win?"'

    Almost all of his examples are a complete non sequitur. How does one launder money, spread hate, help the terrorists, etc. with a computer that i

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Have you read TFA? He's not advocating going back to the mainframe-terminal paradigm. He's warning against what is commonly perceived as the way things will inevitably go. If you're seen as "out of the norm", you're ostracized. This is reality. If you don't do what everyone does - what is recognized as "good" because that's what everybody does - you're "stranger danger". And in this day and age, being "stranger danger" may be a death sentence.

  • HTML5 interfaces will always suck just the way JavaScript HTML 4 interfaces suck - you can't take a server hit every time you want to react to a mouse movement or process a keystroke.

    For a large number of apps this actually doesn't matter, but for people who really do creative work with their computer, the UI and a very large amount of the processing of local data will have to take place on the local machine.

    I suppose there are entities out there actively plotting the end of the personal general-purpose PC, but to s

  • 'Offline computer use frustrates the march of progress,' says Winestock. 'If offline use becomes uncommon, then the great and the good will ask: "What are [you] hiding? Are you making kiddie porn? Laundering money? Spreading hate? Do you want the terrorists to win?"'"

    Really? I think the Tea Party has found their next candidate for president. Now if only he had a personal life like the "Newt."
  • by Capt.Albatross ( 1301561 ) on Sunday April 21, 2013 @10:03AM (#43509447)

    Mr. Winestock's parallels between server farms and mainframes are reasonable, if unoriginal, and the same can be said for his concerns over privacy and social control. His attempt to claim the former as the causative agent for the latter, however, goes wrong right from the start: 'Mini/micro-computers were supposed to kill the mainframe.'

    Not so. They came about firstly because technological advances made them possible, and also because some smart people realized that they would allow us to do things that, in practice, we could not do before. The pioneers of these developments were not interested in reproducing, much less replacing, mainframe computing.

    Turing showed us that the form of our hardware doesn't dictate what we can do with it. To understand the arc of privacy erosion and social control, we need to examine social history and human nature, not the artifacts of technological advance.

  • Download caps, lag, and 3G/4G/LTE roaming costs will make it very hard to go all back-end, with your system being just a dumb terminal.

    And roaming costs can hit $10-$20+ a meg in Canada (higher in other places); a decent remote desktop at 1024x768 or better can burn through data fast.
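    Back-of-envelope in Python, with assumed figures (300 kbit/s average for a compressed remote-desktop session, $10/MB roaming):

    kbps = 300.0            # assumed average session bitrate, compressed
    cost_per_mb = 10.0      # low end of the roaming range quoted above

    mb_per_minute = kbps * 60 / 8 / 1024
    print("~%.1f MB/min, ~$%.2f per minute of roaming"
          % (mb_per_minute, mb_per_minute * cost_per_mb))

    On those assumptions that's about 2.2 MB and $22 for every minute of screen-sharing.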

  • by rsilvergun ( 571051 ) on Sunday April 21, 2013 @11:03AM (#43509823)
    being abused by gov't. I don't think it really matters. Online is still just online, and I've said before and will say again that the Occupy Wall Street Movement showed that in the real world when the gov't wants something to go away it does.

    Basically we don't really have the freedom he's saying we'll lose. Real freedom is economic freedom. You're not free as long as somebody controls your access to food, shelter and health care. Until then you'll do exactly what they say and so will everybody else.

    If you want freedom stop bothering with all these surveillance scares and start asking what it takes to really be free. Ask yourself if you can ever be free in a world where 6 people [forbes.com] have more money than 100 million others combined?
  • The blades in the racks will get smaller and smaller, and cooler and cooler until... a bunch of hippie hackers decide to build a server farm on their kitchen table. The "dumb terminal" will sit right there on the kitchen table too. It'll be the same "cloud" architecture, but small and private. The PC (Personal Cloud) revolution will begin.
