IBM Supercomputing Hardware

Fifty Years Ago IBM 'Bet the Company' On the 360 Series Mainframe 169

Hugh Pickens DOT Com (2995471) writes "Those of us of a certain age remember well the breakthrough that the IBM 360 series mainframes represented when the line was unveiled fifty years ago, on 7 April 1964. Now Mark Ward reports at BBC that the first System 360 mainframes marked a break with all general-purpose computers that came before, because customers could upgrade to faster processors and still keep using the same code and peripherals from earlier models. "Before System 360 arrived, businesses bought a computer, wrote programs for it and then when it got too old or slow they threw it away and started again from scratch," says Barry Heptonstall. IBM bet the company when it developed the 360 series. At the time IBM had a huge array of conflicting and incompatible lines of computers, as did the industry in general, which was still largely a custom or small-scale design and production business. For a company of IBM's size the problem was becoming obvious: upgrading from one of the smaller IBM series to a larger one took so much effort that you might as well go for a competing product from the "BUNCH" (Burroughs, Univac, NCR, CDC and Honeywell). Fred Brooks managed the development of IBM's System/360 family of computers and the OS/360 software support package, and based his software classic "The Mythical Man-Month" on his observation that "adding manpower to a late software project makes it later." The S/360 was also the first computer to use microcode to implement many of its machine instructions, as opposed to having all of its machine instructions hard-wired into its circuitry. Despite their age, mainframes are still in wide use today and are behind many of the big information systems that keep the modern world humming, handling such things as airline reservations, cash machine withdrawals and credit card payments. "We don't see mainframes as legacy technology," says Charlie Ewen. "They are resilient, robust and are very cost-effective for some of the work we do.""
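For readers unfamiliar with the term, here is a toy Python sketch of what "microcoded" means: each architectural instruction expands into a sequence of simpler micro-operations run by an inner interpreter, so the visible instruction set can stay stable while the hardware underneath changes. The register names and micro-ops are invented for illustration; this is not IBM's actual design.

    # Toy model of microcoded instruction dispatch (invented, illustrative only).
    regs = {"R1": 5, "R2": 7, "ACC": 0}

    # Micro-operation primitives: the only things the "hardware" does directly.
    def load(reg):   # ACC <- reg
        regs["ACC"] = regs[reg]

    def add(reg):    # ACC <- ACC + reg
        regs["ACC"] += regs[reg]

    def store(reg):  # reg <- ACC
        regs[reg] = regs["ACC"]

    # The microcode store maps one architectural instruction to a micro-op sequence.
    MICROCODE = {
        "ADD R1,R2": [lambda: load("R1"), lambda: add("R2"), lambda: store("R1")],
    }

    def execute(instruction):
        for micro_op in MICROCODE[instruction]:  # the microcode engine
            micro_op()

    execute("ADD R1,R2")
    print(regs["R1"])  # 12

Cheaper 360 models could implement the same instruction set with simpler hardware and more microcode steps, which is one reason the whole family could stay program-compatible.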
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Peter Simpson ( 112887 ) on Monday April 07, 2014 @07:07AM (#46682377)
    Should be required reading for anyone planning to manage a large engineering project. It's full of tips that can save you from significant embarrassment. If you're not managing a software development project, at least make sure your boss reads it. If your boss has *already* read it, he might be worth working for.
    • by Anonymous Coward

      Should be required reading for anyone planning to manage a large engineering project.
      It's full of tips that can save you from significant embarrassment. If you're not managing a software development project, at least make sure your boss reads it. If your boss has *already* read it, he might be worth working for.

      I still have that book (one of my favorites). It was required reading while pursuing my graduate degree in Software Engineering.

      I also have fond memories of working on the IBM 360 iron writing assembly code.

      Without the big horses running in the background, technology and life as we know it would crawl at a snail's pace.

    • by Anonymous Coward

      When I first heard about this book in my CS class I misread the title and thought it was called "The Mythical Man Moth".

      I thought that's gotta be a book worth reading even if it is about project management!

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      I once worked on a project for a big client, and was duly impressed that the manager had a whole case (well, half-empty by the time I saw it) of copies of MMM. He offered me one and was pleased to hear that I already owned a copy. One of the best customers I ever worked with.

      It's a big warning sign when a manager has not only not read Mythical Man-Month, but has no idea what Brooks' Law is.

    • by ebh ( 116526 )

      Definitely. Add to that "Quality is Free" by Crosby, and "Peopleware" by DeMarco and Lister.

    • Maybe good for those starting out in the field, but I read it after a few years in, when I had already been through a few big projects. I didn't find much I didn't really know or suspect already. It's pretty obvious working in the field that adding more people to a project, especially when it's already late, doesn't make things go faster. I think that probably applies to many types of projects, not just software development. I can see how it might be useful for non-technical managers to read it, because
    • I prefer to quote by ManBearPig-month.

  • by Anonymous Coward

    There's little point throwing away decades of refined code just for the sake of it. When it comes to financial systems and the law, the last thing any manager will do is push a platform migration on their watch, no matter how many times Microsoft's reps come in to wine-and-dine those further up the ladder.

    • Re:software (Score:5, Interesting)

      by jfdavis668 ( 1414919 ) on Monday April 07, 2014 @07:20AM (#46682435)
      The problem is finding someone new willing to maintain the software. We have large systems running on big iron. The people who maintain them are getting older and fewer. We struggle trying to get someone new motivated to learn the technology. It's not an issue with the hardware; you can continue to upgrade that. It's finding someone who is willing to work in software that is no longer popular.
      • Re:software (Score:5, Insightful)

        by serviscope_minor ( 664417 ) on Monday April 07, 2014 @07:29AM (#46682493) Journal

        We struggle trying to get someone new motivated to learn the technology.

        I wonder how the banks end up getting people working in banking. After all, it's dull (yeah, the maths in the software is generally not that interesting), high stress and ultimately pointless. I guess they find *some* way of motivating those people.

        Basically, if you're running mainframes, then your business is large enough (heck the individual computers are expensive enough) that you can afford to pay top dollar to motivate some very solid programmers to work for you.

        Offer a good package with benefits that matter in your region (e.g. healthcare and five weeks off in the US; these things are standard elsewhere, so other regions will need other perks), a low-stress, no-overtime working environment (no regular crunches or whatever), a decent work-life balance sort of thing and decent pay, and you will find good people. Oh, and training, too.

        You won't get the youngsters who are happy to burn out on 80-hour weeks for a year, who want to hack the latest cool thing in the latest fad tech with a small chance of becoming a billionaire, but you will get very, very good, experienced and almost certainly older programmers who want a work-life balance. They might have families, hobbies or even just shifted priorities, you see.

        You might have to train them up, but that's not going to cost all that much in the grand scheme of things.

        Basically, if you can't get the people it's because you're not prepared to pay (that includes money, benefits and training).

        • by Anrego ( 830717 ) *

          You have to weigh that cost, and the ongoing cost of that approach, against migrating to something new.

          As pre-canned software becomes more flexible and cheaper, and the talent to tweak it into what you need easier to find, simply tossing out a perfectly functional system starts to make more sense.

          Then again, we've got crap like SAP as a pretty good encouragement to pour more money into that old mainframe and hold off for a few more decades...

          • Re:software (Score:5, Insightful)

            by serviscope_minor ( 664417 ) on Monday April 07, 2014 @08:55AM (#46683109) Journal

            As pre-canned software becomes more flexible and cheaper, and the talent to tweak it into what you need easier to find, simply tossing out a perfectly functional system starts to make more sense.

            True, but the annals of software engineering are littered with examples of hugely expensive failures along those lines. It is possible, but it is almost certainly much more expensive and much more difficult than most people in a position to pay realise. I think part of the problem is that it's basically a wholesale change in one go, which makes it very difficult to have a staged migration of any sort.

            Also, every company is unique, especially those big enough to own their own mainframe. Those are also likely to be old and have baggage. That generally means an "off the shelf" system requires so much customisation it's more like a rewrite from scratch using a large, expensive and probably badly written framework.

            Here there be dragons.

          • by plover ( 150551 )

            You have to weigh that cost, and the ongoing cost of that approach, against migrating to something new.

            As pre-canned software becomes more flexible and cheaper, and the talent to tweak it into what you need easier to find, simply tossing out a perfectly functional system starts to make more sense.

            Your first sentence makes a certain amount of sense, but your second sentence indicates a lack of appreciation for dependencies.

            Most people look at the cost of a system as the price of the servers, the price of the software, the cost of the project to implement them, and the ongoing maintenance contracts. What they often don't consider or fully understand is the impact to people and process. Changing a system means retraining the people who use the current process; changing paper forms, supplies, expectat

            • If SAP is progress, that can only mean that the prior process was done by filling out forms in Klingon using wax crayons writing on saran wrap in an iron foundry.

              Beh. When I started out Roddenberry hadn't been born and iron was in beta, so we stuck to tried and trusted bronze.

        • by jythie ( 914043 )
          Beyond initial pay, though, there is a bigger problem: job prospects. Young programmers often look at jobs in terms of how good it will look on a resume when trying to find their next job, and mainframe jobs are perceived as being resume stains, filled with buzzwords that will get your resume thrown in the bin even at another company using similarly aged technology.
          • Re:software (Score:5, Informative)

            by thoriumbr ( 1152281 ) on Monday April 07, 2014 @08:45AM (#46683029) Homepage
            Looks like you know nothing about mainframes and "aged technology". I work with mainframes. zVM, DASD, DirMAINT, RACF and other buzzwords are in my resume, along with Linux, Java, PHP, XML, jQuery, MariaDB, HTML5, Eclipse and others.
            Mainframes are not aged technology. They are perceived as such by small companies and people. Big companies with big bucks know a lot about mainframes. They know mainframes are the most reliable hardware platform on the market today, and I guess they will stay that way for a couple of years yet, because mainframes were made from the start to be reliable. Other platforms had reliability implanted onto them afterwards; mainframes were designed reliable and resilient.
            Mainframes today run Linux too, not only the "aged mainframe operating systems." And here we have mainframes running hundreds of Linuxes with JBoss. They are about to be orchestrated by OpenStack, so managing all this "aged technology" will be done from brand-new Android and iOS tablets.

            Job prospects in my area, at least for the next decade, are very good. Half the openings in my area are still open, paying an intermediate zVM administrator almost twice what a senior Java programmer or MCSE will receive. And there's nobody applying!
            But if the mainframe job market has a problem, it is lack of people. Mainframes are not user friendly, and youngsters are not likely to devote two or three years to learning something from the grannies, in a very harsh learning environment with a steep learning curve, when all their peers are talking about creating a new app and selling it to Google for a gazillion dollars.
            Peer pressure is a greater force than job prospects. I faced it when I told my peers I was learning the mainframe and everybody laughed at me. Now I earn three times what they do, and I am training some of them to work with me.
            • >>
              But if the mainframe job market have a problem, is lack of people. Mainframes are not user friendly, and youngsters are not likely to devote two or three years learning something from the grannies, on a very harsh learning environment, with a step learning curve, when all their peers are talking about creating a new app and selling to Google for a gazillion dollars.

              There are also the problems that you cannot realistically teach yourself, no classes are offered, and you cannot get experience until you

                But if the mainframe job market has a problem, it is lack of people. Mainframes are not user friendly, and youngsters are not likely to devote two or three years to learning something from the grannies, in a very harsh learning environment with a steep learning curve, when all their peers are talking about creating a new app and selling it to Google for a gazillion dollars.

                Then don't insist on programmers in their 20s. There is a good and continually renewed supply of programmers in their 30s and 40s. The only reas

            • Re:software (Score:4, Interesting)

              by jandersen ( 462034 ) on Monday April 07, 2014 @10:57AM (#46684503)

              Mainframes are not user friendly, and youngsters are not likely to devote two or three years to learning something from the grannies, in a very harsh learning environment with a steep learning curve, when all their peers are talking about creating a new app and selling it to Google for a gazillion dollars.

              Well, that's the problem to solve, then.

              1) Make it less difficult to learn - this is only a matter of investing in producing good teaching material and making it easily available.
              2) Make the idea of mainframes much more appealing. There's loads of stuff in a mainframe, and even in z/OS, that is way cooler than the average PC crap.
              3) Make it legal for people to download and run z/OS etc. on the Hercules emulator for development and study purposes, on a model similar to Oracle's

              People have taught themselves Linux and Windows, not because it is more interesting, really, but because it is much more approachable and within the reach of a tight budget. Which teenager is going to invest tens of millions in a mainframe? Make it free, like Oracle did with their database - it worked for them.

              paying an intermediate zVM administrator almost twice what a senior Java programmer or MCSE will receive.

              Wow, that's good money. Do you program much, or just sit there keeping it running?

            • by Rinikusu ( 28164 )

              Would you mind sharing the geographical region you're referring to?

          • Re:software (Score:5, Interesting)

            by serviscope_minor ( 664417 ) on Monday April 07, 2014 @08:50AM (#46683073) Journal

            Beyond initial pay, though, there is a bigger problem: job prospects. Young programmers often look at jobs in terms of how good it will look on a resume when trying to find their next job, and mainframe jobs are perceived as being resume stains, filled with buzzwords that will get your resume thrown in the bin even at another company using similarly aged technology.

            Part of the problem is targeting young programmers then: companies often do because they're cheap, can be easily bullied into working long hours and don't have a family/life outside work. Older programmers generally demand more pay and less crap, which makes them more expensive.

            The other thing of course is that if you can offer training and/or a mixed job, e.g. 50% on the mainframe, 50% on whatever more modern front end the mainframe connects to, you can also keep your employees' skills current. Quite possibly more expensive, but it may well have hidden benefits to have a programmer with experience and knowledge of the complete system.

            Either way, though, it still comes down to cost.

        • Basically, if you can't get the people it's because you're not prepared to pay (that includes money, benefits and training).

          I agree with the post (just quoted the last part to save space), but I'd also point out that banks are going to have to overpay to get young people interested in learning this. You're trying to get new workers interested in what actually is dying technology. If one day your bank has an epiphany and decides to port everything to Linux, those trained young workers are likely to be out of a job and finding that the number of people who use that old technology is shrinking, not growing. Your bank could get b

          • by bws111 ( 1216812 )

            What computer do you consider 'more modern' than an IBM EC12? What makes you think the technology in mainframes is 'dying'?

            • by bored ( 40072 )

              What makes you think the technology in mainframes is 'dying'?

              Fewer actual machines being installed. No new projects being started on native mainframe tech (new mainframe projects seem to be overwhelmingly Linux/Java/other platform-agnostic technologies). IBM advertises the fact that their "capacity" install numbers are going up every year, but the machines have been getting significantly faster in the last few years as IBM started taking machine performance seriously again, so they bury the bad news.

              • by bws111 ( 1216812 )

                What are you talking about? What the heck is 'native mainframe tech'? z/OS? By that logic, x86 is also 'dying' because servers are moving from Windows to Linux. In 2012 IBM sold more mainframes, as measured in units, capacity, and dollars, than at any point in its history. Over half of the capacity was in the form of 'new workload' engines. In other words, the market grew, it didn't shrink.

                And what do you mean by 'taking performance seriously again'? There has never been a time when they didn't take perfo

                • by bored ( 40072 )

                  What are you talking about? What the heck is 'native mainframe tech'? z/OS?

                  Yes, basically, technology that provides vendor/platform lock for IBM...

                  For example, many of the Java workloads can be migrated to some other platform with relative ease (e.g. POWER). Not so with the huge pile of languages/technologies that exist primarily on the mainframe (JCL, RACF, on and on).

                  In 2012 IBM sold more mainframes, as measured in units, capacity, and dollars, than at any point in its history

                  I would like to see the reference.

                • by bored ( 40072 )

                  It's funny that you cite 2012, 'cause this is one of the first Google hits I get.

                  http://www.reuters.com/article... [reuters.com]

                  With such wonderful quotes as:

                  "Officials with IBM said the company has "thousands" of mainframe customers around the globe but declined to be more specific.

                  Gartner estimates that annual global sales of mainframes will fall this year and each year through 2016, declining a total of 14 percent over the five years to nearly $4.7 billion."

                  I wonder how many of those "thousands" are like us. We have a s

        • Spent most of my career working in banking. No, it's not dull; yes (like every other career) it's stressful; no, it's not pointless. Banking also makes use of much more than mainframes: Tandem, networks and PCs, for instance. Perhaps one of the reasons is the attitude you can see displayed here frequently, that older tech is "bad" and to be disparaged in favour of the newest thing on Rails.
        • by bored ( 40072 )

          Basically, if you can't get the people it's because you're not prepared to pay (that includes money, benefits and training).

          I'm going to second this, because I had a z114 dropped in my lap as part of my current job. I hear about the talent shortage all the time. I even took the time to do some basic research on mainframe pay scales... And let me quote some other guy answering a similar question:

          "why should I learn mainframe tech, when I can make 30% more doing PHP, and I don't have to worry about being sid

        • by khchung ( 462899 )

          Basically, if you're running mainframes, then your business is large enough (heck the individual computers are expensive enough) that you can afford to pay top dollar to motivate some very solid programmers to work for you.

          That's your problem right there. Obviously, the unsaid assumption is that, by someone "new", they really mean someone "cheap".

          There was never any difficulty in finding motivated and smart people when companies were willing to pay. The problem is most companies are NOT willing to pay.

      • Re:software (Score:4, Insightful)

        by K. S. Kyosuke ( 729550 ) on Monday April 07, 2014 @07:32AM (#46682507)
        Use the free market solution: offer a sufficiently high salary!
      • Re: (Score:3, Interesting)

        by Tom ( 822 )

        That's because the software is largely crap. I say that as someone who still learned COBOL and yes, on a mainframe, in university.

        Seldom have I been so glad to forget everything about a programming language as quickly as possible after passing the exam.

        The thing about old systems is that there are some that got lots of things right - Multics ACLs and security still run circles around Unix and giggle about Windows - and some of them were just horribly misguided (like COBOL, the programming language invented

        • by Anonymous Coward

          Business types could understand it. It is far easier to teach programming to an accountant (general ledger, payroll, billing) or engineer (power plant modeling) than to teach a programmer the business/engineering. And we were teaching them PL/I, with pointers and callbacks.

            And we were teaching them PL/I, with pointers and callbacks.

            Not with blackjack and hookers? PL/I has everything in the language. ;-) (Which is probably why it failed to gain any broader acceptance in the first place.)

          • by Tom ( 822 )

            I disagree on that. I've seen plenty of management and business types "do programming" with their Excel and Access scripts and Word macros, or with SQL or JavaScript or whatever else they have available. Their first mistake is using whatever they have available rather than the proper tool for the job.
            The problem with teaching non-techie people programming is that you end up with software that I would've ripped you a new one for back in university, when I was the assistant for the C programming cours

        • (like COBOL, the programming language invented specifically so the business types had the wrong impression they could understand it).

          Oh bullshit. You do realize that there are levels to COBOL beyond one, right? That it's business-oriented was not to sell to bosses, but to streamline development for business applications. Get your head out.

          • by Tom ( 822 )

            There's a difference between writing a language streamlined for a specific purpose and writing a language where a = 1+1; is expressed as ADD 1 TO 1 GIVING A.

            I'm fairly sure I've heard every argument pro-COBOL. I studied this stuff. I've had this discussion a dozen times. I remain unconvinced. :-)

            • I was going to mod you up, as I once had to study COBOL for exams, a long time ago. But then I clicked on your hidden replies and my, oh my. I had to reply instead to say that you really have attracted one of the most virulent trolls that I've ever seen on Slashdot. You should get some kind of flair next to your username or something.

            • Funnily, the ADD 1 TO 1 GIVING A example seems to avoid the usual computer bullshit. Semicolon, what does that mean? a=1+1, is that assignment or a test for equality? And superficially COBOL looks cool. ENVIRONMENT DIVISION, DATA DIVISION etc. look and sound better than that old tired preprocessor crap and free-roaming braces.

              In fact I guess I would like to get vocational training in COBOL and COBOL systems (getting paid while learning it) and then rake in big numbers, showing up at 11 on the job

        • by clintp ( 5169 )

          That's because the software is largely crap. I say that as someone who still learned COBOL and yes, on a mainframe, in university.

          I didn't pull any good lessons out of COBOL decades ago; however, the designs around RPG turn out to be surprisingly useful even today. The basic concepts of headers, details, running totals, nested breaks, subtotals, etc. don't seem to come easily to programmers, and the interfaces to them in modern reporting systems are universally terrible.

          All the while RPG handles this stuff like breathing, in a minimal problems kind of way. Plus the event-loop concept of processing incoming records and calculations is st
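      The control-break pattern clintp describes is easy to show. A minimal Python sketch (the data and field names are invented; RPG's fixed-form cycle is not reproduced here, only the header/detail/subtotal logic):

        # Control-break report: header, detail lines, break subtotals, grand total.
        from itertools import groupby

        sales = [  # assumed pre-sorted on the control field, as the RPG cycle expects
            ("East", "widgets", 100),
            ("East", "gadgets", 250),
            ("West", "widgets", 300),
        ]

        grand_total = 0
        print(f"{'REGION':<8}{'ITEM':<10}{'AMOUNT':>8}")         # report header
        for region, rows in groupby(sales, key=lambda r: r[0]):  # break on region
            subtotal = 0
            for _, item, amount in rows:                         # detail lines
                print(f"{region:<8}{item:<10}{amount:>8}")
                subtotal += amount
            print(f"{'':8}{'subtotal':<10}{subtotal:>8}")        # break total
            grand_total += subtotal
        print(f"{'':8}{'TOTAL':<10}{grand_total:>8}")            # final total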

      • by jacobsm ( 661831 )

        You're 100% correct, but I'll add that it's very difficult to get management to bring in new people and give them the opportunity to learn from people who've had decades of experience in the technology and systems that the business depends on.

        In my case I'm coming up on 36 years' experience in the mainframe world, and I've got no one to teach my skill set to. As for people not wanting to work in a mainframe environment, I've got a few comments that might help change their minds.

        1) The mainframe isn't going awa

      • Really? How much are you willing to offer for this motivation?

    • There's little point throwing away decades of refined code just for the sake of it.

      I agree, but who's saying otherwise?

      • On the contrary, there was lots of reason to suspect old code of being inefficient on new machines.

        Much of that old code used clever techniques, highly rewarded when they were developed, to fit the software to the limitations of those ancient machines. When you have 48 K of core, and that is all you've got, you choose algorithms that can be written in tiny loops that will fit, and you use re-entrant techniques so that the code that is already in place for the date calculation can be re-used to calculate pa

      • Comment removed based on user account deletion
        • This depends on just how far we run with just for the sake of it.

          They both have perfect/near-perfect X11 backward compatibility. Not quite the same as demanding that all that business-critical COBOL be rewritten in Scala.

          (Apparently Ubuntu had hopes [wikipedia.org] to phase out the X11 compatibility, though.)

  • "We don't see mainframes as legacy technology," says Charlie Ewen. "They are resilient, robust and are very cost-effective for some of the work we do."

    Love this kind of talk! Go get 'em, Charlie Ewen!

  • by Hognoxious ( 631665 ) on Monday April 07, 2014 @07:21AM (#46682445) Homepage Journal

    Would've got an FP if I hadn't dropped the card deck.

  • by Anonymous Coward

    Data and processing on a remote computer, accessed via a dumb terminal.

    Yep, that's "cloud computing".

    Everything old is new again.

    • Yup, the cloudies reinvented timesharing (;-))

      What they don't have, however, is a uniform memory architecture. Modern large processors (running AIX, Solaris, etc.) are non-uniform memory (NUMA) machines, with memory on the same board as the CPU being faster than memory on the bus.

      Memory on cloud/array-computing machines is the extreme of NUMA: the "bus" is an ethernet (;-))

      On mainframes, the memory is in the "center" with the CPUs around it in a ring, using a "system controller" (the Honeywell term).
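      To make the "extreme NUMA" point concrete, here is a toy cost model in Python; the latencies are invented round numbers, not measurements of any real machine:

        # Toy model of the NUMA spectrum: same-board memory, cross-bus memory,
        # and the cloud extreme where the "bus" is an Ethernet.
        LATENCY_NS = {
            "local_board": 80,       # memory on the same board as the CPU
            "remote_board": 200,     # memory reached over the backplane bus
            "ethernet_node": 50000,  # another box entirely
        }

        def access_time_ms(n_accesses, placement):
            return n_accesses * LATENCY_NS[placement] / 1e6

        for placement in LATENCY_NS:
            print(f"{placement:>13}: {access_time_ms(1_000_000, placement):8.1f} ms")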

  • I'd estimate that it killed something like ten years of pushing research results into practice (out-of-order execution, largely invented in the 1960s, didn't really catch on "thanks" to S/360 until the 1990s - because it had the unfortunate distinction of having been invented in a non-S/360 project that got cancelled).
    • by serviscope_minor ( 664417 ) on Monday April 07, 2014 @07:37AM (#46682553) Journal

      I'd estimate that it killed something like ten years of pushing research results into practice

      I don't see how. Apparently the CDC6600 was OoO in the 1970s. I think the main problem is that OoO requires a lot of resources.

      I think it took until the '90s because before then there were just not enough on-chip resources to make it worth doing out of order. There were other things that took higher precedence, like wider buses (moving up to 32-bit at the time), things like hardware multiply and divide, wider static issue, floating point in hardware, etc.

      In other words, OoO is only really worth it when your processor is so wide that you can't easily fill all the execution slots with static scheduling.

      • I don't see how. Apparently the CDC6600 was OoO in the 1970s.

        I don't think so. It's my understanding that the CDC6600 handled interlocks but not reorderings. My point wasn't really so much about whether it was possible to widely employ this in the 1970s, but rather that IBM got onto a track that essentially cast the S/360 ISA into stone (an irony, given how much this ISA was actually designed to be microprogrammed), and without that, a project similar to IBM 801 combining its own research with the ACS-1 results probably would have happened earlier. For a long time since t

      • by Required Snark ( 1702878 ) on Monday April 07, 2014 @08:32AM (#46682919)
        The IBM Stretch had an early form of out of order execution. This was in 1959.

        http://people.cs.clemson.edu/~mark/stretch.html [clemson.edu]

        Amdahl discussed his original idea for lookahead with John Backus "two or three times". "And John thought what I had proposed initially, he couldn't do a compiler for. So we went ahead and redid it. And we came out with the thing that was the look-ahead structure of the STRETCH." [p. 71, Norberg]. Amdahl recalls that "principally the look-ahead pre-fetched instructions to see branch instructions early enough so that we could get the succeeding instruction and data for each of the two alternative branch paths"

        The CDC 6600 used a more advanced form in 1964.

        http://en.wikipedia.org/wiki/Out-of-order_execution [wikipedia.org]

        Arguably the first machine to use out-of-order execution was the CDC 6600 (1964), which used a scoreboard to resolve conflicts. In modern usage, such scoreboarding is considered to be in-order execution, not out-of-order execution, since such machines stall on the first RAW (Read After Write) conflict. Strictly speaking, such machines initiate execution in-order, although they may complete execution out-of-order.

        From the same source:

        About three years later, the IBM 360/91 (1966) introduced Tomasulo's algorithm, which made full out-of-order execution possible.
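        The distinction in that quote is easy to sketch. Below is a minimal Python model of scoreboard-style in-order issue: each instruction stalls until its source registers are no longer pending from an earlier write (a RAW hazard), and nothing issues ahead of a stalled instruction. The instruction stream and latencies are invented for illustration.

          # In-order issue with RAW-hazard stalls (scoreboard-style, simplified).
          # Each entry: (destination register, source registers, latency in cycles)
          program = [
              ("R1", ["R2", "R3"], 3),  # long-latency op writing R1
              ("R4", ["R1"], 1),        # RAW on R1: must wait for the op above
              ("R5", ["R6"], 1),        # independent, but still issues in order
          ]

          busy_until = {}  # register -> cycle when its pending write completes
          cycle = 0
          for dest, sources, latency in program:
              # Stall until every source's pending write has completed.
              issue = max([busy_until.get(s, 0) for s in sources] + [cycle])
              busy_until[dest] = issue + latency
              print(f"cycle {issue}: issue op writing {dest}, done at {issue + latency}")
              cycle = issue + 1  # in-order: the next op cannot issue earlier

        Tomasulo's algorithm removes the in-order restriction, letting independent instructions (like the third one here) start while earlier ones wait.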

        • Why do you people keep mentioning Stretch and CDC 6600 and completely ignore ACS? ACS certainly had vastly more to do with the modern notions of high-performance OoO architecture than either Stretch or CDC 6600 ever did. Informative and insightful, my ass.
  • When I arrived at Carnegie-Mellon University in 1968, all programs were running on a Univac 1108, soon to be replaced with a much more powerful IBM 360. In those days every science major learned to code in their freshman year. You would type your program onto punch cards, one instruction per card, then type your data onto cards, and dump the deck into the submission box. Hours later you'd pick up your printout in your (physical) mailbox. Faster turnaround if you submitted at, say, 2 AM. No security at all in those days, so occasionally your program cards would be stolen
    • by Anonymous Coward

      No security at all in those days, so occasionally your program cards would be stolen

      For once, the phrase 'software piracy' is accurate here.

    • My older brother shuffled your deck.

      • My older brother shuffled your deck.

        Smart developers punched a sequence number in columns 73-80. A few passes through the sorter and you're good to go.
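        For anyone who never had the pleasure, the recovery procedure is just a least-significant-digit radix sort. A Python sketch (the card images are invented 80-column strings; a real sorter did one column per pass, which is exactly what the loop mimics):

          import random

          def punch(text, seq):
              """An 80-column card: code in columns 1-72, sequence number in 73-80."""
              return f"{text:<72}{seq:08d}"

          deck = [punch("MVC  OUTPUT,INPUT", 30),
                  punch("START 0", 10),
                  punch("L    R1,=F'1'", 20)]

          random.shuffle(deck)  # the dropped deck

          # Stable sort on columns 80 down to 73 (0-based indexes 79..72),
          # one pass per column, least significant first: an LSD radix sort.
          for col in range(79, 71, -1):
              deck.sort(key=lambda card: card[col])

          for card in deck:
              print(card)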

    • Same here.

      We had a 360/50 that occupied one entire floor of the building.

      Turn in your cards, then wait 12 hours to get your printout and see if it even compiled.

      Basically, if you had a CS problem due in a week, you had 14 chances to write your program and get all the bugs out before it was due.

    • When I was at CMU from 1990-1994 the CS department had a couple of rooms full of old, discarded and no longer used mainframe and mainframe support equipment. We wandered through there once or twice just to see what old computers looked like. I probably saw your IBM 360 surrounded by dozens of big refrigerator-sized reel-to-reel tape machines (there were a lot of those) and didn't even know what I was looking at.

  • Too bad that IBM is long, long gone.

  • Although these days the hardware is System/390 or zSeries, I still log in via TSO and review COBOL code with comments going back to 1980 (and yes, they all have Y2K patches)... The financial industry *never* throws anything away (especially if it's still making them money). Except programmers. Those they throw away.

  • Good Mental Floss (Score:5, Interesting)

    by worker17 ( 2525968 ) on Monday April 07, 2014 @08:56AM (#46683119)
    I started on an IBM 360, doing assembler coding. Still have the IBM books I bought at the college bookstore. I was always amazed how much it felt like coming up from deep-sea diving after a day of coding registers, doing multiplication via shift commands and all the other great little tricks that now seem ancient history. I still find myself comparing manuals based on how well they follow basic IBM rules: you cannot self-reference a term in explaining the term, and the explanation must not reference other terms that are not explained or that cannot be identified as precursors to the term. It was a great machine to learn on.
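    The shift-and-add trick worker17 mentions still works anywhere, of course. A quick Python sketch of multiplication without a multiply instruction (non-negative integers assumed):

      def shift_add_mul(a, b):
          """Multiply by adding shifted copies of a, one per set bit of b."""
          result = 0
          while b:
              if b & 1:      # low bit of the multiplier set: add shifted multiplicand
                  result += a
              a <<= 1        # shift multiplicand left
              b >>= 1        # shift multiplier right
          return result

      assert shift_add_mul(12, 34) == 408  # 12 * 34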
  • ... revere the COBOL, for Holy is the COBOL. Thou shalt take no other language before it ...
  • My second computer was a 360. I began life coding Fortran IV on one of the 360's immediate predecessors, the IBM 1410. At the time, mainframes occupied two distinct categories: "business" machines like the 1410, which organized data as individual 6-bit bytes, and "scientific" mainframes like the 7090 series, which saw data as 32-bit integers and floats. Most programming was done in machine-dependent assemblers, which were totally different on each machine.

    The 360 merged the two styles of computing. Memory wa

    • My second computer was a 360. I began life coding Fortran IV on one of the 360's immediate predecessors, the IBM 1410. At the time, mainframes occupied two distinct categories: "business" machines like the 1410, which organized data as individual 6-bit bytes, and "scientific" mainframes like the 7090 series, which saw data as 32-bit integers and floats.

      36-bit. There was also the 1620, which organized data as 4-bit decimal digits (with an extra flag bit and a parity bit); a character took two digits.

  • Granted, it was not the best standard - x86 with BIOS. But it was a standard countless competing companies did optimize, until the profit level dropped below IBM's tolerance and they sold it to Lenovo.
    • by LWATCDR ( 28044 )

      Not at all. The "one ISA to rule them all" did not last at IBM for long. IBM started to live in fear that the US government would break it up as a monopoly, so it made minicomputers like the System/36 and System/38 that didn't use the 360 ISA, to make breaking up the company easier.
      IBM did make a 16-bit version of the 360, so it could have made the PC into a 360-based computer. IBM even made an IBM PC that would run 360 code using a custom microcoded 68000.
      The PC used the 8088 because that is what w

  • by oldmac31310 ( 1845668 ) on Monday April 07, 2014 @11:13AM (#46684707) Homepage
    Is Slashdot really just getting worser and worser? What the f*#$ kind of grammar is "but IBM was such a large company and the problems of this was getting obvious". And like, they done maked a betterer computer than what they had maked before, I expect. And selled it to alot of they're customers and stuff. Their very clever at /. More cleverer by far than they are rivals. Bollocks
  • I learned this one at Genericon, but it's older than that:

    Music: "The Children's Marching Song"
    Robert Osband, c.1974
    This machine, it played one.
    It pushed start and program run.
    It's an IBM 360/85;
    This computer came alive.

    This machine, it played two.
    Overloaded voltage to the CPU.
    It's an IBM 360/85;
    This computer came alive.

    This machine, it played three.
    Designed its memory to 1 IC.
    It's an IBM 360/85;
    This computer came alive.

    This machine, it played four.
    Changed its logic from AND to OR.
    It's an IBM 360/85;
    This computer came alive.

  • My one regret is I never learned any mainframe technology except from the client end. Over the years of my career, I worked with pretty much every other platform and OS that was available except for the mainframe and AS/400.

    It's not an issue of marketability; I'd still be unemployable due to my migraines and therefore out of work. But it would have been fun to tackle yet another platform.
