


The Real Story of Hacking Together the Commodore C128

szczys writes "Bil Herd was the designer and hardware lead for the Commodore C128. He reminisces about the herculean effort his team undertook to bring the hardware to market in just five months. At the time the company had the resources to roll its own silicon (that's right, custom chips!), but this also meant that for three of those five months they didn't actually have the integrated circuits the computer was based on."
  • Re:Mistake (Score:3, Insightful)

    by fisted ( 2295862 ) on Monday December 09, 2013 @04:25PM (#45642919)

    Want to do a crazy program you can't write on modern computers?


    Simply loop: POKE a random value into a random address, and increment a number that you print.


    Every time, the system will do different things.

    What?

    If you did this on a modern computer, eventually it'd corrupt system files and the thing wouldn't boot.


    It makes you wonder why modern OSes aren't hardened with the principle: no matter what the user does, the computer should still boot up safely next time.

    You're an idiot.
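    The POKE loop being argued about can be sketched as a harmless simulation (Python standing in for Commodore BASIC, with a bytearray standing in for the machine's address space -- a toy model, not real hardware access):

    ```python
    import random

    # Toy stand-in for the C64's 64 KB address space. On the real machine,
    # POKE addr, value wrote straight into this space -- including I/O
    # registers and screen memory -- which is why random pokes produced
    # different behavior on every run.
    memory = bytearray(65536)

    count = 0
    for _ in range(1000):
        addr = random.randrange(65536)   # first random number: the address
        value = random.randrange(256)    # second random number: the value
        memory[addr] = value             # the "POKE"
        count += 1                       # the incrementing number you print

    print(count)
    ```

    On a memory-protected OS the real equivalent would fault or corrupt state, which is the point of the thread above; the simulation just shows the shape of the loop.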

  • by phantomfive ( 622387 ) on Monday December 09, 2013 @05:05PM (#45643321) Journal
    You never know what marketing will do to you as an engineer.

    a couple of weeks later the marketing department in a state of delusional denial put out a press release guaranteeing 100% compatibility with the C64. We debated asking them how they (the Marketing Department) were going to accomplish such a lofty goal but instead settled for getting down to work ourselves.

  • Wrong (Score:4, Insightful)

    by Anonymous Coward on Monday December 09, 2013 @06:36PM (#45644571)

    Since I designed, wire-wrapped, and programmed embedded 6502 and 8080 systems in that era, I am well prepared to assess your claims. In a nutshell: you are an arrogant tard, and the original poster is accurate in spirit, if inexact.

    Your post is really bizarre.

    The 6502 was an amazing processor. The Apple II was also 6502-based. Unlike its near contemporaries, the 8080 and Z-80 (and 6800),

    Ok, with you there.

    Nearly every instruction took a microsecond. Thus while the clock rate was 1 MHz, it was much faster than a 4 MHz 8080-series chip, since those could take multiple cycles to do one instruction.

    Well, that's just a bunch of crap: http://www.obelisk.demon.co.uk/6502/reference.html [demon.co.uk] (look at the "Cycles" column).

    What the original poster was likely saying, since it becomes clear later in the article, was that every 6502 instruction interleaves memory fetches and internal calculation in regular bus cycles with an exact 1 microsecond period. The 8080 series would use 1, 2, 3, 4, or more cycles, plus wait states, per instruction, with no regular (predictable-in-advance) pattern of when the bus would be busy.
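    The cycle counts in the linked reference bear this out. A few representative base counts (taken from standard 6502 datasheets; illustrative, not exhaustive, and ignoring page-cross penalties):

    ```python
    # Base cycle counts for a few 6502 instructions. At 1 MHz, one cycle
    # is one microsecond, so only the 2-cycle instructions come anywhere
    # close to "every instruction took a microsecond".
    CYCLES = {
        "LDA #imm": 2,   # load accumulator, immediate
        "LDA abs":  4,   # load accumulator, absolute
        "STA abs":  4,   # store accumulator, absolute
        "JSR abs":  6,   # jump to subroutine
        "RTS":      6,   # return from subroutine
        "BRK":      7,   # software interrupt
    }

    CLOCK_MHZ = 1.0
    for insn, cycles in CYCLES.items():
        print(f"{insn:10s} {cycles} cycles = {cycles / CLOCK_MHZ:.0f} us at 1 MHz")
    ```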

    So you are wrong, have a reading comprehension problem, and are an ass about it.

    Few memory chips (mainly static memory) could keep pace with that clock rate, so the memory would inject wait states that further slowed the instruction time. The 6502's leisurely microsecond cycle was well matched to memory speeds.

    The Wikipedia article on the 6502 indicates that DRAM access times were on the order of 250-450 ns. In particular, 250 ns access times are well matched to 4 MHz clock rates; do the math. At 1 MHz, 250 ns DRAM has time to go make a sandwich before it needs to supply the next memory cells.
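    Doing that math: the clock period is just the reciprocal of the frequency, so 4 MHz gives exactly 250 ns per cycle and 1 MHz gives 1000 ns.

    ```python
    def period_ns(clock_mhz: float) -> float:
        """Clock period in nanoseconds for a given frequency in MHz."""
        return 1000.0 / clock_mhz

    # 250 ns DRAM is exactly matched to a 4 MHz cycle...
    print(period_ns(4.0))   # 250.0
    # ...and has 750 ns of slack per cycle at 1 MHz.
    print(period_ns(1.0))   # 1000.0
    ```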

    Sigh, again you have a reading comprehension problem. The original author was discussing static memory. Moreover, the cycle time for memory access always involves some overhead: the CPU must read the data bus after the bus has settled, which is not at the start of the memory's data-valid period. But most of all, 250 ns memory was rare and expensive. Most computers in that time period did use wait states. Why do you think processors even allowed wait states?
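    The cost of those wait states is simple arithmetic. A sketch, using the 8080's MOV r,M as an example (7 T-states over two machine cycles, both of which touch memory; the wait-state counts here are illustrative, not taken from the thread):

    ```python
    def effective_instruction_ns(t_states: int, mem_accesses: int,
                                 wait_states_per_access: int,
                                 clock_mhz: float) -> float:
        """Time for one instruction when slow memory forces the CPU to
        insert wait states into each machine cycle that touches memory."""
        cycle_ns = 1000.0 / clock_mhz
        total_cycles = t_states + mem_accesses * wait_states_per_access
        return total_cycles * cycle_ns

    # 8080 MOV r,M: 7 T-states, 2 memory accesses (opcode fetch + operand read).
    # With fast memory and no wait states at 2 MHz:
    print(effective_instruction_ns(7, 2, 0, 2.0))  # 3500.0 ns
    # With one wait state per memory access, as slower memory would force:
    print(effective_instruction_ns(7, 2, 1, 2.0))  # 4500.0 ns
    ```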

    Again you are being an ass about this as well.

    On 8080s using main memory for video you could often see glitches on the display, which would happen when video access was overridden by CPU access at irregular clock cycles.

    No. Then, as now, video display glitches were caused by updating video RAM directly outside of a VSync pulse. You could just as easily get video glitches on 6502s as on 808x machines.

    That was an additional restriction on 8080 machines. But on 6502 machines one did not have to wait for the vertical sync to update the video memory. In fact, that is EXACTLY what the original poster was pointing out, without trying to flaunt jargon like you.

    This makes you look stupid now.

    Which leads us to:

    As a result most 8080-series video systems used a dedicated video card like a CGA or EGA. Hence we had all those ugly character-based graphics with slow video access via I/O in the Intel computer world. In the 6502 world, we had main-memory-mapped graphics.

    Patently false. Video memory on an 808x machine (even on CGA and EGA cards) was most certainly memory mapped.

    Yes, it could be done. But then you had the problem of glitches, or of waiting for VSYNC (or, if you liked to live dangerously, HSYNC). It wasn't pretty to build hardware or write code for. Your interaction with it didn't treat it like main memory but rather so
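    The "wait for VSYNC, then write" discipline the posts are arguing about can be sketched as a simulation. The raster model below is a toy of my own (roughly NTSC-shaped line counts, illustrative only), not any real chip's registers:

    ```python
    # Toy raster model: 262 scanlines per frame, the last 20 in vertical
    # retrace. A real program would poll a status register instead of
    # advancing a counter, but the discipline is the same: only touch
    # video memory while the beam is between frames.
    VISIBLE_LINES = 242
    TOTAL_LINES = 262

    def in_vretrace(scanline: int) -> bool:
        """True while the simulated beam is in vertical retrace."""
        return scanline >= VISIBLE_LINES

    def safe_write(vram: bytearray, scanline: int, addr: int, value: int) -> int:
        """Busy-wait until retrace, then write; return the scanline on
        which the write actually landed."""
        while not in_vretrace(scanline):
            scanline = (scanline + 1) % TOTAL_LINES  # spin until retrace
        vram[addr] = value
        return scanline

    vram = bytearray(16384)
    landed = safe_write(vram, scanline=10, addr=0, value=0x41)
    print(landed, vram[0])  # lands on line 242, the first retrace line
    ```

    The busy-wait is exactly the cost being complained about: every glitch-free write to a bus-contended display pays for it in lost CPU time.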

  • Re:U.S. Navy? (Score:5, Insightful)

    by Dogtanian ( 588974 ) on Monday December 09, 2013 @07:24PM (#45645167) Homepage

    I called it "Commode-odor". I was an Atari fan, but most of my friends had C=64s. A few years later, though, I got an Amiga.

    Assuming you mean the 8-bit Atari 400 and 800 (and their compatible redesigns, the XL and XE series), I did pretty much the same thing: I was an Atari fanboy, but ended up with an Amiga. When one knows a little more about the "Commodore" Amiga and "Atari", it all seems a bit silly.

    The major irony is that the Amiga developers included a number of ex-Atari staff- most significantly Jay Miner- who had worked on the 400/800 and the VCS/2600 before that. It represented (some have argued) a continuous thread of architectural design, one the 400/800 had already significantly advanced from the VCS, and it took the same state-of-the-art custom-chipset approach as its predecessors. (Indeed, just as happened with the 400 and 800, the Amiga was originally meant to be a console before it evolved into a computer.)

    Also worth noting that "Amiga" was originally an independent company and it was only later bought by Commodore (after some legal wrangling with Atari, who'd had some involvement with them).

    Meanwhile, Jack Tramiel had left Commodore (after falling out with the management) and bought Atari Inc's computer and console division (i.e. the one that brought us the VCS and 400/800), which formed his new Atari Corp. The latter was a very different company from Atari Inc. (very obviously a much more shoestring operation). The Atari ST was designed by a different team after Tramiel had sacked most of the old Atari Inc. engineers, and very much reflected the "new" Atari: affordable, but built from much more off-the-shelf parts.

    Atari Corp continued selling the XL and XE (cost-reduced versions of the 400 and 800), but they didn't design them; they merely milked the profits from designs they'd inherited while they focused on *their* Atari ST.

    So... which was really the "true" successor to the Atari 400 and 800? By any measure, it was the "Commodore" Amiga. Who cares who made it? I briefly owned an ST because I couldn't afford an Amiga, but I ended up selling it and buying the latter a year later.
