
Turning the Arduino Uno Into an Apple ][

An anonymous reader writes: To demonstrate how powerful modern computers are compared to their forebears, engineer Damian Peckett decided to approximate an Apple ][ with an Arduino Uno. In this post, he explains how he did it, from emulating the 6502 processor to reinventing how characters were displayed on the screen. "The Apple II used a novel approach for video generation. At the time, most microcomputers used an interlaced frame buffer where adjacent rows were not stored sequentially in memory; this made it easier to generate interlaced video. The Apple II took this approach one step further, using an 8:1 interlacing scheme in which the first line was followed by the ninth line, and so on. This approach allowed Steve Wozniak to avoid read/write collisions with the video memory without additional circuitry. A very smart hack!" Peckett includes code implementations and circuit diagrams.
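
For readers who have never written one, the heart of a CPU emulator like Peckett's is a fetch-decode-execute loop. The C sketch below is not taken from TFA; it is just the general shape such an interpreter takes, with an illustrative memory size and only two of the 151 documented opcodes filled in (a real core also tracks the X, Y, SP, and flag registers and counts cycles):

    #include <stdint.h>

    static uint8_t ram[4096];   /* illustrative size; an Uno-class part has far less SRAM */
    static uint8_t a;           /* accumulator (X, Y, SP, and flags omitted here) */
    static uint16_t pc;         /* program counter */

    static uint8_t read8(uint16_t addr)             { return ram[addr % sizeof ram]; }
    static void    write8(uint16_t addr, uint8_t v) { ram[addr % sizeof ram] = v; }

    static void step(void)      /* fetch, decode, and execute one instruction */
    {
        uint8_t opcode = read8(pc++);
        switch (opcode) {
        case 0xA9:              /* LDA #imm: load accumulator with immediate byte */
            a = read8(pc++);
            break;
        case 0x8D: {            /* STA abs: 16-bit little-endian operand */
            uint16_t addr = read8(pc) | (uint16_t)read8(pc + 1) << 8;
            pc += 2;
            write8(addr, a);
            break;
        }
        default:                /* ...the rest of the instruction set... */
            break;
        }
    }
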
  • A very cheap modern computer is capable of emulating a 38-year-old cheap computer.

    • by marka63 ( 1237718 ) <marka@isc.org> on Tuesday April 07, 2015 @01:55AM (#49420477)

      An Apple II w/ 4K of memory ($1298 at the time) would cost $5236.87 in today's dollars. While this may be a lot less than a lot of computers at the time, I wouldn't call it a cheap computer by any stretch of the imagination.

      • To be fair, he was mostly emulating a MOS 6502 that would cost about $125 in today's dollars.

        • by itzly ( 3699663 )

          Actually, the 6502 only cost $6.95 in today's dollars.

          http://www.mouser.com/ProductD... [mouser.com]

    • by gl4ss ( 559668 )

      well, kind of.

      I mean it's neat, because it's on such a crappy microcontroller.

      the atmel avr used is pretty much an ancient microcontroller today. I mean, it's pretty much slower than the first pc our family had (8 MHz x86 vs 8 MHz atmel, or maybe 16, depends). our first pc also had a video card to take care of the graphics display and so forth...

      sure, if you loaded an emulator on a raspberry pi, I wouldn't give a shit or any props. but for this, yes.

      • Well, the 6502 was the shittiest processor of its generation (which is why the Apple dudes could afford it) so it makes sense to emulate low end with low end.

        Your first PC had a CRT controller (likely a 6845 or derivative). That's a relatively simple LSI chip. People were doing 'cheap video' with software even back in the day (e.g., Lancaster's Cheap Video Cookbook)

      • by sjames ( 1099 )

        The AVR is a fine microcontroller. It isn't meant to be a CPU for a PC.

        • by gl4ss ( 559668 )

          I said that it was ancient, not that it was shitty as such. well, that kind of makes it shitty if you compare it to what is on the market in 2015 - for any application. it's not particularly cheap either, and on the official arduino uno board it's ridiculously expensive for what it is.

          sure, my 3d printer runs on an atmel avr. that's what makes the firmware rather shitty compared to what it could be, and impossible to improve upon; there's no space left on the rom, the ram is exhausted, and there's no empty cycles (96% of home 3

          • by sjames ( 1099 )

            You must have a very different definition of crap than most of us use. It's not ancient either, it is current.

            Perhaps you don't know what a microcontroller is for? Hint: for the purposes a microcontroller is intended for, the home computer would be a miserable choice, then or now.

            • by itzly ( 3699663 )

              It's for sale right now, but it's still ancient. You can get an ARM CPU for roughly the same price, with 10 times the clock, more peripherals, more choice of packages, more memory, and 32 bits instead of 8.

              • by sjames ( 1099 )

                ARM is more expensive. You can get an AVR for $3 quantity 1. ARM also requires more of the board it's connected to and still consumes more power.

                I like ARM and when the capability is needed, I wouldn't hesitate to recommend it but it's not a one size fits all world.

                • No, ARM isn't more expensive. Try, for example, the ST Microelectronics STM32F030R8T6. That's a Cortex-M0 ARM, 48MHz, 64K flash, 8K RAM, 55 I/O pins. $2.22 in quantity one. Reference: http://www.digikey.com/product... [digikey.com] That's just one part I happen to be familiar with; there may be even cheaper ARM alternatives out there.

                  Quantity one price of an ATMega328? $3.25. That's the surface mount version; the DIP is $3.38. Reference: http://www.digikey.com/product... [digikey.com]

                  It's true that if you stay with Atmel, ARM will b

                  • by sjames ( 1099 )

                    ATTINY2313-20PU $1.62 [verical.com].

                    Not as fast and not as powerful, but if that's all you need, why overbuy? If you slow it way down, it comes in at 20 microamps @ 1.8 V.

                    There's nothing wrong with the ARM, it's just not always what is needed.

                    One reason AVR looks more expensive is that it is currently the cool maker choice, so you see a lot of them at vastly inflated prices. One reason it's 'cool' is its ease of use and minimal demands for support circuitry.

                    • Another factor is that the AVR chips are mostly still 5 volt parts. That means that they have to be made with a very out-of-date process and are much larger than current designs. (The processors used in AVR Arduinos can be run all the way down to 2V at reduced performance, but the fact that they allow 5V operation dictates the process used.) All the microcontroller ARM chips that I am familiar with are 3.3 volt chips (that's the maximum, most can also be run at lower voltages, typically down to 1.8V); highe

                    • by sjames ( 1099 )

                      That is a very good point. I'm in a project now that needs low power. It is nice to be able to power sensors off of one of the DIO pins so I can power them down at will without adding to component count. And as you say, the 5V design will be more robust for little effort.

                      I tend to think of < 5V as something you do if you have to, never as a first choice.

            • by gl4ss ( 559668 )

              for the purposes it serves in many arduino projects, it's crap. it's used for all kinds of things it's crap for: realtime motor control, motion planning, audio analysis, you name it. all kinds of stuff it's pretty crappy for, but it's the "standard", so it gets thrown in there.

              the arduino atmels are a) not low power and b) not powerful.

              from what I can see of volume pricing, the atmels' volume pricing isn't that great either.

              it's not intended for any specific application either. it's intended to do any

      • by wed128 ( 722152 )

        Dunno about that... I think the AVR instruction set is probably faster at 8 MHz than an 8086 at the same speed; it's at the very least equivalent. The x86 instruction set really is a mess...

        • by itzly ( 3699663 )

          The x86 instruction set really is a mess...

          Clock for clock, a modern Intel CPU is much faster than an AVR. Who cares if it doesn't look pretty, when it gets the job done?

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      > A very cheap modern computer is capable of emulating a 38-year-old cheap computer.

      Actually, it may contradict common sense, but you're missing part of the "history".

      As technologies evolve and brands consolidate, some old ideas are lost. Things like the mentioned 8:1 interleave, floppy drive skewing schemes, or the way images are generated on vector displays are harder to simulate (though feasible). And even if simulated, not everyone would know how to use them. I'm particularly reminded of ATARI 2600 imag

      • Yeah. Most of the emulators out there use hacks to get them to run smoothly on today's hardware. There's a lot of emulators out there that don't run exactly as the old hardware did. Some emulators like BSNES [emulator-zone.com] actually try to run the machine code exactly the same way the old consoles did, while others take shortcuts and therefore might only work with the most popular games.

        There's even a fair number of N64 emulators that can't play Ocarina of Time properly. There's a part very early in the game where y
        • by Megane ( 129182 )

          Back in the mid '90s, I think everyone was surprised to find that you needed at least a 486DX-25 to emulate the Atari 2600. It was because the 2600 required cycle-accurate timing to emulate it properly. You could, and everyone did, do stuff with the Stella chip (which I call a "1-D" graphics chip) in the middle of a scan line, sometimes abusing its counter registers in interesting ways.

          The N64 was a different beast with emulation. I think the biggest problem was needing a lot of RAM to emulate it properly

  • by Misagon ( 1135 ) on Tuesday April 07, 2015 @02:14AM (#49420509)

    The cool things are that he used an 8-bit AVR microcontroller to emulate the 6502, and that he used a USB chip on the prototyping board to create video...
    Unfortunately, it runs much slower than a 1 MHz 6502.

    It appears that he did his own reverse-engineering of the 6502. One peculiarity that he may have missed is that it has undocumented op-codes, and those do show up in some programs.
    Other people have done much more reverse engineering of the chip, down to the gate level even.
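
    As an example of the kind of undocumented instruction real software leaned on: LAX loads A and X together, and 0xA7 is its zero-page form. A core that only implements the documented set will break programs that use it. A sketch, with helper names assumed from a typical interpreter core:

        #include <stdint.h>

        extern uint8_t read8(uint16_t addr);  /* assumed emulator helpers */
        extern uint16_t pc;
        extern uint8_t a, x;

        void op_lax_zp(void)                  /* LAX zp: undocumented opcode 0xA7 */
        {
            uint8_t v = read8(read8(pc++));   /* operand fetched from zero page */
            a = v;                            /* LAX loads A and X together */
            x = v;
            /* a full core would also set the N and Z flags from v */
        }
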

  • Interlacing? WTF? (Score:4, Interesting)

    by Megane ( 129182 ) on Tuesday April 07, 2015 @02:22AM (#49420523)

    most microcomputers used an interlaced frame buffer where adjacent rows were not stored sequentially in memory

    First of all, I know a lot about micros from the late '70s and early '80s (I was there, maaaan!), and I can't remember a single one other than the Apple II series that didn't display rows sequentially.

    This approach allowed Steve Wozniak to avoid read/write collisions with the video memory without additional circuitry.

    I'm pretty sure the story I heard was that it saved one TTL chip in the counter chain to do it that way, which was just the kind of thing Woz would do.

    Collisions? Exactly what kind of collisions are you talking about? IIRC, the Apple II used interleaved access, where the 6502 would access RAM on every other clock, and the video would access it in between. (This method was also used on the original Macintosh, though the 68000 sometimes needed a wait state.) But that has nothing to do with the funky row counters.

    • by dohzer ( 867770 )

      I don't understand the article's explanation. Anyone care to elaborate?

      • Re:Interlacing? WTF? (Score:5, Informative)

        by msauve ( 701917 ) on Tuesday April 07, 2015 @06:06AM (#49421045)
        Woz designed a video system which very gracefully solved a few problems. For memory, one could choose from static or dynamic RAM. Static was easy to use, but costly. Dynamic RAM was required to be refreshed [wikipedia.org] (accessed every couple of ms), or stored bits would simply be forgotten. But you could get higher capacity chips at lower cost.

        Woz designed the Apple ][ video system so the order it read data from RAM automatically fulfilled the DRAM refresh requirements. And, the video system reads were interleaved with CPU access, so the CPU never had to wait while video or DRAM refresh was happening, as was common with other designs.

        The claim of an 8:1 interlace isn't really correct: the bitmapped memory layout used an 8:8:1 interleave. The first 8 rows were addressed 0x400 apart; that pattern was then repeated 8 times with an offset of 0x80. Details can be Googled. Part of the reason for that is so that DRAM refresh hit every required location often enough.
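
        In C, the usual hi-res address calculation works out as below (reconstructed from the commonly documented layout, not taken from TFA; the function name is mine):

            #include <stdint.h>

            /* Base address of hi-res scan line y (0-191), following
             * the 8:8:1 interleave described above. */
            uint16_t hires_row_base(uint8_t y)
            {
                return 0x2000
                     + (uint16_t)(y & 0x07) * 0x400        /* scan line within character row */
                     + (uint16_t)((y >> 3) & 0x07) * 0x80  /* character row within third */
                     + (uint16_t)(y >> 6) * 0x28;          /* which third of the screen */
            }

        Software of the era typically precomputed this into a 192-entry lookup table, since redoing the shifts and multiplies on a 1 MHz 6502 was slow.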
    • by ljw1004 ( 764174 )

      I'm pretty sure the ZX Spectrum did interleaving with an 8:1 ratio.

      When a game loaded off cassette tape and displayed its loading screen (it took about 10-20 seconds, I think), you'd see pixel rows 0, 8, 16, ... fill in, then pixel rows 1, 9, 17, ...
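
      For reference, the Spectrum's display-file address is usually documented as the formula below (the function name is mine). The swapped middle bits of y are exactly why a loading screen fills in that order:

          #include <stdint.h>

          /* Address of byte column x (0-31) on pixel row y (0-191). */
          uint16_t spectrum_byte_addr(uint8_t y, uint8_t x)
          {
              return 0x4000
                   | (uint16_t)(y & 0xC0) << 5   /* which third of the screen */
                   | (uint16_t)(y & 0x07) << 8   /* scan line within character row */
                   | (uint16_t)(y & 0x38) << 2   /* character row within third */
                   | x;                          /* byte column */
          }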

    • by ljw1004 ( 764174 )

      Ah, here's an article and a great video demonstrating the ZX Spectrum's interlacing:

      http://whatnotandgobbleaduke.b... [blogspot.co.uk]

    • First of all, I know a lot about micros from the late '70s and early '80s (I was there, maaaan!), and I can't remember a single one other than the Apple II series that didn't display rows sequentially.

      The BBC micro outside of Teletext mode didn't. It had a quite wacky scheme which (in Mode 0) went something like this:

      The first byte corresponds to the first 8 pixels of row 1, one bit per pixel. I can't remember which endianness. The second byte corresponds to the first 8 pixels of row 2. And so on up to and

      • Re: (Score:3, Interesting)

        by Megane ( 129182 )
        So... we have two or three minor examples, hardly the "most microcomputers" as stated in the summary. TRS-80 didn't, Atari (with its complex display line modes) let you do whatever you wanted, Commodore didn't, and no big-iron S-100/SS-50 system had non-sequential row mapping. Those were the "most microcomputers" up until the time of the IBM PC in 1981, which also had sequential video row mapping. Also, TMS-9918-based systems and MSX used a video chip with separate RAM, very much sequentially mapped. And ditto for
        • Mate, WTF?

          You said you couldn't think of any computers other than the Apple II which had non-contiguous rows. I provided an example, which I assumed you wanted since you couldn't think of any others. No need to go on the attack about something I'm not disputing (whether it was the majority).

          Anyway the BBC wasn't exactly unique, as it used the 6845 http://en.wikipedia.org/wiki/M... [wikipedia.org], which was also used on other machines. Maybe not the majority, though I've no idea.

          • by Megane ( 129182 )
            The summary said "most microcomputers". Two or three out of one or two dozen isn't "most". And I wasn't "attacking" you, I was summarizing you and the other two or three people who responded as showing TFS being way off the mark.
    • by Mr Z ( 6791 )

      I came here to say pretty much exactly what you did. The funky addressing saved a chip. It's pretty widely documented / known.

      Yes, the video used opposite bus phases from the CPU (and doubled as a refresh counter for the DRAMs), so there were no wait states due to video fetch. But as you point out, that has nothing to do with the Apple ]['s weird video memory map.

    • The C64 used a non-sequential scheme that mirrored its character display.

      8 bytes sequential on most machines means a linear series of pixels on the same scan line.

      On the C64, those bytes got stacked up to form a character, each byte on a sequential scan line, assuming one starts at a character boundary.
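
      In address terms that layout works out to something like the sketch below (the bitmap base here is only an example; the real base depends on which VIC-II bank and offset the program selected):

          #include <stdint.h>

          /* Address of the byte holding pixel (x, y) in C64 hi-res
           * bitmap mode, with x in 0-319 and y in 0-199. */
          uint16_t c64_bitmap_addr(uint16_t x, uint8_t y)
          {
              const uint16_t base = 0x2000;   /* example base, not fixed */
              return base
                   + (uint16_t)(y >> 3) * 320 /* character row: 40 cells x 8 bytes */
                   + (x >> 3) * 8             /* cell within the row */
                   + (y & 7);                 /* scan line within the cell */
          }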

      • by Megane ( 129182 )
        I can see how that could be true and even sensible for a graphics mode on character-oriented display hardware, but the Apple II had that as part of its text mode row counters. IIRC from having seen the screen cleared many times long ago, hi-res did it with the same number of scan lines as text.
        • by hawk ( 1151 )

          for hires, rather than reading the same 40 bytes eight times in a row and feeding them to a character generator, eight different sets of 40 bytes were read (of which seven bits set pixels, and one danced around the colorburst signal. the pixel rate was just at the colorburst signal, so shifting half a bit tickled it and gave a different set of colors. Not just clever, but fiendishly clever)

          hawk

    • by xiox ( 66483 )

      The Amstrad PCW also had a complex memory layout for the screen. It had a "Roller RAM" lookup table for each row, which could be modified in order to achieve fast scrolling of the screen. The memory for a single character was also stored sequentially in memory.

      • by Megane ( 129182 )

        Checking Wikipedia, that was needed because the display was 100% bit-mapped. You just can't push that much data around on an 8-bit processor for vertical scrolling. On a TRS-80 Color Computer, if you used every register possible with PSHS/PULS, scrolling 6K of bitmapped video data 8 pixels at a time was almost tolerable. The Amstrad had 23K.

        And I'm still mad at myself for not buying up the two or three PCWs I saw at thrift stores back in the '90s, because I later found out that it used the same 3" floppy d

        • by hawk ( 1151 )

          And I'm kicking myself for not buying the used Apple ][ in a wooden case at the surplus store around the corner, which I've come to realize wasn't a ][ at all . . . :(

          hawk, who still has his 128k mac and 1802 wirewrap systems

  • Memories of programming with the 6502 instruction set are so delicious that the only thing I can compare them to is my first orgasm.

    I actually first did it on a General Electric GEPAC computer in 1966. It had an almost identical instruction set to the 6502, but with 24-bit words. Hip programmers expressed themselves in octal in those days.

    • by Anonymous Coward
      Memories of programming with the 6502 instruction set are so delicious that the only thing I can compare them to is my first orgasm. I actually first did it on a General Electric GEPAC computer in 1966.

      I assume you were polite and cleaned off the top of the machine when you were done?
    • Memories of programming with the 6502 instruction set are so delicious that the only thing I can compare them to is my first orgasm.

      Especially true when compared to the 8088, which came four years later with its painfully segmented memory access.

  • The Tandy Color Computer used the opposite side of the clock signal to generate video.
  • by pcjunky ( 517872 ) <walterp@cyberstreet.com> on Tuesday April 07, 2015 @08:39AM (#49421947) Homepage

    The video produced by the Apple II is not interlaced at all. Many video devices used to mix and overlay video in studios had trouble with this fact. True, the video memory is not sequential, but that's not the same thing as interlacing. Way back in 1983 I had lunch with Woz and a half dozen or so mostly game developers at the Independent Developers conference. I asked him if he would want to change anything about the design of the II. He said he might add the two chips needed to make the video memory map sequential. Several of us, myself included, said that most of us would still use lookup tables for updating video memory anyway (it was faster) and that it didn't really matter much. In the end he agreed.

    As far as the 6502 being the shittiest processor of its generation, I would have to disagree. True, it has fewer registers and instructions (RISC?) than most even older designs like the 8080, but it did have some unique addressing modes that made it the perfect processor for the graphics the Apple did. This, coupled with the fact that you can use the 256 bytes of zero page much faster and much like processor registers (indexed memory referencing), made it one neat machine.
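
    To make the zero-page point concrete: a zero-page operand is a single byte, so LDA zp runs in 3 cycles where LDA abs takes 4, and the indexed zero-page modes wrap within page zero. In an emulator core it looks roughly like this (the helper names are hypothetical):

        #include <stdint.h>

        extern uint8_t read8(uint16_t addr);  /* assumed emulator helpers */
        extern uint16_t pc;
        extern uint8_t a, x;

        void op_lda_zpx(void)                 /* LDA zp,X: opcode 0xB5 */
        {
            uint8_t zp = read8(pc++);         /* one-byte operand */
            a = read8((uint8_t)(zp + x));     /* index wraps within 0x00-0xFF */
            /* a full core would also set the N and Z flags from a */
        }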

    • by mark-t ( 151149 )

      This, coupled with the fact that you can use the 256 bytes of zero page ...

      Except for the annoying detail that if you wanted to be interoperable with anything that was written in Applesoft Basic and ProDOS, there weren't very many of them to really play around with. [apple2.org.za]. One would usually have to resort to saving most of the entries they wanted to use, and then restoring them upon exit. This works, but I recall it wasn't very amenable to being interrupted with reset. While writing a custom reset handler mitigated some o

      • ProDOS

        ProDOS was a Disk Operating System, not a Language.

        Oh, and I wrote a preemptive RTOS in 6502 assembly for my Apple ][-based Stage Lighting Controller back in 1982, using a hardware "time-slicer" (interrupt generator) running at 2 kHz, long before I knew what an RTOS was.

        And I also wrote a "virtual-memory" system for Applesoft BASIC programs, that used the On-Error GoTo and "Ampersand Hook" to allow a programmer (we didn't call them "Developers" in those days!) to either write a program in "modular" form, or, in

        • by mark-t ( 151149 )

          You did preemptive multitasking on the Apple //? Way cool.... mine was strictly a cooperative multitasker, although it considered waiting for input (either from a remote connection or the keyboard) to be indicative that it was safe for the task requesting input to yield control. I had no hardware clock in my Apple, so I could not do fully preemptive multitasking. As I said, when I was writing it I didn't even know the word 'multitasking' would describe what I was doing... I always described the mechanis

          • You did preemptive multitasking on the Apple //? Way cool.... mine was strictly a cooperative multitasker, although it considered waiting for input (either from a remote connection or the keyboard) to be indicative that it was safe for the task requesting input to yield control. I had no hardware clock in my Apple, so I could not do fully preemptive multitasking.

            Thanks for the props, LOL!

            Looking back on it, it was actually pretty close to a true, modern RTOS, with semaphores and "mailboxes", "task-suspending", and the whole bit. I had 16 "slots" (threads) that could be managed at a time. I called the functions "TaskMaster", IIRC. It was born out of the need to have multiple asynchronous functions, such as crossfades, sequences (which I could even "nest" up to 8 levels deep!), and to manage the CHARACTER-BASED, OVERLAPPING "Windowing" system I created for it as we

      • by Megane ( 129182 )

        I started on the Z-80 and later had a 6809, so I never could find much love for the 6502. But it started a revolution by being designed for high yield, and it initially sold for $20 each in quantity one when the 6800/8080/Z-80 processors were more like $200 each Q1.

        I once got to use an Ohio Scientific Challenger III. It had 3 processors, 6502, 6800, and Z-80, but the people who owned it only ever used the 6502 with a version of Microsoft BASIC. It supported multi-user by having a 48K RAM card for each user at 0000

        • by Dadoo ( 899435 )

          Wow, you don't come across people who've even heard of Ohio Scientific that often, much less actually used one. The first computer I ever used was a C2-OEM, with 8" floppies, and I have a (still working) C4P in my garage.

          • by hawk ( 1151 )

            >Wow, you don't come across people who've even heard of
            >Ohio Scientific that often, much less actually used one.

            *sigh*

            get off my lawn, I suppose. (I just reseeded it anyway)

            hawk, suddenly feeling old

      • by hawk ( 1151 )

        Prodos?

        PRODOS????

        damned newbies . . .

        hawk

  • I wonder how Woz feels about this kind of development. He has a /. account, so if you read this: did you ever think there would be computers powerful enough and people interested enough to implement your brainchild on a credit-card-sized machine with a different architecture within your lifetime? What do you think of the Arduino movement in comparison with the DIY computer movement from our time?

    • I wonder how Woz feels about this kind of development. He has a /. account, so if you read this: did you ever think there would be computers powerful enough and people interested enough to implement your brainchild on a credit-card-sized machine with a different architecture within your lifetime? What do you think of the Arduino movement in comparison with the DIY computer movement from our time?

      Well, considering that Apple pretty much did an Apple-//e-on-a-chip [wikipedia.org] back in 1991, I'd say he'd be rather bemused.

      But supportive, nonetheless...

      • by hawk ( 1151 )

        Earlier than that.

        The Mac IIfx had a pair of chips, each of which effectively contained such a creature. One ran the serial/network ports, and I forget the other.

        Had Apple sold that chip, combined with the network that ran on the second (unused) pair of standard home wiring, they could have *owned* home automation years ahead . . .

        hawk

  • I probably can't find the quote, but I distinctly remember reading an interview with Woz stating (among other things, paraphrasing), "If I had known how popular the Apple ][ was going to be, I would have gone ahead and included the two extra chips it would have taken to make the video memory addressed sequentially."

    Instead, we had BASCALC and HBASCALC calls in the Apple Monitor ROM.

    And we liked it!
  • by blind biker ( 1066130 ) on Tuesday April 07, 2015 @11:43AM (#49423425) Journal

    This was easily the best technical article ever linked in a Slashdot submission.

    I just had to express my amazement. Holy shit, such a deliciously nerdy article...
