Building a 32-Bit, One-Instruction Computer 269
Hugh Pickens writes "The advantages of RISC are well known — simplifying the CPU core by reducing the complexity of the instruction set allows faster speeds, more registers, and pipelining to provide the appearance of single-cycle execution. Al Williams writes in Dr. Dobb's about taking RISC to its logical conclusion by designing a functional computer called One-Der with only a single simple instruction — a 32-bit Transfer Triggered Architecture (TTA) CPU that operates at roughly 10 MIPS. 'When I tell this story in person, people are usually squirming with the inevitable question: What's the one instruction?' writes Williams. 'It turns out there's several ways to construct a single instruction CPU, but the method I had stumbled on does everything via a move instruction (hence the name, "Transfer Triggered Architecture").' The CPU is implemented on a Field Programmable Gate Array (FPGA) device and the prototype works on a 'Spartan 3 Starter Board' with an XS3C1000 device available from Digilent that has the equivalent of about 1,000,000 logic gates, costing between $100 and $200. 'Applications that can benefit from custom instructions in hardware — things like digital signal processing, for example — are ideal for One-Der since you can implement parts of your algorithm in hardware and then easily integrate those parts with the CPU.'"
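The transfer-triggered idea from the summary can be sketched as a toy interpreter: the only instruction is a move, and writing to certain addresses triggers a functional unit. The addresses and unit layout below are illustrative, not the actual One-Der design.

```python
# Toy transfer-triggered machine: the only instruction is MOVE src -> dst.
# Writing to a "trigger" address causes a functional unit to act.
# The addresses here are made up for illustration.

ADD_A, ADD_B, ADD_SUM = 0x100, 0x101, 0x102   # hypothetical adder FU registers

class TTA:
    def __init__(self):
        self.mem = {}

    def read(self, addr):
        return self.mem.get(addr, 0)

    def write(self, addr, value):
        self.mem[addr] = value
        if addr == ADD_B:  # writing operand B triggers the addition
            self.mem[ADD_SUM] = self.mem.get(ADD_A, 0) + value

    def run(self, program):
        for src, dst in program:      # every instruction is just a move
            self.write(dst, self.read(src))

cpu = TTA()
cpu.mem[0], cpu.mem[1] = 2, 40        # plain data locations
cpu.run([(0, ADD_A), (1, ADD_B)])     # two moves compute 2 + 40
print(cpu.read(ADD_SUM))              # -> 42
```

Note there is no opcode anywhere: the "add" exists only as a side effect of which destination address the move targets.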
That instruction is .......... (Score:5, Insightful)
0x2A
That is the ultimate instruction.
HP 1000 (Score:2)
Re: (Score:3, Funny)
Unless of course, the ultimate question really is 'What is 6 times 9?' as some people believe (meaning 42 is base 13 for some unknown reason). Which would of course make the ultimate instruction 0x36.
Re:That instruction is .......... (Score:5, Informative)
Hence the '42 is in base 13' part of my comment. 42(base 13) == 54(base 10) == 36(base 16). Of course, Adams himself denied this was the case... "No one writes jokes in base 13" but after this theory emerged he did work it into some of his later jokes, probably just to keep people wondering.
I'd settle for base 2 (Score:5, Funny)
All this talk about 13th Base makes me jealous, 'cause I've never even got to 2nd Base yet. I'll have to die first and go to heaven before I'll get to 13th Base with a chick.
Re: (Score:3, Funny)
Appropriate that the ultimate instruction would also be a wildcard (*) in ASCII.
And speaking of your drums, on Apple II, it's rotate accumulator left, the ROL instruction.
How curious.
Re: (Score:2)
Wow! It's the same as on a VIC-20, wonder why.
Re: (Score:2)
Fail! That's not even a 32-bit instruction. Everyone knows the ultimate instruction is 0xDEADBEEF!
Re: (Score:3, Funny)
That's the only thing you can get at the 0xFEEDCAFE
Re: (Score:3, Funny)
Is that where they sell 0xBADC0FEE?
Re:That instruction is .......... (Score:5, Funny)
This thread can be categorized as 0xNONEOFTHISISFUNNY
I don't get it.
That's not a valid hexadecimal number.
Re: (Score:2)
Fun fact: All Java class files start with 0xCAFEBABE - I think Gosling had a crush on someone? Ah romantic hex.
Re: (Score:3, Interesting)
It's got only one instruction. ...and the first parameter to that instruction controls what the instruction does with the rest of the parameters.
(p.s. I wish this was just a joke, but this is pretty much what it seems to be doing)
Re: (Score:2)
0x2A
Took me a while to see the Hitchhiker reference.
Very interesting, though, that 0x2A is SUB r8 r/m8 in the x86 instruction set. Isn't a Turing difference machine just subtraction with two read/write heads and a linear medium?
Pardon me for injecting something serious, but... (Score:3, Insightful)
Re: (Score:3, Funny)
You mean 10 buttons?
"ideal for One-Der"? (Score:5, Insightful)
Re: (Score:3, Interesting)
But most FPGAs utilize a CPU core, which is often hard-wired and has ports to access the programmable elements. Assuming the single-instruction CPU runs faster than the common 'standard' CPUs such as PowerPC, then there would be a benefit. The CPU could be smaller (leaving more space for programmable elements) and more easily expanded upon (run additional functions by address rather than by OPCODE).
That's a big 'if', but there's merit in exploring it. The biggest barrier I can think of right now is wit
Re: (Score:3, Interesting)
FPGA is usually the prototype phase.
Actually, this could be implemented as a really small handful of transistors for the actual processor and a ton of various memory-mapped peripherals. Some of them could be really simple old basic logic chips serving as the ALU.
It would mean a simple version for cheap microcontrollers would be really cheap to make, a family of compatible devices of different scale would be possible, and extending/upgrading existing instruction set would be easy too.
The above is not a conflicting statem
nihilist (Score:5, Insightful)
vaguely reminds me of the nihilist language joke. A language that realizes that ultimately all things are futile and irrelevant, thus allowing all instructions to be reduced to a no-op.
Re:nihilist (Score:4, Funny)
... and then it does dead code elimination, right?
He's Building a One-Der, Stop Him (Score:5, Funny)
Re:He's Building a One-Der, Stop Him (Score:5, Funny)
Some of us are still addicted you insensitive clod!
Cheating? (Score:5, Insightful)
Re: (Score:2, Interesting)
Erm, no. The canonical single instruction machine uses "subtract and branch if negative" and that's not considered to be three instructions (subtract, test, branch) but one.
Using memory-mapped facilities to perform operations like addition...now THAT is cheating.
Re: (Score:3, Informative)
Using memory-mapped facilities to perform operations like addition...now THAT is cheating.
Isn't that what it does?
Strikes me that that is just complicating things, insofar as you still effectively have multiple instructions, there is just another semantic level tacked on to hide them.
Re:Cheating? (Score:5, Interesting)
I'd also consider it cheating. I can also invent a one-instruction computer, where the one instruction is a move immediate instruction. The move instruction moves a byte-sized value into a "command register" which does different things depending on the value of the byte you load into it and the current state of the machine. Indeed, since there's just one instruction, and it always has a single one-byte operand, I just don't encode the instruction itself, I just put all the operands into memory, one after another. And I define the state machine so that the actions are exactly the same as the actions of an x86 interpreting those bytes as separate instructions. Therefore I can avoid doing an implementation myself; I can just use a stock x86 processor as proof of concept.
Re: (Score:3, Informative)
So the one instruction is essentially a move command that has multiple modes... Ahem. Isn't that cheating?
Yes, it is cheating. He basically took the instruction bits of the program and said, "Behold, for they are now address bits!" With the caveat that the address bits happen to address INSTRUCTIONS. It's all pretty brain-dead.
Re: (Score:3, Interesting)
I suppose it's cheating. I think it's useful though simply as a backbone for a custom processor, then patch in what you need. You might need an ALU and DSP for a complex project, and an accumulator & bit shifter for a simpler one. This lets you link them to a common bus architecture which could make for easy prototyping.
GOTO ... (Score:5, Funny)
I vote for GOTO as the only instruction.
That would be hilarious.
Cheers
Re: (Score:2)
Actually, it sounds an awful lot like a COME FROM instruction.
http://en.wikipedia.org/wiki/Come_from [wikipedia.org]
Re: (Score:3, Funny)
Well, if we're going with joke operations, I'm changing my vote to HCF [wikipedia.org]. ;-)
Cheers
Re: (Score:2)
COME FROM is a joke, but it really does kind of model what they're doing here. The "one instruction" is a MOVE, but every time you move something to a particular location, some other part of the chip notices and performs operations on it.
Move one number into location A and another into location B, and magically the CPU knows to multiply them and stick it in location C, like sticking a COME FROM at the instruction that stores into B. It's not exactly a COME FROM since control returns to your next instructi
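The COME FROM analogy above can be sketched with a "watcher" registered on an address: a store to a watched location invokes a handler, as if control had come from that store. All of the names and addresses below are made up for illustration.

```python
# Sketch of the COME FROM-style trigger described above: a functional
# unit "notices" stores to a watched address and acts on them.

watchers = {}
mem = {}

def watch(addr, handler):
    watchers[addr] = handler

def store(addr, value):
    mem[addr] = value
    if addr in watchers:
        watchers[addr]()          # the FU fires on the store, COME FROM-style

A, B, C = 0, 1, 2                 # illustrative locations

def multiply():
    mem[C] = mem[A] * mem[B]      # the "magic" multiply into location C

watch(B, multiply)                # firing is tied to stores into B

store(A, 6)
store(B, 7)                       # triggers the multiply
print(mem[C])                     # -> 42
```

Unlike a real COME FROM, control returns to the instruction after the store, which matches the parent's caveat.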
Re: (Score:2)
I think I was in a high school 'comp sci for dummies' class based on that principle. You would be surprised how much qBasic can do with generous use of GOTO.
Well, at least, it seemed like an impressive program at the time. Good thing I don't write code for a living!
Re: (Score:2)
goto:
goto goto
?
Re: (Score:2)
that would be the JMP instruction. =P
Re: (Score:2)
Or B (branch), depending on your architecture.
Cheers
The first language I ever saw was GOTO only (Score:2)
The language of naughty schoolboys was goto-only. However, it never delivered on its promise of naked chicks if you turned to page 69. Some of the programs written in said language were, however, quite humorous and complex. You could implement loops in that language of course, and perhaps even keep an idiot busy for hours. I'm not sure if it was Turing complete though.
I prefer HALT (Score:2)
The only valid program is a single HALT instruction.
Can be a bit tricky to program... (Score:5, Interesting)
Re: (Score:2)
What you say is true, but the most important part, which you leave out, is that managing to write a compiler is crazy difficult for some instruction sets. x86 is not around today because it's technically superior; it's with us because the compilers for better architectures are just too damn hard to write.
the amazing zit shrinking cream (Score:5, Interesting)
x86 is with us because of backwards compatibility. even Intel were unable to shrug it off with Itanium and various other things.
x86 is still with us because is-gross turned out to be 20% is-gross and 80% with-gross. The 20% that actually is-gross has been a minor cross to bear, the other 80% was relegated to traps, microcode, and emulation. The most ridiculous CISC instruction from 1980 is a pimple on a bedbug in silicon area thirty years later. Moore's law: the amazing zit shrinking cream.
you almost need a different compiler for each generation of CPUs
If your compiler doesn't work well on a 486, it's badly broken. Since then, there have been two different approaches by Intel which annoy the compiler gods: the Pentium and Pentium IV which place a premium on low level instruction scheduling, and everything else, starting with the Pentium Pro and including the Core Duo, all non-deterministic data-flow architectures at heart.
The main differences in a good Pentium Pro compiler were a few hazard-aware instruction-order tweaks, mostly focused on the complex/simple/simple instruction decode architecture. Hand tweaking for the Pentium Pro did not offer as much as with other architectures. It was hard to gain complete control for cycle precise scheduling, and the OOO logic did a good job of mitigating dependency chains on the fly: you neither had a large problem to solve, nor much control in solving it.
There's a rumour the trace cache is making a reappearance in Sandy Bridge, so perhaps the pendulum is swinging back to the Pentium/Pentium IV side of the fence.
A long time ago I read some long papers on TTA, around the time Intel went the wrong direction with Itanium (defining bundles as a unit of independent instructions, rather than bundles as units of dependent instructions).
What makes TTA interesting is having many buses, with as many buses utilized on each clock cycle as possible. This guy has not invented an instruction set. He has invented a microcode engine. In doing so, he's muddied the notion of processor state, so there's no abstraction for handling interrupts. The great thing on an FPGA is that you can program around the need for interrupts, if you can devote a small core to each concurrent task.
Real microcode instructions tend to have very long bit vectors, so that multiple buses can be coordinated on the same clock cycles. If you aren't trying to throw maximal resources at a single, dominant task, you can instead have many concurrent execution engines, each with a single function unit bus. This works for some applications.
My feeling about Itanium is that it should have allowed instruction clusters such as complex multiply in a single bundle.
r = ac - bd
i = ad + bc
This requires four inputs from the register file, two outputs to the register file, four multiplications, and two additions. You can find many examples in TAOCP V4F1 of small instruction clusters of this nature. A single eight byte bundle will be hard pressed to encode six arbitrary registers from a 256 register set, but I would argue that you don't need to. Compilers are extremely clever at register colouring, so a clever subset of full generality would prove more than adequate. Hint: invent the compiler and prove this, before committing the design to silicon.
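Written out as straight-line code, the complex-multiply cluster above looks like this (a sketch just to check the operation count, not any real bundle syntax):

```python
# The complex-multiply cluster: four register-file reads (a, b, c, d),
# four multiplies, two adds, two register-file writes.
def cmul(a, b, c, d):
    r = a * c - b * d   # real part
    i = a * d + b * c   # imaginary part
    return r, i

# Cross-check against Python's built-in complex arithmetic:
z = complex(3, 4) * complex(5, 6)
print(cmul(3, 4, 5, 6), (z.real, z.imag))   # both give (-9, 38)
```

The point of the TTA framing is that the two products feeding each add never need to round-trip through the shared register file.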
From a TTA perspective, such a bundle achieves six operations at the expense of just four reads and two writes to the shared register file, with some intermediate results briefly shunted on local sidings. Managing the local sidings introduces some non-determinism from the perspective of the compiler, but nowhere near the scope of OOO shunting overhead in the Pentium Pro.
I think the Itanium design fell victim to ATM logic: determinism at the expense of higher aggregate throughput in the common case. That bet rarely pays off. They tricked themselves into believing they could bet against the grain by shuffling the downside of this fictio
Re: (Score:2)
So... how did you encode what operation the ALU should perform? And wouldn't that then be the ISA? Couldn't you then make a "one-instruction microprocessor" where the only instruction is "move bytes to x86 processor instruction cache"? ;)
Or was each possible ALU operation a different memory-mapped address? Was writing the operands to the addresses what caused the operation, or did you have to write to a "do-it" ?
Not that making such a processor isn't cool. Cus it's cool. Making just about any kind of
Re: (Score:3, Interesting)
Interesting.
First off, your one-instruction CPU, I guess you didn't need to express the instruction in machine code, just the arguments.
Here's the funny question, why not develop an assembler with synthetic instructions, like SPARC v9? It would certainly make it easier to program.
Re: (Score:2)
It seems to me that the distinction between the microprocessor and the ALU is arbitrary. How is this different than a CPU that comprises address load hardware and ALUs?
Wrong part number in summary (Score:5, Insightful)
It's XC3S1000, not XS3C1000. Been working with these parts too long...
So old it's new. (Score:5, Insightful)
Sounds a hell of a lot like the read/write head of the Turing Machine to me.
What's the one instruction? (Score:5, Funny)
Why, DWIW (Do What I Want), of course.
Re:What's the one instruction? (Score:5, Funny)
get me a sandwich is not in the sudoers file. This incident will be reported.
One instruction... (Score:4, Insightful)
... whose first operand is the task to perform. Followed by the necessary operands for that task.
Re:One instruction... (Score:5, Interesting)
... whose first operand is the task to perform. Followed by the necessary operands for that task.
Exactly. It isn't a single instruction computer.
And the idea isn't new.
If a single instruction architecture is designed, then there is only one instruction (duh), and there's no reason to encode that instruction in the instructions themselves. All that will be left is encoding for operands. There's a tempting but brief foray into semantics where you can argue that the first handful of bits in TFA's instruction set are operands to the execution control unit, but that is, in fact, what most would consider defining a set of instructions where each distinct value in that first handful of bits describes more-or-less a distinct instruction. One quickly realizes, however, that there is a fundamental difference between data operands and instruction operands, and, by stating that it is a single instruction architecture, the implication is that there are no instruction operands. Therefore, TFA's architecture is not single instruction.
It's well known that there are universal logic elements (like the two-input NOR gate), and, by extension, you can create single instruction architectures that compute the universal logic element operation on two arguments, writing the results to a third. Instructions in such architectures are just memory locations -- source A, source B and destination. While incredibly simple, such a machine is going to have a very, very low instruction set density. It's an interesting project for intellectual curiosity (like in an introductory graduate level machine architecture course) but hardly worthy of a Slashdot front page mention.
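The NOR-based machine described in the parent can be sketched in a few lines; every instruction is just three memory addresses (source A, source B, destination), and the cell layout here is purely illustrative:

```python
# Minimal sketch of a universal-logic-element machine: each instruction
# is (src_a, src_b, dst), computing dst = NOR(a, b) on one-bit cells.
def run_nor(mem, program):
    for a, b, dst in program:
        mem[dst] = 0 if (mem[a] or mem[b]) else 1
    return mem

# NOT x is NOR(x, x); AND, OR, XOR all follow from NOR's universality.
mem = [1, 0, 0]            # mem[0] holds x = 1
run_nor(mem, [(0, 0, 1)])  # mem[1] = NOT x
print(mem[1])              # -> 0
```

As the parent says, the encoding density is terrible: three full addresses per one-bit operation.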
Not the Ultimate RISC Architecture (Score:2)
That would be the system with no instructions at all!
Re:Not the Ultimate RISC Architecture (Score:5, Interesting)
http://en.wikipedia.org/wiki/Zero_instruction_set_computer [wikipedia.org]
Re: (Score:2)
Cruel of you to spoil the joke.
It's obvious (Score:2)
The instruction is "What is 7 times 9"
Re: (Score:2)
I knew something was wrong.
Memory of this from Engineering School (Score:4, Funny)
AAA AA A A (Score:5, Funny)
Re:AAA AA A A (Score:5, Funny)
Compile error. Instruction "A" missing after "A".
Re:AAA AA A A (Score:5, Funny)
Press the key to continue.
Not new, and not too useful (Score:5, Interesting)
That's an old idea. [wikipedia.org] The classic "one instruction" is "subtract, store, and branch if negative". This works, but the instructions are rather big, since each has both an operand address and a branch address.
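The machine described above is usually written in its three-operand "subleq" form: subtract, store, and branch to the third operand if the result is non-positive. This is a sketch of that common variant (the two-address encoding the parent describes differs in detail):

```python
# One-instruction "subleq" machine: mem[b] -= mem[a]; branch to c if
# the result is <= 0, otherwise fall through to the next triple.
def subleq(mem, pc=0):
    while pc >= 0:                        # a negative target halts
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]                  # subtract and store
        pc = c if mem[b] <= 0 else pc + 3 # branch if negative or zero

# A one-instruction program at address 0: mem[4] -= mem[3], then halt.
mem = [3, 4, -1, 10, 3]
subleq(mem)
print(mem[4])   # -> -7  (3 - 10)
```

As the parent notes, every useful operation (add, shift, call) expands into long sequences of these triples, which is why a macroassembler is the first thing you end up writing.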
Once you have your one instruction, you need a macroassembler, because you're going to be generating long code sequences for simple operations like "call". Then you write the subroutine library, for shifting, multiplication, division, etc.
It's a lose on performance. It's a lose on code density. And the guy needed a 1,000,000 gate FPGA to implement it, which is huge for what he's doing. Chuck Moore's original Forth chip, from 1985 [ultratechnology.com] had less than 4,000 gates, and delivered good performance, with one Forth word executed per clock.
Re: (Score:2)
Let Chuck Norris create 'the instruction'
Re: (Score:2)
Fewer instructions does not always mean that the CPU architecture gets more optimized.
To my knowledge, in terms of gate count this is the most efficient CPU around:
http://www.opencores.org/project,mcpu [opencores.org]
32 Macrocells in a CPLD.
RISC vs CISC - sigh (Score:2, Informative)
if the instruction is NAND (Score:2)
Re: (Score:2)
That was exactly my first thought.
Re: (Score:2)
One thing I don't get is why it hasn't been done. The Wikipedia article on the one instruction computer doesn't list a bitwise AND instruction as being one of the possible instructions for a one instruction computer.
http://en.wiki [wikipedia.org]
"One-der" (Score:4, Insightful)
The hyphen being so everyone doesn't call it "The O-need-er", as in That Thing You Do.
Far better embedded CPUs... (Score:2)
All the FPGA vendors have their own embedded CPU cores, such as the Xilinx Microblaze [wikipedia.org] and the Altera NIOS II [wikipedia.org] which are very small FPGA-embeddable CPU cores.
You also have free options that aren't tied to specific FPGAs like the LEON sparc-compatible processors [wikipedia.org].
Used to work for someone doing this (Score:2)
One command? (Score:3, Interesting)
Reminds me of this old saying,
"Every program can be reduced by one instruction, and every program has at least one bug. Therefore, any program can be reduced to one instruction which doesn't work."
I just wish I knew who came up with it.
According to MS the instruction is (Score:3, Funny)
I think it's misleading to call it 1 instruction (Score:3, Insightful)
Re: (Score:2)
Which just goes to show how shockingly ignorant 'many of us' are.
Now if 'many of us' (brighter than the norm, or so the theory goes) can be so ignorant - why do we laugh at Joe Sixpack?
It's not working (Score:2)
Isn't it cheating? (Score:2)
This is progress? (Score:2)
2009: one million gates, one instruction, RISC, gnarly to program = 10 MIPS.
1984: 200,000 gates, gobs of instructions, CISC, easy to program = 10 MIPS.
We should have more to show for the last twenty-five years in microprocessor design.
Geez, History Repeats Itself (Score:2, Informative)
One instruction 2000 addressing modes. (Score:2)
wonder how many addressing modes there are...
One instruction to rule them all! (Score:2)
1) "KILL ALL HUMANS!"
Great, but... (Score:2)
... is it pronounced "O-knee-der" or "O-ned-der"?
You know, every time it does that thing it does.
Could really crank up the speeds (Score:4, Funny)
Oh my, you'll never believe what I'm about to say (Score:5, Interesting)
A cousin of mine (Howdy Rusty!) described this concept to me in the '70s while I was taking classes toward my CS degree.
A little background: I went to the good old University of Utah which had a Burroughs 1700 with user-writable microcode, and so a lot of projects centered around writing microcode and designing microarchitectures. A friend was trying to code up a single instruction machine based on Curry Combinators. I thought he was nuts, but I liked the idea of a single instruction machine. So, I was talking to my cousin and he described an architecture that had one instruction that was a source and a destination address. Any address could be either memory or a register in a functional unit, an FU for short. No kidding, that is how he described it.
The only trouble was trying to figure out how to do a conditional branch.
A few years later while I was in gradual school I solved that problem and wrote a paper about it. Being a gradual student I could not publish without permission from my adviser. Well, he got a good laugh out of the idea and told me not to show it to anyone. So, of course I sent it to everyone I knew. They all had a good laugh too. Said it was the funniest thing I had ever written. You see, I was into writing humorous stories at the time and people thought this was another one. Oh well, I have a print out of the thing around here somewhere.
What I really liked about the architecture is that if you started modifying it to make it more economical, doing things like making the addresses have different lengths and adding a bit to tell you if the long address is the source or the destination, the move architecture starts looking more and more like a classic instruction set architecture. I thought that was very cool. When you look at micro coded architectures and think about a pure move based processor it really does look like all traditional architectures are attempts to make the one instruction machine make more economical use of instruction bits.
So, how did I solve the conditional branch problem? Pretty much the way this fellow did. Every FU may, or may not, cause condition flags to be set. I added registers where you could read and write the condition bits and read and write the program counter. I also added a mask register that was ANDed with the condition register so you could enable and disable conditions. Then I just made the current instruction conditional on the values of the flags register ANDed with the mask register. If the result was non-zero the current instruction was skipped. Of course, the machine had to clear the condition register after each instruction was executed. (Hmm, it would make more sense to only make moves to the program counter conditional, and it would make more sense to only clear the flags after a move to the instruction counter... Hey, I was a gradual student back then! :) That approach allowed you to select, say, the sign bit from one ALU, do a subtraction by moving values to two registers in the ALU, then jump if the sign bit is set. It also let you directly make any instruction conditional so you could implement something like the ABS() function without any jumps. Or, at least that was the idea.
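That flags-and-mask skip scheme can be sketched as a toy simulator. The flag name, the ALU address, and the way the toy FU raises its condition are all made up for illustration, not the design described above.

```python
SIGN = 0b1      # hypothetical flag bit: ALU input was negative
ALU_IN = 100    # hypothetical address of an ALU input register

class Machine:
    def __init__(self):
        self.mem = {}
        self.flags = 0   # condition register
        self.mask = 0    # enable mask, ANDed with the flags

    def step(self, src, dst):
        skip = bool(self.flags & self.mask)
        self.flags = 0                    # flags clear after every instruction
        if skip:                          # condition met: this move is skipped
            return
        value = self.mem.get(src, 0)
        self.mem[dst] = value
        if dst == ALU_IN and value < 0:   # toy FU raises a condition flag
            self.flags = SIGN

m = Machine()
m.mask = SIGN
m.mem[0] = -5
m.step(0, ALU_IN)   # moving a negative value into the ALU sets SIGN
m.step(0, 1)        # skipped, because flags & mask was non-zero
print(1 in m.mem)   # -> False: the second move never executed
```

Since any move can be skipped this way, a conditional jump is just a conditional move into the program counter, and ABS()-style code needs no jumps at all.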
I called my one instruction: The Conditional Move From Here To There And Clear Flags, or TCMFHTTACF instruction. The assembly for it was really dull, it just always had the same op code down the left hand edge of the screen... Ok, really, I just never listed anything but addresses when I wrote code for it.
Nice to see that someone actually built one of these. BTW, this kind of architecture makes it easy to add multiple execution units. With parallel execution and careful use of shared and private FUs and memories you can build a pretty damn powerful special purpose processor without a lot of hardware complexity.
This is just too damn cool... someone finally built it!
Stonewolf
Re: (Score:2, Informative)
This isn't true. Modern processors are highly RISCy -- they just have front-ends that translate from CISC ISAs. The last genuinely CISC processor was, I believe, the Pentium (non-pro edition).
Re:Ummmm (Score:5, Informative)
Is it just me, or does this sound like RISC fanboyism from the 1990s? The "advantages" of RISC are not nearly so clear these days. Indeed, it is getting rather hard to find real RISC chips. While there are chips based on RISC ISA idea (like being load/store and such), they are not RISC. RISC is about having few instructions and instructions that are simple and only do one thing. Those concepts are pretty much thrown out when you start having SIMD units on the chip and such.
I wouldn't say that's what RISC was about at all; the basic idea was to have only instructions that could be implemented using a few simple pipeline stages. This is a substantial improvement over the microcoded architectures that were prevalent prior to RISC, because it can be much more easily pipelined (or, indeed, pipelined at all). I don't see SIMD as incompatible with RISC in any fashion; it just happens that the instruction operates on very wide data, but it's still a relatively simple instruction that should be able to complete quite quickly.
These days complex processors are the norm. They have special instructions for special things and that seems to work well. RISC is just not very common, even in systems with a RISC heritage.
I'd say it's more the other way around. Even in systems with a CISC ISA (e.g. x86), you tend to find that under the hood the CISC instructions are translated into a series of microops that are then dispatched in a system that is somewhat RISC-like. The most common processor family in the world is the ARM family, and all of those processors subscribe pretty well to the original principles of RISC, from instruction set to internal design of the processor core.
All of these are much more faithful to the principles of RISC than the chip described in TFA, whose instruction performs two memory accesses on each execution -- note that the removal of such instructions and the consequent simplification of the execution pipeline (by having only a single pipeline stage that could access memory) was the original motivation behind RISC architectures.
Re: (Score:2)
Is it just me, or does this sound like RISC fanboyism from the 1990s?
The good thing about fashion is that if you wait long enough, everything will be in vogue again. I'm just waiting for the day when my crates of punch cards will be in demand by everyone. My great grand-children will surely respect me THEN.