
iPhone 5 A6 SoC Teardown: ARM Cores Appear To Be Laid Out By Hand

MrSeb writes "Reverse-engineering company Chipworks has completed its initial microscopic analysis of Apple's new A6 SoC (found in the iPhone 5), and there are some rather interesting findings. First, there's a tri-core GPU — and then there's a custom, hand-made dual-core ARM CPU. Hand-made chips are very rare nowadays, with Chipworks reporting that it hasn't seen a non-Intel hand-made chip for 'years.' The advantage of hand-drawn chips is that they can be more efficient and capable of higher clock speeds — but they take a lot longer (and cost a lot more) to design. Perhaps this is finally the answer to what PA Semi's engineers have been doing at Apple since the company was acquired back in 2008..." Pretty picture of the chip after using an ion beam to remove the casing. The question I have is how it's less expensive (in the long run) to lay a chip out by hand once instead of improving your VLSI layout software forever. NP classification notwithstanding.


  • News For This Nerd (Score:5, Interesting)

    by History's Coming To ( 1059484 ) on Tuesday September 25, 2012 @07:24PM (#41457547) Journal
    Brilliant, this is what I love about Slashdot: I can be the biggest geek in whatever field I pick and I will still get outgeeked! I enjoyed reading the comments above mostly because I have absolutely no idea what the detail is, and I'd never even realised that hand-drawn vs. machine layout was an issue.

    Can anyone supply a concise explanation of the differences and how it's all done? I'm guessing we're talking about people drawing circuits on acetate or similar and then it's scaled down photo-style to produce a mask for the actual chip?

    Yes, I know I can just Google it, and I will, but as the question came up here I thought I'd add something to a real conversation; it beats a pointless click of some vague "like" button any day :)
  • by Wierdy1024 ( 902573 ) on Tuesday September 25, 2012 @08:36PM (#41458255)

    When someone buys a design from ARM, they buy one of two things:

    1. A hard macro block. This is like an MS Paint version of a CPU; it looks just like the photos here. The CPU has been laid out partially by hand by ARM engineers. The buyer must use it exactly as supplied; changing it would be nigh-on impossible. In the software world, it's the equivalent of shipping an exe file.

    2. Source code. This can be synthesized ("compiled") by the buyer. Most buyers make minor changes, like adjusting the memory controller or caches, or adding custom FPU-like things, and then compile it themselves. Most use standard synthesis and place-and-route tools rather than hand layout, and performance is therefore lower.

    The article's assertion that hand layout hasn't been done for years outside Intel is, as far as I know, codswallop. Elements of hand layout, from gate design to designing memory cells and cache blocks, have been present in ARM hard blocks since the very first ARM processors. Go look in the lobby at ARM HQ in Cambridge, UK, and you can see the meticulous hand layout of their first CPU; it's so simple you can see every wire!

    Apple has probably collaborated with ARM to get a hand layout done with Apple's chosen modifications. I can't see anything new or innovative here.

    Evidence: http://www.arm.com/images/A9-osprey-hres.jpg [arm.com] (this is a layout for an ARM Cortex A9)

  • Huh? (Score:5, Interesting)

    by Panaflex ( 13191 ) <<convivialdingo> <at> <yahoo.com>> on Tuesday September 25, 2012 @11:01PM (#41459551)

    Not surprising at all, as PA Semi was founded by Daniel W. Dobberpuhl.

    Daniel Dobberpuhl had a hand in the StrongARM and DEC Alpha designs, both hand-drawn cores which to this day command some respect in chip-design circles, I'm told.

    Anyway,

  • by the_humeister ( 922869 ) on Wednesday September 26, 2012 @12:43AM (#41460113)

    Quoting the parent post: "It would probably be next to impossible to write an entire modern operating system, web browser, or word processor in assembly language."

    Here you go [menuetos.net]. It's pretty impressive for something written entirely in assembly.

  • by Theovon ( 109752 ) on Wednesday September 26, 2012 @12:53PM (#41465603)

    I'm a chip designer too (although probably not as good as you are), and one thing I wanted to mention for the benefit of others is that in today's chips, circuit delays are dominated by wires. Delay used to be dominated by the transistors, but today a long interconnect in your circuit is something to avoid at all costs. So careful layout of transistors and careful arrangement of interconnects is of paramount importance. Automatic layout tools use AI techniques like simulated annealing to take a poorly laid-out circuit and try to improve it, but even now they're still poor at doing placement while taking routing delays into account. Placement and routing used to be done in two steps, but placement can have a huge effect on possible routing, which dominates circuit delay. Automatic routers try to do their jobs without a whole lot of high-level knowledge about the circuit, while a human can be a lot more intelligent about it, laying out a gate's transistors with a better understanding of the wires that will be required for it, along with the wires for gates not yet laid out.
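
    As a toy illustration of the simulated-annealing placement idea just described (not the poster's code; the netlist, cost function, and cooling schedule below are invented purely for demonstration), a minimal sketch swaps two cells at a time and accepts the occasional worse placement with a temperature-dependent probability:

        # Toy standard-cell placement by simulated annealing (illustrative only).
        # Cells sit on a small grid; the cost is total half-perimeter wirelength
        # (HPWL) over a handful of made-up two-pin nets.
        import math
        import random

        CELLS = ["a", "b", "c", "d", "e", "f"]
        NETS = [("a", "b"), ("b", "c"), ("a", "d"), ("d", "e"), ("e", "f"), ("c", "f")]
        GRID = 4  # 4x4 grid of legal sites

        def hpwl(placement):
            """Sum of the bounding-box half-perimeters of all nets."""
            total = 0
            for net in NETS:
                xs = [placement[c][0] for c in net]
                ys = [placement[c][1] for c in net]
                total += (max(xs) - min(xs)) + (max(ys) - min(ys))
            return total

        def anneal(seed=0, steps=20000, t0=5.0, cooling=0.9995):
            rng = random.Random(seed)
            sites = rng.sample([(x, y) for x in range(GRID) for y in range(GRID)], len(CELLS))
            placement = dict(zip(CELLS, sites))
            cost, t = hpwl(placement), t0
            for _ in range(steps):
                a, b = rng.sample(CELLS, 2)  # propose a move: swap two cells
                placement[a], placement[b] = placement[b], placement[a]
                new_cost = hpwl(placement)
                delta = new_cost - cost
                if delta <= 0 or rng.random() < math.exp(-delta / t):
                    cost = new_cost  # accept (always if better, sometimes if worse)
                else:
                    placement[a], placement[b] = placement[b], placement[a]  # undo the swap
                t *= cooling  # cool down, making uphill moves rarer over time
            return placement, cost

        final, wirelength = anneal()
        print("final wirelength:", wirelength, final)

    Real placers are of course far more sophisticated (timing-driven, routability-aware, hierarchical), which is exactly the gap between the tools and a careful human that the comment describes.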

    Circuit layout is an NP-hard problem, meaning that even if you happened to have the optimal layout, there would be no simple way to confirm that it is optimal. There is no known efficient way for a computer to solve the problem exactly, so tools fall back on heuristics. Until we either find that P = NP or find a way to capture human intelligence in a computer, circuit layout is precisely the sort of thing that humans will be better at than computers.

    Compilers for software are a different matter. While some aspects of compiling are NP-complete (e.g. register coloring), many optimizations that a compiler handles better are very localized (like instruction scheduling), making it feasible to consider a few hundred distinct instruction sequences, if that's even necessary. Mostly, where compilers beat humans is when it comes to keeping track of countless details. For instance, with static instruction scheduling, if you know something about the microarchitecture of the CPU that informs you about when instruction results will be available, then you won't schedule instructions to execute before their inputs are available (or else you'll get stalls). This is the sort of mind-numbing stuff that you WANT the computer to take care of for you. Compilers HAVE been getting a lot more sophisticated, offering higher-level optimizations, but in many ways, what the compiler has to work with is very bottom-up. You can get better results if the human programmer organizes his algorithms with knowledge of factors that affect performance (cache sizes, etc.). There is only so much whole-program optimization can do with bad algorithms.
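
    To make the static-scheduling point concrete, here is a small sketch (my own illustration, with invented opcodes and latencies, not taken from any real compiler) of naive in-order issue that stalls whenever an instruction's inputs aren't ready yet:

        # Toy in-order issue model (illustrative; opcodes and latencies are invented).
        # An instruction may not issue until every source register's producer has
        # finished; until then the pipeline stalls.

        PROGRAM = [            # (opcode, destination, sources)
            ("load", "r1", []),
            ("load", "r2", []),
            ("mul",  "r3", ["r1", "r2"]),
            ("add",  "r4", ["r3", "r1"]),
            ("add",  "r5", ["r2", "r2"]),  # independent: could be hoisted to hide stalls
        ]
        LATENCY = {"load": 3, "mul": 2, "add": 1}

        def issue_in_order(program):
            ready = {}   # register -> cycle at which its value becomes available
            cycle = 0
            for op, dest, srcs in program:
                start = max([cycle] + [ready[s] for s in srcs])  # stall until inputs are ready
                if start > cycle:
                    print(f"  stalled {start - cycle} cycle(s) waiting on {srcs}")
                print(f"cycle {start}: {op} {dest}")
                ready[dest] = start + LATENCY[op]
                cycle = start + 1
            return cycle

        print("finished issuing after", issue_in_order(PROGRAM), "cycles")

    A static scheduler's job is to move independent instructions (here, the final add) into those stall slots; tracking which result is ready on which cycle is exactly the mind-numbing bookkeeping the comment says you want the compiler to handle.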

    Interestingly, at near-threshold voltages (an increasingly popular power-saving technique), circuit delay once again becomes dominated by transistors. When you lower the supply voltage, signal propagation in wires slows down, but transistors (in static CMOS at least) slow down a hell of a lot more.
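
    A back-of-the-envelope way to see why (my addition, using the standard alpha-power MOSFET delay model rather than anything in the comment above): gate delay scales roughly as

        $$ t_{\text{gate}} \;\propto\; \frac{C_L\, V_{DD}}{(V_{DD} - V_{th})^{\alpha}}, \qquad 1 \le \alpha \le 2, $$

    while a wire's delay is set by its RC product, $t_{\text{wire}} \propto R_{\text{wire}} C_{\text{wire}}$, which does not depend on the supply voltage. As $V_{DD}$ is lowered toward $V_{th}$, the denominator collapses and gate delay blows up, so the transistors dominate total delay again.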
