
Startup Claims C-code To SoC In 8-16 Weeks

eldavojohn writes "Details are really thin, but the EE Times is reporting that Algotochip claims to be sitting on the 'Holy Grail' of SoC design. From the article: '"We can move your designs from algorithms to chips in as little as eight weeks," said Satish Padmanabhan, CTO and founder of Algotochip, whose EDA tool directly implements digital chips from C-algorithms.' Padmanabhan is the designer of the first superscalar digital signal processor. His company, interestingly enough, claims to provide a service that consists of a 'suite of software tools that interprets a customer's C-code without their having any knowledge of Algotochip's proprietary technology and tools. The resultant GDSII design, from which an EDA system can produce the file that goes to TSMC, and all of its intellectual property are owned completely by the customer—with no licenses required from Algotochip.' This was presented at this year's Globalpress Electronics Summit. Too good to be true? Or can we expect our ANSI C code to be automagically implemented in a SoC in such a short time?"
  • by Ironchew ( 1069966 ) on Monday April 23, 2012 @06:12PM (#39776617)

    "Too good to be true?"

    Perhaps not, if you don't mind patent-encumbered chips with the occasional bug in them.

  • A better question (Score:5, Insightful)

    by wonkey_monkey ( 2592601 ) on Monday April 23, 2012 @06:30PM (#39776775) Homepage

    Or can we expect our ANSI C code to be automagically implemented in a SoC in such a short time?

    How about you tell us what SoC stands for first? Once again, editors, we don't all know everything about everything in the tech world. Some of us come here to learn new things, and you guys don't make it easy. TFS should at least leave me with an impression of whether or not I need to read the TFA.

  • by LurkerXXX ( 667952 ) on Monday April 23, 2012 @06:41PM (#39776865)

    The point is, you shouldn't have to freaking Google to find out what the heck an article is about. The brain-dead submitter, or brain-dead 'editor,' should be clarifying anything that isn't very common everyday tech lingo/acronyms.

  • by erice ( 13380 ) on Monday April 23, 2012 @06:44PM (#39776893) Homepage

    Most SoCs do a lot more than a direct translation of the C-coded algorithm would suggest. I guess if you had a "wrapper" platform that was good enough for many applications, you could streamline the process. My guess is that this platform, and the links to C synthesis, are most of Algotochip's secret sauce.

    C synthesis itself can't handle most programs written in C. Essentially you need to write Verilog in C to make it work. Any dynamic allocation of memory, whether direct or indirect, is a problem. I/O cannot be expected to work.
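    A hedged illustration of the kind of C that trips these tools up versus the kind they can digest (exact rules vary by tool; the function names here are made up):

        #include <stdlib.h>

        /* Typically hopeless for C synthesis: heap allocation sized at runtime.
           There is no hardware analog for malloc. */
        int *make_table(int n) {
            int *t = malloc(n * sizeof *t);
            for (int i = 0; i < n; i++)
                t[i] = i * i;
            return t;
        }

        /* What the tools want: fixed sizes and statically bounded loops,
           which can be unrolled straight into logic. */
        void make_table_hw(int t[16]) {
            for (int i = 0; i < 16; i++)
                t[i] = i * i;
        }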

    So it boils down to: if your C source is uncharacteristically just right and your application fits a pre-defined mold, then you can make it a chip real quick... as long as you don't encounter any problems during place and route or timing closure.

  • Re:Satish? (Score:1, Insightful)

    by Anonymous Coward on Monday April 23, 2012 @07:06PM (#39777065)

    Downmodded. How disingenuous of a site with so many programmers who know firsthand of the shit that comes out of India. They have a completely different culture than the US, and that is the cause of what we perceive as poor workmanship and poor management. Reputation doesn't seem to matter a lot to them. If that's not true, then please explain the apparent lack of quality. They memorize dumps to pass certification exams and then deliver poor product under poor management and get paid poor wages for it. And when pressed, they really genuinely don't seem to give a shit. Why?

    I'm sure the typical Japanese worker considers US workers lazy with an inflated sense of entitlement. And as a US citizen with a job, I'd agree with that assessment compared to the typical Japanese work ethic.

  • by DeadCatX2 ( 950953 ) on Monday April 23, 2012 @09:14PM (#39778139) Journal

    I'm curious, though... how would you convert unsigned to ASCII on chip?

    I think OP's point is that your average C programmer would just start doing all kinds of dividing; most of the time there is very little hardware support for division, and so if you fed this into a C->HDL converter it would generate massive bloat as it imported some special library to handle division.
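    Something like this, say (plain C; the name and the fixed five-digit width are my assumptions):

        #include <stdint.h>

        /* The divide-everywhere version: one '%' and one '/' per digit.
           Each of those drags a divider into the netlist unless the tool
           is smart enough to strength-reduce them away. */
        void u16_to_ascii_div(uint16_t value, char out[5]) {
            for (int i = 4; i >= 0; i--) {
                out[i] = '0' + (value % 10);
                value /= 10;
            }
        }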

    My first brute-force guess would involve a state machine (FSM), a comparator (16-bit), two adders (one 4-bit, one 16-bit), two muxes (16-bit and 4-bit, four-input), a 16-bit register with clock enable and an associated input mux, and four 4-bit registers with clock enable. The FSM would control the 16-bit mux, which selects a constant from four powers of 10 (10,000 down to 10); the output of the mux is connected to the 16-bit adder and the comparator. The other input is the 16-bit register, which also needs a mux for selecting between the argument and the adder's output. This register's output is also a comparator input. The comparator is configured for "less than" and its output goes to the FSM so it can make decisions. The FSM also controls a 4-bit-wide mux which connects the four 4-bit registers representing the various 10s digits (10,000s down to 10s) to an adder whose other input is fixed at 1.

    1) If the number is at least 10,000, then inc the "ten-thousands" digit, subtract 10,000 from the argument, and repeat this step.
    2) Once it is less than 10,000, the state machine walks forward to the thousands digit.
    3) If the number is at least 1,000, inc the thousands digit, subtract 1,000 from the argument, and repeat this step.
    4) Once it is less than 1,000... (you can extrapolate some here) ...
    n) Once the tens digit has been processed, the remaining argument is the ones digit.

    This would give you a series of 4-bit numbers. Once the FSM is done (it's important for it to finish first and change all bits simultaneously, so that downstream logic doesn't see glitches), it would prepend 0x3 to each 4-bit number, turning them into ASCII digits.
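    In C terms, the datapath above computes roughly the following (a software model of the FSM's behavior, not synthesizable RTL; the function name is made up):

        #include <stdint.h>

        /* Repeated subtraction of powers of ten: no division anywhere.
           Each while loop is one FSM state; 'value >= pow10[d]' is the
           comparator, the 16-bit adder does the subtraction, and the
           4-bit adder does digit + 1. */
        void u16_to_ascii_fsm(uint16_t value, char out[5]) {
            static const uint16_t pow10[4] = { 10000, 1000, 100, 10 };
            for (int d = 0; d < 4; d++) {
                uint8_t digit = 0;
                while (value >= pow10[d]) {
                    value -= pow10[d];
                    digit++;
                }
                out[d] = 0x30 | digit;   /* prepend 0x3: nibble -> ASCII */
            }
            out[4] = 0x30 | value;       /* what's left is the ones digit */
        }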

    Note that this approach requires very little in terms of hardware resources, at the expense of requiring a variable amount of time to process its inputs. Consider that 00000 would take 6 clock cycles to produce (you need a cycle to load the input), while 29,999 would require something like 33 clock cycles (there's no need to do subtractions on the ones digit).

    There are other approaches that may be faster in exchange for requiring more hardware. Consider if you had 9 comparators, one for each digit (except 0), and an adder with a 9-input mux; every input would require 6 clock cycles. But this costs an extra 8 comparators (and a significantly bigger mux too); size for speed. (Interestingly, the divider still only gets you 6 clock cycles, and probably takes up many more resources than 9 comparators. But if you could find other work for the divider, then time-sharing might make it worth your while, maybe.) You could even go all the way and use 32,000+ comparators, if fan-out wouldn't spell doom for such an approach, and then you could always calculate every possible value in 1 clock cycle... but this would require MASSIVE resources. Now if you only needed, say, 0 to 1000, that might be slightly less unreasonable (perhaps within fanout limitations, but probably still unreasonably large).
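    The nine-comparator flavor, modeled the same way (again only a sketch; in hardware the nine comparisons per digit would run in parallel rather than in this loop):

        #include <stdint.h>

        /* One comparison against each multiple k * 10^d instead of a
           subtract loop, so every digit takes a constant number of steps. */
        void u16_to_ascii_par(uint16_t value, char out[5]) {
            static const uint16_t pow10[4] = { 10000, 1000, 100, 10 };
            for (int d = 0; d < 4; d++) {
                uint8_t digit = 0;
                for (uint8_t k = 9; k >= 1; k--) {
                    if ((uint32_t)value >= (uint32_t)k * pow10[d]) {
                        digit = k;       /* highest multiple still <= value */
                        break;
                    }
                }
                value -= (uint16_t)(digit * pow10[d]);  /* one subtraction */
                out[d] = 0x30 | digit;
            }
            out[4] = 0x30 | value;
        }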

    OP's point is that a good hardware engineer knows about these tradeoffs and handles them appropriately, while a C programmer isn't trained to think about these issues, and the language doesn't even naturally express the structures it will be mapped onto. Writing the kind of C code you need to properly synthesize what you want feels like saying the alphabet backwards while jumping up and down on one foot, rubbing your belly, and patting your head. And that's if you can even figure out how to tell the C synthesizer that, since your values only go from 0 to 1000, it doesn't need all 16 bits of that unsigned short and could really get away with only 10.

  • by EdIII ( 1114411 ) on Monday April 23, 2012 @10:03PM (#39778399)

    Do we need to start having a basic competency test before letting idiots like this post? Jesus fuck, you newtards are idiots. No wonder CmdrTaco left...

    That's hugely unfair. I figured out what it was based on the context. Hmmmm... SoC.. moving algorithms to chips... might it be System-On-Chip?

    However, there are plenty of articles here about some pretty heavy physics, particle physics, medical advancements, etc. that are well outside of my own field. It would be nice to have some quality journalism where a term or concept is explained in the summary.

    It's not that hard. Another sentence at most. I don't have a problem searching for terms and concepts I don't fully grasp, but it would be nice to have some quality editing again. Seriously... grammar and spelling mistakes are everywhere now, even at mainstream outlets like CNN. Just once I would like the impression that somebody with an English degree was doing actual editing.

  • by savuporo ( 658486 ) on Tuesday April 24, 2012 @12:59AM (#39779141)
    Friggin' SoC has been everyday tech lingo forever. If the crowd here doesn't get it, it just shows how far Slashdot has drifted from an actual geek audience.
  • by Darinbob ( 1142669 ) on Tuesday April 24, 2012 @02:06AM (#39779397)

    Now the snag is trying to find any of these twenty-something coders who know C.
