
Startup Claims C-code To SoC In 8-16 Weeks

eldavojohn writes "Details are really thin, but the EE Times is reporting that Algotochip claims to be sitting on the 'Holy Grail' of SoC design. From the article: '"We can move your designs from algorithms to chips in as little as eight weeks," said Satish Padmanabhan, CTO and founder of Algotochip, whose EDA tool directly implements digital chips from C-algorithms.' Padmanabhan is the designer of the first superscalar digital signal processor. His company, interestingly enough, claims to provide a service that consists of a 'suite of software tools that interprets a customer's C-code without their having any knowledge of Algotochip's proprietary technology and tools. The resultant GDSII design, from which an EDA system can produce the file that goes to TSMC, and all of its intellectual property is owned completely by the customer—with no licenses required from Algotochip.' This was presented at this year's Globalpress Electronics Summit. Too good to be true? Or can we expect our ANSI C code to be automagically implemented in a SoC in such a short time?"
This discussion has been archived. No new comments can be posted.

  • by Ironchew ( 1069966 ) on Monday April 23, 2012 @05:12PM (#39776617)

    "Too good to be true?"

    Perhaps not, if you don't mind patent-encumbered chips with the occasional bug in them.

    • by AK Marc ( 707885 ) on Monday April 23, 2012 @05:16PM (#39776657)
      Well then, fix it with your own open source chip printer. 8-16 weeks? 5 minutes is long enough. Compile, spool, print.
      • by solidraven ( 1633185 ) on Monday April 23, 2012 @11:20PM (#39778991)
        Any mistake in a SoC is expensive, especially if you go directly from design to wafer without extensive testing. Most of the time it's one week of actually writing a description in VHDL or Verilog and then a few weeks or months spent verifying the design and removing any bugs. After you're done verifying, you get to enjoy the trouble of mapping it. And since automation sucks for certain things, especially when the analogue signals are supposed to remain as noise-free as possible, you then get to spend a few days doing the layout and verifying that the layout is actually the same as the netlist.
        While this sounds nice, it's not the first "C to Silicon" program out there (Cadence beat them to it). And it certainly won't be the last. The thing is, the reason to use VHDL and to some extent Verilog is to minimize the occurrence of errors. Even when you verify your design, bugs can still slip through. But due to the overall design of the language, this is far less likely in VHDL.
        • Any mistake in a SoC is expensive, especially if you go directly from design to wafer without extensive testing. Most of the time it's one week of actually writing a description in VHDL or Verilog and then a few weeks or months spent verifying the design and removing any bugs.

          See, this is what's confusing me. Isn't a system-on-a-chip just CPU+GPU+soundset+RAM+flash on one chip? Is there any real hardware to implement? The whole summary makes it sound like they've implemented... a C compiler.....

          • by gtall ( 79522 )

            SoCs these days include a fair amount of FPGA fabric. So it is more likely they are just producing FPGA code. Xilinx has FPGAs with ARM chips on them, or you can use soft PPC processors; not sure if they have soft ARM processors yet, but they are likely not far in the future if they do not have them already.

            Right now, you could produce FPGAs using soft processors to implement C code using Xilinx tools. However, like the GP says, that ain't all there is to it.

            • Aha, so they're taking one monolithic source program, identifying what can be done in software, what standard components are required for the code to function properly, and what non-standard stuff has to be custom designed... and then designing it. ++nontrivial. Very cool tech.
            • While the website of the company does claim that they can also use this technology for FPGA-based designs, their big claim is that they are going from unmodified ANSI C to GDS-II in 8-16 weeks. GDS-II is a file format that specifies the physical implementation of the design and is used in microelectronic foundry flows. GDS-II files are not used in FPGA design flows, although they do have a much more highly abstracted analogue. However it is possible that they are using a reconfigurable fabric that they h

          • "Isn't a system-on-a-chip just CPU+GPU+soundset+RAM+flash on one chip? Is there any real hardware to implement"
            Which CPU, Which GPU? How much pipelining is needed? Where is the best place in the FPU or in the individual multiplier to add a register stage to improve timing? Should you upsize the gate to improve timing, or duplicate the flop and logic to reduce loading? Is your problem caused by too large a gate and you need to shrink it?
            What bus architecture will you use? How do you patch between bus standar

            • Are you designing the individual internals of the SoC components here, such as the CPU? How much pipelining is needed - that's something determined during CPU design, is it not? Same thing on where in the FPU to add a register stage to improve timing. But even if you are just gluing the IP blocks together, the design of the glue logic is itself considerable, given that it has to work within the limitations of the CPU, the memory and all other sub blocks.
          • An SoC to me sounds like a CPU + memory + I/O + whatever glue logic is needed for each of them to talk to each other. The glue logic's part of it would be converting the CPU outputs into inputs for the memory, I/O etc and the memory or I/O outputs into CPU inputs. That's the sort of thing that's achieved using combinational and sequential logic, or in other words, FPGAs. With FPGAs being as sophisticated as they are these days and even including CPU cores within them, it would seem that an FPGA could be
        • Isn't Verilog itself an HDL that's identical in syntax to C? C is not an HDL, so if someone writes a C program and it ultimately ends up as a netlist, how exactly is it different from Verilog? Just that it's not using Verilog's own simulation engines in order to build and run?

          I agree w/ the above - C would have to have some sort of error checking, as well as simulations of both static and behavioral models in order to be an HDL, and then the question would invariably arise - in what way would it be supe

    • by Darinbob ( 1142669 ) on Tuesday April 24, 2012 @01:06AM (#39779397)

      Now the snag is trying to find any of these twenty-something coders who know C.

  • SystemC (Score:5, Informative)

    by paugq ( 443696 ) <pgquilesNO@SPAMelpauer.org> on Monday April 23, 2012 @05:24PM (#39776729) Homepage

    Why not? There is SystemC [wikipedia.org], a dialect of C++ which can be implemented in hardware (FPGA, for instance). What Algotochip is claiming is just one more little step forward.

    • Re:SystemC (Score:5, Informative)

      by JoelDB ( 2033044 ) on Monday April 23, 2012 @05:52PM (#39776947)
      While SystemC does have a synthesizable subset, it's mainly used for simulations at a high level from what I've seen. Going from synthesizable SystemC to hardware is an order of magnitude easier than going from a complex language such as C++ or C down to hardware, which is what this company is claiming. From reading the article, I believe Tensilica [tensilica.com] is using a very similar approach (with ASIPs) for bringing high-level languages to hardware, and they are much more established in this field. One of the up-and-comers is AutoESL [xilinx.com], which was recently acquired by Xilinx. I've played around with this tool and its ability to bring C down to hardware is very impressive.
      • Re:SystemC (Score:5, Informative)

        by jd ( 1658 ) <`imipak' `at' `yahoo.com'> on Monday April 23, 2012 @07:34PM (#39777853) Homepage Journal

        Presumably, though, you could use a source-to-source compiler to convert C (with certain restrictions) into SystemC.* From there, you could do source-to-source compilation to convert SystemC into Verilog or whatever. You'd end up with crappy hardware, but the claim says nothing about design quality, only design capability.

        *The obvious restriction is that you can't translate something for which no translation exists, whether that's a function call or a particular class of solution.

        Going directly from C to hardware without intermediate steps would indeed be a lot harder. But again, that's not what the startup promises. They only promise that they can convert C to hardware; they say nothing about how many steps it takes on their end, only what it seems like from your end.

        Having said that, a direct C to hardware compiler is obviously possible. A CPU plus software is just emulating a pure hardware system with the code directly built into the design. Instead of replacing bits of circuitry, you replace the instructions which say what circuitry is to be emulated. Since an OS is just another emulator, this time of a particular computer architecture, there is nothing to stop you from taking a dedicated embedded computer, compiling the software, OS and CPU architecture, and getting a single chip that performs the same task(s) entirely in hardware -- no "processor" per se at all, a true System on a Chip. Albeit rather more complex than most SoC designs currently going, but hey. There's no fun in the easy.

        Although there are uses for direct-to-hardware compilers, direct-to-FPGA for pure C would seem better. Take hard drives as an example. You can already install firmware, so there's programmable logic there. What if you could upload the Linux VFS plus applicable filesystems as well? You would reduce CPU load at the very least. If the drive also supported DMA rather than relying on the CPU to pull-and-forward, you could reduce bus activity as well. That would benefit a lot of people and be worth a lot of money for the manufacturer.

        This, though, is not worth nearly as much. New hardware isn't designed that often and the number of people designing it is very limited. Faster conversion times won't impact customers, so they won't be a selling point to them, so there's no profit involved. Further, optimizing is still a black art; optimizing C compiled into a hardware description language is simply not going to be as good as hand-coding -- for a long time. Eventually, it'll be comparable, just as C compilers are getting close to hand-tuned assembly, but it took 30-odd years to get there. True, cheaper engineers can be used, but cheaper doesn't mean better. The issues in hardware are not simply issues of logic, and corporations who try to cut corners via C-to-hardware will put their customers through worlds of hurt for at least the next decade to decade and a half.

    • Re:SystemC (Score:4, Informative)

      by wiredlogic ( 135348 ) on Monday April 23, 2012 @06:02PM (#39777027)

      SystemC is a C++ library and simulation kernel. It isn't a dedicated language. The synthesizable subset of SystemC is very limited. Because it's plain C++, you have to implement all low level logic with much more code overhead than the equivalent VHDL or Verilog.

    • Last I heard from people who use it on a daily basis, SystemC's synthesis tools weren't really mature enough to use seriously. It IS great for building a model to test your simulations from a purpose-built HDL against, though.
    • Just because you can use the language to write synthesizable code does not mean all code is synthesizable.
      An easy example is a re-entrant function that in software would be repeatedly called to solve the input. Assuming you call it a different number of times for different inputs, a translation into pure hardware would require an arbitrarily sized piece of hardware. Now you could set a limit on the range of values the hardware can solve for, but it's a tricky problem.
      Likewise software would call new() and get
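
      A minimal sketch of that contrast (illustrative C, not taken from any actual synthesis tool): the recursive version has no fixed depth, so no fixed amount of hardware covers every input, while the loop version always runs 16 iterations and can be unrolled into a fixed circuit.

      /* Unbounded recursion: depth depends on the input value, so a
         C-to-hardware flow cannot size a circuit for it. */
      unsigned count_bits_rec(unsigned short v) {
          if (v == 0)
              return 0;
          return (v & 1) + count_bits_rec(v >> 1);
      }

      /* Bounded loop: always 16 iterations for a 16-bit input, so a
         synthesis tool can unroll it into a fixed adder tree. */
      unsigned count_bits_loop(unsigned short v) {
          unsigned count = 0;
          for (int i = 0; i < 16; i++)
              count += (v >> i) & 1;
          return count;
      }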

  • by Weaselmancer ( 533834 ) on Monday April 23, 2012 @05:29PM (#39776763)

    C code to SoC. [wikipedia.org]

    So, how is this offering from India any different? I could do it in less than 8 to 16 weeks if the customer supplies me the C code to be converted. As in, download/purchase any one of these utilities, run the customer's file through it, and mail it back to them.

    Pretty simple.

  • A better question (Score:5, Insightful)

    by wonkey_monkey ( 2592601 ) on Monday April 23, 2012 @05:30PM (#39776775) Homepage

    Or can we expect our ANSI C code to be automagically implemented in a SoC in such a short time?

    How about you tell us what SoC stands for first? Once again, editors, we don't all know everything about everything in the tech world. Some of us come here to learn new things, and you guys don't make it easy. TFS should at least leave me with an impression of whether or not I need to read the TFA.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      system on a chip

    • Re:A better question (Score:5, Informative)

      by khellendros1984 ( 792761 ) on Monday April 23, 2012 @05:40PM (#39776857) Journal
      That would be "System on a Chip", a term which describes a complete system included on a single chip. An example I've seen used more often would be a phone's central chip; they tend to integrate the CPU, GPU, wireless chipsets, and part or all of the RAM on one chip. In this case, it looks like they're advertising the ability to quickly create a hardware chip that functions the same as an arbitrary chunk of C code; essentially, you can make a hardware chip that implements a specific algorithm.
    • by Anonymous Coward on Monday April 23, 2012 @06:24PM (#39777223)

      Yeah! And what does 'C' stand for?

    • by Belial6 ( 794905 )
      Perhaps you could explain what TFS and TFA mean every time you use them, so that the editors understand what you're asking for....
    • https://www.google.com/search?q=soc [google.com]

      I believe you are looking for result #2.

    • Or can we expect our ANSI C code to be automagically implemented in a SoC in such a short time?

      How about you tell us what SoC stands for first?

      http://lmgtfy.com/?q=SoC [lmgtfy.com]

      Slashdot, where searching for an abbreviation's meaning has become the ultimate technical challenge.

      Once again, editors, we don't all know everything about everything in the tech world.

      News for nerds? Ain't that supposed to mean something?

      Some of us come here to learn new things

      Bro, two words: Google and Wikipedia. And one more word: 2012. You should consider a career/interest change if you don't grasp the meaning conveyed by these three words.

      and you guys don't make it easy.

      Not to be mean, but if you want easy, there is always hamburger flipping (which I did when I was in college) or pants folding at the GAP.

      TFS should at least leave me with an impression of whether or not I need to read the TFA.

      But you can make that

    • by Jeng ( 926980 )

      If you can't understand TFS, then read the TFA; if you still don't understand it, then it should at least leave you with enough information to look up the portions you do not know.

      You learn more looking it up and finding out what you do not know rather than having someone spoon feed you the information.

      So basically if the submitter or editor had included the information then you would actually end up learning less.

  • Marvellous! (Score:5, Interesting)

    by Anonymous Coward on Monday April 23, 2012 @05:31PM (#39776787)

    I'm not entirely clear on how it works though. If I give them this:

    #include <stdio.h>
    int main() {
    printf("Hello world!\n");
    }

    will they convert it into a custom integrated circuit chip with "Hello World!" silkscreened on the top of it, or does the chip actually display "Hello World!" on whatever it is connected to?

    • by gtall ( 79522 )

      You can get chips with little neon signs on the top. So "Hello World" becomes a marquee marching across the top of the chip. In more sophisticated chips, it also synthesizes the words into speech.

  • by Chris Mattern ( 191822 ) on Monday April 23, 2012 @05:41PM (#39776861)

    That's good. You didn't define or even expand SoC, GDSII, or TSMC. That's bad. I'm guessing SoC is "System on Chip" but I have no idea what the other two are.

  • by erice ( 13380 ) on Monday April 23, 2012 @05:44PM (#39776893) Homepage

    Most SoCs do a lot more than a direct translation of the C-coded algorithm would suggest. I guess if you had a "wrapper" platform that was good enough for many applications you could streamline the process. My guess is that this platform and the links to C synthesis are most of Algotochip's secret sauce.

    C synthesis itself can't handle most programs written in C. Essentially you need to write Verilog in C in order to make it work. Any dynamic allocation of memory, whether direct or indirect, is a problem. I/O cannot be expected to work.

    So it boils down to: if your C source is uncharacteristically just right and your application fits a pre-defined mold, then you can make it a chip real quick... as long as you don't encounter any problems during place and route or timing closure...
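
    A rough sketch of what "writing Verilog in C" looks like in practice (hypothetical code, not Algotochip's actual input requirements): everything is statically sized, and the access pattern is essentially a shift register described in C syntax.

    /* Software style: heap allocation with a data-dependent size.
       There is no hardware analogue of malloc, so C synthesis tools
       generally reject or ignore code like this. */
    #include <stdlib.h>
    int *make_buffer(int n) {
        return malloc(n * sizeof(int));
    }

    /* "Verilog in C" style: fixed sizes, static storage, one sample in
       and one result out per call -- effectively an RTL description. */
    #define TAPS 4
    static int delay_line[TAPS];

    int moving_average(int sample) {
        int sum = 0;
        for (int i = TAPS - 1; i > 0; i--)   /* shift register */
            delay_line[i] = delay_line[i - 1];
        delay_line[0] = sample;
        for (int i = 0; i < TAPS; i++)       /* adder tree */
            sum += delay_line[i];
        return sum / TAPS;
    }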

  • The devil is in the details. It isn't a question of whether a hardware device can be manufactured that runs your code; that is provably possible.

    The issue is how cost-efficient the SoC is. How power-efficient. How does it perform? Does it do any more parallelism than a CPU would if you just fed it the compiled code?

    • by tomhath ( 637240 )
      Exactly. Is there really any benefit to burning the program into nanocode ROM over normal compiling into a RISC instruction set? In theory, maybe. Burroughs used to do this a few decades ago and gave up on the idea.
  • Algorithms only work well if they fit well with the hardware they're targeting. You have to make certain assumptions, but depending on what your algorithm is, you should know which things you really need to think about (memory, branching, process communication, disk, ...)

    Algorithms that get synthesised into hardware will only work well if they're written in such a way that lends itself to synthesis. There's going to be a huge heap of stuff that doesn't fit well, or doesn't work at all. Writing things like V

  • by hamster_nz ( 656572 ) on Monday April 23, 2012 @06:03PM (#39777031)

    Most of these 'C' to hardware technologies are overhyped and under-deliver.

    * It is definitely not ANSI C. It might share some syntax elements but that is about it
    * C programmers do not make good hardware designers (C programmers will disagree, HDL programmers won't)
    * The algorithms used in software by software developers do not translate well into hardware
    * If you want "good" hardware developed, use hardware design tools.

    If you don't agree with me on these points, post how you would convert "short unsigned value" into ASCII in char digits[5] and I'll show you how to do the same if you were designing a chip...
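
    For reference, here is a sketch of the straightforward software answer (ordinary C, with a division and a modulo per digit -- cheap on a CPU, but each one becomes a large combinational divider if mapped naively to hardware):

    void u16_to_ascii(unsigned short value, char digits[5]) {
        for (int i = 4; i >= 0; i--) {        /* fill least significant digit first */
            digits[i] = (char)('0' + value % 10);
            value /= 10;
        }
    }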

    • I dunno... I am just a programming hack.

      But... given the underpowered nature of microcontrollers (and logic), I would either use a table of powers of ten, subtracting and counting, or a BCD table of powers of two, along with BCD add and adjust.

      I would probably go for the BCD approach; it guarantees that the job is done in 16 "cycles".

      Is that what you were thinking?
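
      For comparison, here is a sketch of the BCD shift-and-add-3 ("double dabble") approach described above, written in C (the function name is made up; in hardware, each of the 16 loop iterations would be one clock cycle of shift-and-adjust logic, with no divider anywhere):

      void u16_to_ascii_bcd(unsigned short value, char digits[5]) {
          unsigned char bcd[5] = {0};           /* one 4-bit BCD digit per cell */
          for (int bit = 15; bit >= 0; bit--) {
              for (int d = 0; d < 5; d++)       /* adjust: any digit >= 5 gets +3 */
                  if (bcd[d] >= 5)
                      bcd[d] += 3;
              for (int d = 0; d < 4; d++)       /* shift the whole BCD register left */
                  bcd[d] = ((bcd[d] << 1) | (bcd[d + 1] >> 3)) & 0xF;
              bcd[4] = ((bcd[4] << 1) | ((value >> bit) & 1)) & 0xF; /* pull in next input bit */
          }
          for (int d = 0; d < 5; d++)
              digits[d] = (char)('0' + bcd[d]); /* BCD nibbles to ASCII */
      }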

  • Ease of design, power consumption and performance. Pick any two.

    It would be interesting to see how this compares with the work of competent designers with a/d and analog skillz.

  • The real question is how efficient it is.

  • Why not just put the code onto high speed flash that goes on the SoC? Seems a whole lot easier, and I'm not clear why their solution is better. Really, I must be missing something, I'm curious.

    • I don't know the details of this product, but in most cases 'C' to hardware tools are used to optimize the inner-most portion of critical loops.

      One way is to build a custom CPU instruction - for example, a programmable "bitshuffle" for use in crypto.

      Another is to build a custom on-chip peripheral where "my_code(arg1, arg2)" maps to "start arg1 in port X, store arg2 in port Y, read answer from port Z" and the custom logic transforms X and Y into Z. The ports might even have FIFOs allowing many operatio
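
      From the software side, that mapping could look something like this sketch (the register addresses and names are invented for illustration; a real SoC would get them from the vendor's headers):

      #include <stdint.h>

      /* Hypothetical memory-mapped registers of the custom peripheral. */
      #define PORT_X (*(volatile uint32_t *)0x40000000u)  /* operand 1 */
      #define PORT_Y (*(volatile uint32_t *)0x40000004u)  /* operand 2 */
      #define PORT_Z (*(volatile uint32_t *)0x40000008u)  /* result   */

      uint32_t my_code(uint32_t arg1, uint32_t arg2) {
          PORT_X = arg1;   /* start arg1 in port X */
          PORT_Y = arg2;   /* store arg2 in port Y */
          return PORT_Z;   /* read the answer from port Z */
      }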

  • We can finally get a hardware implementation of quake!
  • Those birds are going to be so much more angry now! We're doomed, I tell you -- DOOMED!

  • There are questions regarding performance, area needs, etc. If all they do is compile the C code, put it in ROM, and supply RAM and a CPU to run it, the claim would be easy to fulfill, but the result would suck. Details matter very much here. If they do not give details, it's best to assume the claim is overblown and what they can do is not nearly as good as some people would have it.

  • Hello guys, I've worked with VHDL and C for a while (small company, gotta do the SW and the HW). We are developing a product (an embedded system) and some calculations were taking too much time in software, so instead of using a C program for those calculations we now do them in hardware. I developed a VHDL module responsible for doing FFT calculations that is an order of magnitude faster than the software equivalent, plus it is real-time by default (since it's hardware). Now we have a SoC with FFT hardware attached to the
