Startup Claims C-code To SoC In 8-16 Weeks
eldavojohn writes "Details are really thin, but the EE Times is reporting that Algotochip claims to be sitting on the 'Holy Grail' of SoC design. From the article: '"We can move your designs from algorithms to chips in as little as eight weeks," said Satish Padmanabhan, CTO and founder of Algotochip, whose EDA tool directly implements digital chips from C-algorithms.' Padmanabhan is the designer of the first superscalar digital signal processor. His company, interestingly enough, claims to provide a service that consists of a 'suite of software tools that interprets a customer's C-code without their having any knowledge of Algotochip's proprietary technology and tools. The resultant GDSII design, from which an EDA system can produce the file that goes to TSMC, and all of its intellectual property is owned completely by the customer—with no licenses required from Algotochip.' This was presented at this year's Globalpress Electronics Summit. Too good to be true? Or can we expect our ANSI C code to be automagically implemented in a SoC in such a short time?"
"Too good to be true?" (Score:4, Insightful)
"Too good to be true?"
Perhaps not, if you don't mind patent-encumbered chips with the occasional bug in them.
Re:"Too good to be true?" (Score:5, Funny)
Re:"Too good to be true?" (Score:5, Informative)
While this sounds nice, it's not the first "C to Silicon" program out there (Cadence beat them to it), and it certainly won't be the last. The thing is, the reason to use VHDL, and to some extent Verilog, is to minimize the occurrence of errors. Even when you verify your design, bugs can still slip through, but due to the overall design of the language this is far less likely in VHDL.
Re: (Score:2)
Any mistake in a SoC is expensive, especially if you go directly from design to wafer without extensive testing. Most of the time it's one week of actually writing the description in VHDL or Verilog, then a few weeks or months of verifying the design and removing bugs.
See, this is what's confusing me. Isn't a system-on-a-chip just CPU+GPU+soundset+RAM+flash on one chip? Is there any real hardware to implement? The whole summary makes it sound like they've implemented... a C compiler.....
Re: (Score:2)
SoCs these days include a fair amount of FPGA fabric, so it is more likely they are just producing FPGA code. Xilinx has FPGAs with ARM chips on them, or you can use soft PPC processors; not sure if they have soft ARM processors yet, but they are likely not far in the future if they do not have them already.
Right now, you could produce FPGAs using soft processors to implement C code using Xilinx tools. However, like the gp says, that ain't all there is to it.
Re: (Score:2)
Re: (Score:3)
While the website of the company does claim that they can also use this technology for FPGA-based designs, their big claim is that they are going from unmodified ANSI C to GDS-II in 8-16 weeks. GDS-II is a file format that specifies the physical implementation of the design and is used in microelectronic foundry flows. GDS-II files are not used in FPGA design flows, although they do have a much more highly abstracted analogue. However, it is possible that they are using a reconfigurable fabric that they h
Re: (Score:3)
"Isn't a system-on-a-chip just CPU+GPU+soundset+RAM+flash on one chip? Is there any real hardware to implement"
Which CPU? Which GPU? How much pipelining is needed? Where is the best place, in the FPU or in the individual multiplier, to add a register stage to improve timing? Should you upsize the gate to improve timing, or duplicate the flop and logic to reduce loading? Is your problem caused by too large a gate, so that you need to shrink it?
What bus architecture will you use? How do you patch between bus standar
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Isn't Verilog itself an HDL that's very similar in syntax to C? C is not an HDL, so if someone writes a C program and it ultimately ends up as a netlist, how exactly is it different from Verilog? Just that it's not using Verilog's own simulation engines in order to build and run?
I agree w/ the above - C would have to have some sort of error checking, as well as simulations of both static and behavioral models in order to be an HDL, and then the question would invariably arise - in what way would it be supe
Re:"Too good to be true?" (Score:4, Insightful)
Now the snag is trying to find any of these twenty-something coders who know C.
Re: (Score:2, Interesting)
What if .. (Score:2)
What if Microsoft decides to compile their new Windows 8 into a SoC?
Re: (Score:3, Funny)
Then you could have a BSOD in hardware. Or in Windows 8, Multiple Coloured Squares of Death.
Re: (Score:2)
I'd love to see them malloc a new piece of hardware.
Re: (Score:2)
Re: (Score:2)
step one: compile gnu code licensed under gpl3
step two: watch the lawsuits ensue while gnu demands the blueprint.
or
order a chip of qemu and get your own reverse-engineered set of processors; see how many companies sue their asses off.
Re: (Score:2)
step one: compile gnu code licensed under gpl3
step two: watch the lawsuits ensue while gnu demands the blueprint.
Nah, I think the blueprints would be more akin to the intermediate assembly that gets created when compiling something.
Quine anyone ? (Score:2)
char*f="char*f=%c%s%c;main()
{printf(f,34,f,34,10);}%c";
main(){printf(f,34,f,34,10);}
Re: (Score:2)
Re: (Score:2, Funny)
It is no longer true. Now, you have to pick one.
Re: (Score:2)
Re: (Score:2)
Buggered if I know, the poster didn't explain the acronyms. WTF is SoC?
http://lmgtfy.com/?q=SoC [lmgtfy.com]
Slashdot, where searching for an acronym's meaning is the next technical challenge.
SystemC (Score:5, Informative)
Why not? There is SystemC [wikipedia.org], a dialect of C++ which can be implemented in hardware (FPGA, for instance). What Algotochip is claiming is just one more small step forward.
Re:SystemC (Score:5, Informative)
Re:SystemC (Score:5, Informative)
Presumably, though, you could use a source-to-source compiler to convert C (with certain restrictions) into SystemC.* From there, you could do source-to-source compilation to convert SystemC into Verilog or whatever. You'd end up with crappy hardware, but the claim says nothing about design quality, only design capability.
*The obvious restriction is that you can't translate something for which no translation exists, whether that's a function call or a particular class of solution.
Going directly from C to hardware without intermediate steps would indeed be a lot harder. But again that's not what the startup promises. They only promise that they can convert C to hardware, they say nothing about how many steps it takes on their end, only what it seems like from your end.
Having said that, a direct C to hardware compiler is obviously possible. A CPU plus software is just emulating a pure hardware system with the code directly built into the design. Instead of replacing bits of circuitry, you replace the instructions which say what circuitry is to be emulated. Since an OS is just another emulator, this time of a particular computer architecture, there is nothing to stop you from taking a dedicated embedded computer, compiling the software, OS and CPU architecture, and getting a single chip that performs the same task(s) entirely in hardware -- no "processor" per se at all, a true System on a Chip. Albeit rather more complex than most SoC designs currently going, but hey. There's no fun in the easy.
Although there are uses for direct-to-hardware compilers, direct-to-FPGA for pure C would seem better. Take hard drives as an example. You can already install firmware, so there's programmable logic there. What if you could upload the Linux VFS plus applicable filesystems as well? You would reduce CPU load at the very least. If the drive also supported DMA rather than relying on the CPU to pull-and-forward, you could reduce bus activity as well. That would benefit a lot of people and be worth a lot of money for the manufacturer.
This, though, is not worth nearly as much. New hardware isn't designed that often and the number of people designing it is very limited. Faster conversion times won't impact customers, so they won't be a selling point to them, so there's no profit involved. Further, optimizing is still a black art; optimizing C compiled into a hardware description language is simply not going to be as good as hand-coding -- for a long time. Eventually, it'll be comparable, just as C compilers are getting close to hand-tuned assembly, but it took 30-odd years to get there. True, cheaper engineers can be used, but cheaper doesn't mean better. The issues in hardware are not simply issues of logic, and corporations who try to cut corners via C-to-hardware will put their customers through worlds of hurt for at least the next decade to decade and a half.
Re: (Score:2)
Re:SystemC (Score:4, Informative)
SystemC is a C++ library and simulation kernel. It isn't a dedicated language. The synthesizable subset of SystemC is very limited. Because it's plain C++, you have to implement all low-level logic with much more code overhead than the equivalent VHDL or Verilog.
Re: (Score:3)
Re: (Score:2)
Just because you can use the language to write synthesizable code does not mean all code is synthesizable.
An easy example is a re-entrant function that in software would be repeatedly called to solve the input. Assuming that for different inputs you call it a different number of times, a translation into pure hardware would require an arbitrarily sized piece of hardware. Now you could set a limit on the range of values the hardware can solve for, but it's a tricky problem.
Likewise software would call new() and get
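For illustration, a minimal hypothetical example of the data-dependent depth problem (the function is invented, not taken from any real tool): the recursion depth below depends on the input value, so flattening it into pure combinational hardware would need an unbounded, input-dependent amount of logic.

    /* Depth depends on the input, so a direct "unroll into gates"
       translation has no fixed size. */
    unsigned int collatz_steps(unsigned int n)
    {
        if (n <= 1)
            return 0;
        if (n % 2 == 0)
            return 1 + collatz_steps(n / 2);
        return 1 + collatz_steps(3 * n + 1);
    }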
Re: (Score:2)
You don't just #include and magically get shit that can go onto silicon
I never said so. The fact that it is a dialect of C++, not pure C++, speaks volumes already.
This is nothing new at all (Score:5, Interesting)
C code to SoC. [wikipedia.org]
So, how is this offering from India any different? I could do it in less than 8 to 16 weeks if the customer supplies me the C code to be converted. As in, download/purchase any one of these utilities, run the customer's file through it, and mail it back to them.
Pretty simple.
Re: (Score:2)
A better question (Score:5, Insightful)
Or can we expect our ANSI C code to be automagically implemented in a SoC in such a short time?
How about you tell us what SoC stands for first? Once again, editors, we don't all know everything about everything in the tech world. Some of us come here to learn new things, and you guys don't make it easy. TFS should at least leave me with an impression of whether or not I need to read the TFA.
Re: (Score:2, Informative)
system on a chip
Re:A better question (Score:5, Informative)
Re:A better question (Score:5, Funny)
Yeah! And what does 'C' stand for?
Re: (Score:3)
Yeah! And what does 'C' stand for?
Just in case you're not trolling:
C is a high-level programming language (yes I know I could give a better description but my brain is fried this late in the day).
http://en.wikipedia.org/wiki/C_(programming_language) [wikipedia.org]
Re: (Score:2)
Re: (Score:2)
What does "/." stand for?
It's a redundant alias for the system root directory. Assuming you're using a proper computer and not one of these modern toys.
/me ducks
Re: (Score:2)
Re: (Score:2)
https://www.google.com/search?q=soc [google.com]
I believe you are looking for result #2.
learning != being spoon-fed (Score:3)
Or can we expect our ANSI C code to be automagically implemented in a SoC in such a short time?
How about you tell us what SoC stands for first?
http://lmgtfy.com/?q=SoC [lmgtfy.com]
Slashdot, where searching for an abbreviation's meaning has become the ultimate technical challenge.
Once again, editors, we don't all know everything about everything in the tech world.
News for nerds? Ain't that supposed to mean something?
Some of us come here to learn new things
Bro, two words: Google and Wikipedia. And one more word: 2012. You should consider a career/interest change if you don't grasp the meaning conveyed by these three words.
and you guys don't make it easy.
Not to be mean, but if you want easy, there is always hamburger flipping (which I did when I was in college) or pants folding at the GAP.
TFS should at least leave me with an impression of whether or not I need to read the TFA.
But you can make that
Re: (Score:2)
If you can't understand TFS, then read TFA; if you still don't understand it, it should at least leave you with enough information to look up the portions you do not know.
You learn more looking it up and finding out what you do not know rather than having someone spoon feed you the information.
So basically if the submitter or editor had included the information then you would actually end up learning less.
Re:A better question (Score:4, Insightful)
The point is, you shouldn't have to freaking google to find out what the heck an article is about. The brain-dead submitter, or brain-dead 'editor' should be clarifying anything that isn't very common everyday tech lingo/acronyms.
Re: (Score:3, Insightful)
Re: (Score:2)
The point is that one person writes the summary and a huge number of people read it. Not all of them know everything.
Exactly. Or to put it in a way any geek should understand...
Your "source code" (= the summary) refers to an external source (= the definition of the abbreviation SoC). You could use the preprocessor to fetch that and insert it into the code as a string literal before running compilation (slashdot submission). As you didn't, this meant that the object code generated (= the slashdot summary) invoked a remote file access operation where the host operating system (= the slashdot reader's brain) didn't curren
dilettante (Score:2)
With 14 different meanings in science and technology alone, according to Wikipedia [wikipedia.org]. The point is that one person writes the summary and a huge number of people read it. Not all of them know everything. Just type a few extra characters ("System on a Chip" instead of "SoC" the first time you use it); by spending a few seconds you save others what probably adds up to quite a significant amount of time figuring out what it means. That doesn't directly benefit the writer of the summary, of course, but it takes just one other person doing the same favor, for a term this writer is not familiar with, to more than compensate for the extra time spent.
It's efficient to be clear in what you write and not assume that everyone knows every acronym. What baffles me is how many geeks don't understand such a simple concept.
What baffles me is how many geeks can't do basic research. We are talking about software translation, for Christ's sake. It is evident, then, that the translation is either to another language or compilation for a specific platform. That right there gives you the context with which to narrow your search.
What is worse is seeing many geeks on /. who don't know what SoC means. What's next? You need an explanation of what CPU means as well? GPU? RAM? ALU? LED? IO?
Not that I really put any credence to geek stre
Re:A better question (Score:5, Funny)
Yes, he shouldn't need to Google, since he should know what a SoC is; this is supposed to be a site for technologically literate people, not reddit rejects.
Indeed.
"SoC" is short for "State of Charge," which is, basically, the status of a battery.
I'm not sure what this has to do with C-code. Maybe these chips they're talking about are used to make battery controllers that use SoC monitoring.
Re:A better question (Score:5, Funny)
Salsa on Crotch
Re: (Score:2)
Ah how I miss having some points. Real funny shit.
Re: (Score:2)
http://www.tumblr.com/tagged/i'm-gonna-put-salsa-on-my-crotch [tumblr.com]
Re: (Score:2)
People who work in the 'chip' business are familiar with SoC, or Salsa on Crotch.
Re:A better question (Score:5, Informative)
SoC [wikipedia.org] has been emerging as a more common term in the last 5 or 6 years, meaning System on a Chip. The advantages are that it uses less power to do more things, and a lot of low-level functions (radios, GPU rendering, etc.) have more direct access to on-board cache and memory, as well as a direct line to RAM. These days they're used in just about everything and, for anything other than a desktop or laptop without an IGP, are essentially equivalent to saying CPU.
acronymification [Re:A better question] (Score:4, Funny)
Not sure if serious.
according to the moderation, "5, funny."
SoC has been emerging as a more common term in the last 5 or 6 years meaning System on a Chip.
Don't be silly. That would be SoaC. Clearly, if you acronymize the "on", you have to acronymify the "a" as well. The acronominalization standards demand it. Why, if you abandon all rules for acromynificationizing, there would be chaos!
Re: (Score:2)
Re: (Score:2)
Yes, he shouldn't need to Google, since he should know what a SoC is; this is supposed to be a site for technologically literate people, not reddit rejects.
Indeed.
"SoC" is short for "State of Charge," which is, basically, the status of a battery.
I'm not sure what this has to do with C-code. Maybe these chips they're talking about are used to make battery controllers that use SoC monitoring.
You might have a point if that were the sole entry in your Google results page (alongside salsa on crotch or standard occupational classification).
But, by golly, that is not the case. Guess f* what? System on a Chip is right at the f* top of the Google results page, with two sentences describing what it means. By golly, wouldn't that be enough to point the supposedly nerdy mind in the right direction?
Oh, let me guess, if you don't find it on your first hit, right on the top with flashing colors and dancing monkeys on anima
Re:A better question (Score:4, Insightful)
Do we need to start having a basic competency test before letting idiots like this post? Jesus fuck, you newtards are idiots. No wonder CmdrTaco left...
That's hugely unfair. I figured out what it was based on the context. Hmmmm... SoC.. moving algorithms to chips... might it be System-On-Chip?
However, there are plenty of articles here about some pretty heavy physics, particle physics, medical advancements, etc. that are well outside of my own field. It would be nice to have some quality journalism where a term or concept is explained in the summary.
It's not that hard. Another sentence at most. I don't have a problem searching for terms and concepts I don't fully grasp, but it would be nice to have some quality journalism again. Seriously.... grammar and spelling mistakes everywhere now, even at mainstream outlets like CNN. Just once I would like the impression that somebody with an English major was doing actual editing.
Re: (Score:2)
Marvellous! (Score:5, Interesting)
I'm not entirely clear on how it works though. If I give them this:
#include <stdio.h>
int main() {
printf("Hello world!\n");
}
will they convert it into a custom integrated circuit chip with "Hello World!" silkscreened on the top of it, or does the chip actually display "Hello World!" on whatever it is connected to?
Re: (Score:2)
You can get chips with little neon signs on the top. So "Hello World" becomes a marquee marching across the top of the chip. In more sophisticated chips, it also synthesizes the words into speech.
Re: (Score:3, Funny)
The press release says the user doesn't need to know anything about how their tool works. So obviously it will infer the appropriate solution and implement that too.
Actually the printf example is one of the easiest to implement. You'll receive a sheet of paper with "Hello World" printed on it in 6-8 weeks.
Re: (Score:2)
I'm not entirely clear on how it works though. If I give them this:
#include <stdio.h> int main() { printf("Hello world!\n"); }
will they convert it into a custom integrated circuit chip with "Hello World!" silkscreened on the top of it, or does the chip actually display "Hello World!" on whatever it is connected to?
It won't do anything. The above piece of code will not translate to synthesizable logic. The printf() statement is not synthesizable. For the tool to output something meaningful, the input has to be meaningful (to the tool).
But we're not just talking about FPGA imaging here -- we're talking system-on-a-chip with processor, RAM, Flash, etc. So their system will generate a SoC from stock components with a processor, minimal Flash and RAM, and a very basic GPU. They won't need to generate any custom logic, but they'll still be generating a SoC to spec, one that will connect to some video display and show a single message.
Re: (Score:2)
"The printf() statement is not synthesizable."
Actually, at a previous company we were experimenting with tools like this.
We got it to wrap printf statements up as a trigger to a state machine that read the string from memory and piped it to an 8-bit stream. This 8-bit stream then went to a serial port, and voila, your RTL would print to the serial port.
It even coped with concurrency, but if you hit error states, one error code could swamp out the real error, so it was far from perfect.
Very cool though.
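Modeled in C, the wrapper behaved roughly like this (a sketch only; UART_TX and its address are invented for illustration, and the real thing was generated RTL, not C):

    /* Hypothetical memory-mapped UART transmit register. */
    #define UART_TX (*(volatile unsigned char *)0x10000000u)

    /* What the printf wrapper amounted to: stream the string's
       bytes to the serial port, one bus write per byte. */
    void hw_print(const char *s)
    {
        while (*s)
            UART_TX = (unsigned char)*s++;
    }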
You provided a link to define EDA (Score:4, Informative)
That's good. You didn't define or even expand SoC, GDSII, or TSMC. That's bad. I'm guessing SoC is "System on Chip" but I have no idea what the other two are.
Re:You provided a link to define EDA (Score:5, Informative)
GDSII or GDS-2 is a layout format, used by microsystems designers. It's a 2D-only format, but you can have unlimited layers.
TSMC (Taiwan Semiconductor Manufacturing Company) is the largest microsystems foundry in the world.
You are correct about SoC.
Re: (Score:2)
Yeah, on Myspace people define TSMC, UMC, GDSII, SoC, and EDA. All the time.
Depends on the source code and what the chip needs (Score:4, Insightful)
Most SoCs do a lot more than a direct translation of the C-coded algorithm would suggest. I guess if you had a "wrapper" platform that was good enough for many applications, you could streamline the process. My guess is that this platform and the links to C synthesis are most of Algotochip's secret sauce.
C synthesis itself can't handle most programs written in C. Essentially you need to write Verilog in C in order to make it work. Any dynamic allocation of memory, whether direct or indirect, is a problem. I/O cannot be expected to work.
So it boils down to: if your C source is uncharacteristically just right and your application fits a pre-defined mold, then you can make it a chip real quick... as long as you don't encounter any problems during place and route or timing closure...
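As a hedged sketch of what "writing Verilog in C" tends to look like (the function name and widths are invented for illustration): fixed array sizes, a compile-time loop bound, and no dynamic allocation, so a synthesis tool can unroll the loop into a small datapath.

    /* A 4-tap FIR-style kernel: the fixed bound lets a tool unroll
       this into four multipliers feeding an adder tree, and nothing
       here touches the heap or does I/O. */
    void fir4(const short sample[4], const short coeff[4], int *out)
    {
        int acc = 0;
        int i;
        for (i = 0; i < 4; i++)
            acc += sample[i] * coeff[i];  /* maps to a multiply-accumulate */
        *out = acc;
    }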
Re: (Score:3)
so it compiles, drops down a CPU core and a ROM (Score:2)
The devil is in the details. It isn't a question as to whether a hardware device can be manufactured that runs your code, it is provably possible.
The issue is how cost-efficient the SoC is. How power-efficient. How it performs: does it exploit any more parallelism than a CPU would if you just fed it the compiled code?
Re: (Score:2)
Algorithms vs. hardware (Score:2)
Algorithms only work well if they fit well with the hardware they're targeting. You have to make certain assumptions, but depending on what your algorithm is, you should know which things you really need to think about (memory, branching, process communication, disk, ...)
Algorithms that get synthesised into hardware will only work well if they're written in such a way that lends itself to synthesis. There's going to be a huge heap of stuff that doesn't fit well, or doesn't work at all. Writing things like V
I hope not, but my money is on overhyped. (Score:5, Informative)
Most of these 'C' to hardware technologies are overhyped and under-deliver.
* It is definitely not ANSI C. It might share some syntax elements, but that is about it.
* C programmers do not make good hardware designers (C programmers will disagree, HDL programmers won't)
* The algorithms used in software by software developers do not translate well into hardware
* If you want "good" hardware developed, use hardware design tools.
If you don't agree with me on these points, post how you would convert "short unsigned value" into ASCII in char digits[5], and I'll show you how to do the same if you were designing a chip...
Dude, don't leave us hanging... (Score:2)
I dunno... I am just a programming hack.
But... given the underpowered nature of microcontrollers (and logic), I would either use a table of powers of ten, subtracting and counting, or a BCD table of powers of two, along with BCD add-and-adjust.
I would probably go for the BCD approach; it guarantees that the job is done in 16 "cycles".
Is that what you were thinking?
Re: (Score:2)
Got it, a combination of the two: subtract off the highest powers of ten, but in BCD, filling in the lower bits of each display digit.
Good solution.
Re: (Score:2)
I doubt we've reached the point where there are so many excess gates lying around that you can use shitty C-to-HDL converters. There is a large excess of CPU cycles but not nearly as much of an excess of gates. You really have to be conscious of how your design will be synthesized because it's very easy for a C-to-HDL converter to really screw up implementation and do terrible things that will bloat the netlist. I've used such a converter before for a small piece of an FPGA program, and I ended up re-wri
Re:I hope not, but my money is on overhyped. (Score:5, Interesting)
void to_digits(unsigned short value_in, char digit[5])
{
    unsigned long value = value_in; /* needs more than 16 bits: scaled by 10 below */
    int i;
    for(i = 0; i != 5; i++)
    {
        digit[i] = '0'; /* '0' is 0x30; OR-ing in the BCD digit yields ASCII */
        if(value >= 80000) { value -= 80000; digit[i] |= 8; }
        if(value >= 40000) { value -= 40000; digit[i] |= 4; }
        if(value >= 20000) { value -= 20000; digit[i] |= 2; }
        if(value >= 10000) { value -= 10000; digit[i] |= 1; }
        value = value*8 + value*2; /* value *= 10 using shifts and adds only */
    }
}
Advantages:
* No divide/mod operator
* Extracts digits from most significant to least significant (if you want to stream out the digits)
* Can be unrolled or pipelined to meet timing / throughput requirements
Sorry about any syntax/typos/errors in the code... it is a comment!
Re: (Score:2)
Re:I hope not, but my money is on overhyped. (Score:4, Informative)
Looks like you failed to spot the character constant in digit[i] = '0'; - it is already a character....
Re:I hope not, but my money is on overhyped. (Score:4, Insightful)
I'm curious, though... how would you convert unsigned to ASCII on chip?
I think OP's point is that your average C programmer would just start doing all kinds of dividing; most of the time there is very little hardware support for division, and so if you fed this into a C->HDL converter it would generate massive bloat as it imported some special library to handle division.
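For instance, this hypothetical version (names invented) is perfectly idiomatic C, but every % and / drags in divider hardware, or a division library, when synthesized:

    /* Naive software approach: extract digits least-significant first.
       Each % 10 and / 10 implies a full divider in hardware. */
    void to_ascii_naive(unsigned short value, char digits[5])
    {
        int i;
        for (i = 4; i >= 0; i--) {
            digits[i] = (char)('0' + value % 10);
            value /= 10;
        }
    }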
My first brute-force guess would involve a state machine (FSM), a comparator (16-bit), two adders (one 4-bit, one 16 bit), two muxes (16-bit and 4-bit, four input), a 16-bit register with clock enable and an associated input mux, and four 4-bit registers with clock enable. The FSM would control the 16-bit mux which selects a constant from four powers of 10 (10,000 to 10), and the output of the mux is connected to the 16-bit adder and the comparator. The other input is the 16-bit register, which also needs a mux for selecting between the argument and the adder's output. This register output is also a comparator input. The comparator is configured for "less than" and its output goes to the FSM so it can make decisions. The FSM also controls a 4-bit wide mux which connects four 4-bit registers that represent the various 10s digits (10,000 to 10) to an adder with the other input set to "1".
1) If the number is greater than 10,000 then inc the "ten-thousands" digit, subtract 10,000 from the argument, and repeat this step. ...
2) Once it is less than 10,000 then the state machine would walk forward to the thousands digit
3) If the number is greater than 1000, inc the thousands digit, subtract 1000 from the argument, and repeat this step.
4) Once it is less than 1000... (you can extrapolate some here)
n) Once the tens digit has been processed, the remaining argument is the ones digit
This would give you a series of 4-bit numbers. Once the FSM is done (it's important for it to finish first and change all bits simultaneously, so that downstream logic doesn't see glitches), it would append 0x3 to the front of each 4-bit number, turning them into ASCII.
Note that this approach requires very little in terms of hardware resources, at the expense of requiring a variable amount of time to process its inputs. Consider that 00000 would take 6 clock cycles to produce (need a cycle to load the input), while 29,999 would require like 33 clock cycles (no need to do subtractions on the ones digit)
There are other approaches that may be faster in exchange for requiring more hardware. Consider if you had 9 comparators, one for each digit (except 0), and an adder with a 9-input mux; every input would require 6 clock cycles. But this took an extra 8 comparators (and a significantly bigger mux too); size for speed (interestingly, the divider still only gets you 6 clock cycles, and probably takes up many more resources than 9 comparators. But if you could find other work for the divider then time-sharing might make it worth your while, maybe). You could even go all the way and use 32,000+ comparators, if fan-out wouldn't spell doom for such an approach, and then you could always calculate every possible value in 1 clock cycle...but this would require MASSIVE resources. Now if you only needed, say, from 0 to 1000, that might be slightly less unreasonable (perhaps within fanout limitations but probably still unreasonably large).
OPs point is that a good hardware engineer knows about these tradeoffs and handles them appropriately, while a C programmer isn't trained to think about these issues and their language doesn't even naturally express the structures that it will be mapped on to. Writing the kind of C code that you need to properly synthesize what you want feels like saying the alphabet backwards while jumping up and down on one foot while rubbing your belly and patting your head. And that's if you can even figure out how to tell the C synther that since your values only go from 0 to 1000 that it doesn't need all 16-bits of that unsigned short and it could really get away with only 10 bit support.
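For what it's worth, here is a rough C model of that repeated-subtraction FSM (names are invented, and a real implementation would be RTL, with each pass of the inner loop costing one clock):

    /* Walk the digit positions 10,000 down to 10; count subtractions,
       then the remainder is the ones digit. OR in the 0x3 nibble to
       turn each 4-bit count into ASCII. */
    void fsm_to_ascii(unsigned short input, char ascii[5])
    {
        static const unsigned int pow10[4] = { 10000, 1000, 100, 10 };
        unsigned int value = input;
        int d;
        for (d = 0; d < 4; d++) {
            unsigned int count = 0;
            while (value >= pow10[d]) {  /* one subtraction per clock */
                value -= pow10[d];
                count++;
            }
            ascii[d] = (char)(0x30 | count);
        }
        ascii[4] = (char)(0x30 | value); /* remainder is the ones digit */
    }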
Re: (Score:2)
Google just found me a cunning way to implement binary-to-BCD conversion that works by using modified shift registers [www.jjmk.dk].
Very slick; it wouldn't be found by a 'C' to hardware process or a 'C' programmer.
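That shift-register trick is generally known as the "double dabble" (shift-and-add-3) algorithm. A C model of it, for illustration, assuming a 16-bit input and five packed BCD digits (in hardware each digit is a 4-bit register with a small adjust-adder; here the whole thing lives in one word):

    unsigned long bin_to_bcd(unsigned short value)
    {
        unsigned long bcd = 0; /* five packed BCD digits, 20 bits */
        int i, d;
        for (i = 15; i >= 0; i--) {
            for (d = 0; d < 5; d++)          /* add 3 to any digit >= 5 */
                if (((bcd >> (4 * d)) & 0xFul) >= 5)
                    bcd += 3ul << (4 * d);
            bcd = (bcd << 1) | ((value >> i) & 1); /* shift in next bit */
        }
        return bcd;
    }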
Re: (Score:2)
But if you need to do fairly complex things like convert strings, aren't you just better off sticking a CPU on whatever you're designing?
Re:I hope not, but my money is on overhyped. (Score:4, Interesting)
We all know that it is stupid, but one of the "next big thing" ideas for FPGA technology will be using them for ultra-low-latency high-frequency share trading.
The idea being that if you can bypass switches, routers, NICs, buffers, IRQs, CPU context switches and so on, you will be able to issue your trade requests before the whole data packet has finished coming off the wire, allowing you to get a big jump on your competitors.
One assumes that the "buy, buy, buy" or "sell, sell, sell" packets will need to be generated in the final formats needed by the market, which will most probably need something to be converted from binary to ASCII characters.
High-frequency traders dream that it would be possible to turn a trade around within a few nanoseconds of the market data arriving.
Re: (Score:2)
Well yeah, if you have a CPU lying around on your PCB, and if it has division hardware, then it's easier to use the CPU for a task like that. But another part of designing hardware is knowing how to partition the design space such that you can make the most effective use of computing resources that are available.
The other side of the coin is that the algorithm I outlined above could reasonably compute a 5-digit binary-to-BCD conversion in six clock cycles. Your average embedded CPU would still be handling
Design automation tradeoffs (Score:2)
Ease of design, power consumption and performance. Pick any two.
It would be interesting to see how this compares with the work of competent designers with a/d and analog skillz.
Translating C to hardware shouldn't be that hard (Score:2)
The real question is how efficient it is.
I have a great idea (Score:2)
Why not just put the code onto high-speed flash that goes on the SoC? Seems a whole lot easier, and I'm not clear why their solution is better. Really, I must be missing something; I'm curious.
Re: (Score:2)
I don't know the details of this product, but in most cases 'C' to hardware tools are used to optimize the innermost portion of critical loops.
One way is to build a custom CPU instruction -- for example, a programmable "bitshuffle" for use in crypto.
Another is to build a custom on-chip peripheral where "my_code(arg1, arg2)" maps to "start arg1 in port X, store arg2 in port Y, read answer from port Z" and the custom logic transforms X and Y into Z. The ports might even have FIFOs allowing many operatio
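A hedged sketch of that custom-peripheral mapping from the C side (the register addresses and macro names are invented for illustration):

    /* Hypothetical memory-mapped ports for the custom logic. */
    #define PORT_X (*(volatile unsigned int *)0x40000000u)
    #define PORT_Y (*(volatile unsigned int *)0x40000004u)
    #define PORT_Z (*(volatile unsigned int *)0x40000008u)

    unsigned int my_code(unsigned int arg1, unsigned int arg2)
    {
        PORT_X = arg1;  /* store arg1 in port X */
        PORT_Y = arg2;  /* store arg2 in port Y */
        return PORT_Z;  /* read the answer computed from X and Y */
    }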
Alright! (Score:2)
Hitchcock had it right! (Score:2)
Those birds are going to be so much more angry now! We're doomed, I tell you -- DOOMED!
How good is it actually? (Score:2)
There are questions regarding performance, area needs, etc. If all they do is compile the C code, put it in ROM, and supply RAM and a CPU to run it, the claim would be easy to fulfill, but the result would suck. Details matter very much here. If they do not give details, it's best to assume the claim is overblown and what they can do is not nearly as good as some people would have it.
Real World Example (Score:2)
Re:Linux on a chip! (Score:5, Funny)
It would be the size of a kernel
Re: (Score:2)
Popcorn?
Re: (Score:2)
How about you tell us what USofA stands for first? Once again, posters, we don't all know everything about everything in the world. Some of us come here to learn new things, and you guys don't make it easy. TFP should at least leave me with an impression of whether or not I need to read, uh, the rest of TFP.
Re: (Score:2)
They have zero reputation for creating junky hardware and code.
Really?
Re: (Score:2)
There are lots of tools that do this, for varying levels of success. The problem is, translating C to hardware is not an especially difficult challenge. Translating algorithms that make sense running on a general purpose CPU to algorithms that make sense implemented in hardware is a very hard problem. C is intrinsically serial, hardware is intrinsically parallel (in two dimensions).
The hard problem, and one that's the focus of a lot of research, is analysing C (or whatever) code and determining how to
Re: (Score:2)
Downmodded. How disingenuous of a site with so many programmers who know firsthand of the shit that comes out of India.
If I had any mod points I would mod you up as "insightful". Because I wasn't aware that "Sunnyvale, Calif." was in India. I thought it was in the US. Thanks for clearing that up.