Transmeta Closing Up Shop 413

Ashutosh Lotlikar wrote to mention an article on the Business 2.0 site stating that chip producer Transmeta is going out of business. From the article: "The company's Crusoe family of microprocessors promised lower power consumption and heat generation, enabling the creation of laptops with longer battery life. Critics bashed the chips for being underpowered compared with Intel's latest and greatest. Transmeta struggled to find a market, and recently it sold off most of its chipmaking business for $15 million to Culturecom Holdings, a Hong Kong company better known for publishing comic books."
  • by imsabbel ( 611519 ) on Sunday June 05, 2005 @08:58PM (#12732122)
    The whole architecture was built on the premise that the core is only accessible via the code-morphing software, so the different Crusoe chips didn't even have binary-compatible cores.
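    The idea described above can be sketched in a few lines. This is a toy illustration of code morphing, with made-up opcodes rather than anything resembling the real Crusoe internals: guest instructions are never executed directly; a software layer translates them into host operations, so the host core's own ISA can change between chip generations without breaking guest binaries.

    ```python
    # Guest program in a made-up two-address ISA: (op, dst, src)
    GUEST = [("mov", "a", 5), ("mov", "b", 7), ("add", "a", "b")]

    def morph(guest):
        """Translate guest instructions into host micro-ops (closures here)."""
        regs = {}
        host_ops = []
        for op, dst, src in guest:
            if op == "mov":
                host_ops.append(lambda d=dst, s=src: regs.__setitem__(d, s))
            elif op == "add":
                host_ops.append(lambda d=dst, s=src: regs.__setitem__(d, regs[d] + regs[s]))
        return regs, host_ops

    def run(guest):
        regs, host_ops = morph(guest)
        for op in host_ops:
            op()          # execute the translated stream, never the guest code
        return regs

    print(run(GUEST))     # {'a': 12, 'b': 7}
    ```

    Swapping in a different `morph` for a new host core leaves `GUEST` untouched, which is exactly why the hidden cores didn't need to be binary compatible with each other.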
  • by bersl2 ( 689221 ) on Sunday June 05, 2005 @09:12PM (#12732178) Journal
    The other day I was fiddling with a laptop that had dual 2GHz processors or something like that. Ehh? I mean, it's great that they can cram all that into a "moderately" small package, but still, you need Nomex pants to use it in your lap.

    That's not a laptop; that's a portable workstation.
  • by ebuck ( 585470 ) on Sunday June 05, 2005 @09:30PM (#12732296)
    Transmeta has enough cash to sustain itself for at least a year. I doubt that they will just sit around and watch it disappear.

    The headline was irresponsible. It implied that Transmeta was shutting down today. A lot of good and bad things can happen in a year, but that's future stuff, and as such is undecided.

    Transmeta can restructure, find VC funding, be bought by another company, license its technology to a deep-pocketed partner, release a new product and watch it take off (or fail), perform massive layoffs, cutbacks, etc. Headlining that they are closing fails to take into account the money and the time they have.
  • by PapaZit ( 33585 ) on Sunday June 05, 2005 @09:43PM (#12732365)
    "Where can I buy one" was what I thought when I first heard about Transmeta's processors.

    I don't need a laptop. I want to put one into a PC. VIA makes a similar sort of low-power product, and you can actually play with those.

    Transmeta made some inroads into the laptop and supercomputer markets, but there was just no way for normal people to play with one, except by buying a laptop.
  • Re:instruction set (Score:5, Insightful)

    by tftp ( 111690 ) on Sunday June 05, 2005 @10:05PM (#12732465) Homepage
    Coding wouldn't have been expensive unless you were selling the software

    Coding would be infinitely expensive if you pour money in and gain nothing, one way or another. Selling the s/w is just one way to gain; using it in-house, as you suggest, is another.

    However you can't buy a Transmeta beige box and give it to a code monkey to play with :-) There are no such boxes, except a few notebooks that don't even exist (for all practical purposes.) You would have to build your own computer, from chips, caps and resistors. That is not easy (read it as "awfully expensive".)

    You also mention number-crunching in this post and below. But if you want that, you don't go with a teeny-weeny low-power CPU. You take a big, hot chip, and not just one, either. Big CPUs can run SMP if that's your thing; for example, G4 is not even a "big" CPU in my book, but with its existing SMP capabilities and its AltiVec core (which is probably what you need for your multimedia and other uses) it trumps Transmeta's product, just stomps it into the ground. And you can get G4 beige boxen from many places, off the shelf (including Apple's shelf, for the moment.)

    Transmeta's CPUs are good for one purpose only - emulating other CPUs. If you want a cold chip, there are many others, and better ones too (ask anyone from Atmel to Freescale.) If you want a fast CPU, there are many of those (ask AMD and Intel and IBM.) You'd have to work hard to find the exact niche where Transmeta's products fit - and the problem is that the niche is too narrow for the company to live in.

  • by jd ( 1658 ) on Sunday June 05, 2005 @11:09PM (#12732784) Homepage Journal
    Motorola "spun off" (i.e., ditched) their chip-making business. Inmos - owned by a music chain, Thorn EMI - was sold to ST and its technology was dumped. IIT, a co-processor manufacturer in the days of the 8086 to 80286, died a death. Cyrix was bought, as mentioned.

    This is a field where you must not only have a good product, you must also have a solid market AND a solid marketing team, AND you must avoid bad PR like the plague, AND any major players (like Intel) must not deliberately sabotage efforts to compete, AND your plant can't be struck by major earthquakes.

    (Why are all the major chip makers in Taiwan, Japan and America ALL concentrated in areas with high tectonic activity? Is there something in the fault line they use in the production line?)

    The bottom line is simple. A chip fabrication plant can cost tens of millions to hundreds of millions of dollars, skilled chip designers can command hefty salaries, many of the key markets are 0wn3d by monopolies of questionable legality who flirt with unethical practices to keep their position, and software developers reinforce this by targeting established, high-volume platforms - which means no new products get support.

    Of course, Transmeta didn't help its case. Its Linux distro was late, the first batch of chips was buggy, they didn't sell to anyone outside of the "big players" (and "big players" only really buy from other "big players", because volume bought and sold = profit), and they only produced an 80x86 layer for the Crusoe, rather than using the capabilities to cross market boundaries and therefore create volume by getting into many niche markets.

    Also, their design was poor. Intel beat them on power consumption in a very short space of time, and this is Intel we are talking about. At the same time, people knew there were problems with 80x86 scalability (hence the work on SMP and hyperthreading), but Transmeta didn't look far enough ahead to build a multicore product, when they were already building a design from scratch and had ample opportunity to make such changes.

    (In comparison, AMD and Intel have to engineer such features into an existing design, which is always much harder and likely to be much slower than working from first principles. AMD's and Intel's route also offers much better odds of bugs being found in the design, at a later date, as their architecture was never intended to be multicore.)

    So, I don't hold Transmeta blameless in this. They may have been pushed over the edge, but they still chose to walk along the cliff in the first place, knowing it to be a dangerous spot, and knowing that the view wasn't even that good there, to make it worth the risk.

    One of these days, I hope to see a company start up that takes the time to be truly innovative (and not just fake it), takes the time to get things right, and makes a product so damn unbeatable it wipes the floor with everything else.

    It does happen. True, AMD is no start-up, but they were hardly giants in the 80x86 world. With the Opteron and their 64/32-bit crossover architecture, they've demolished Intel's Itanium and even convinced Microsoft to switch to them for 64-bit stuff. Given the longevity of the Wintel duopoly, that took a good plan and a good effort.

    Any start-up could do just as well, or better, because it wouldn't have the legacy hardware to build around. They could do a clean design that merely supported legacy code. Transmeta started down that road, but for some reason chose only to camp a little way down it and go no further.

    The "ideal" processor would work just as well as a CPU, GPU, network processor or processor for a disk array, as then a manufacturer can go to a single vendor, buy in even bigger bulk, and save money on all aspects. Your computer would become a Beowulf cluster, in effect, with specialization in software. It would be cheaper to build, and would mean that the same system wou

  • by Doc Ruby ( 173196 ) on Sunday June 05, 2005 @11:13PM (#12732792) Homepage Journal
    Could it be that it's run by the guys who cut the original deals with China 30 years ago? Nixon's Republicans, like Rumsfeld and Cheney? How about that guy we call "Mr. President", whose dad (who we called Mr. President or Mr. Vice President for 12 years in the middle) was Nixon's first representative of America in China? BushCo, doing just swell floating atop the work of generations of Americans as it gets hocked in the worst economy since the 1930s Depression. Which, incidentally, was the stomping grounds for Prescott Bush, Bush Sr's father, the banker shut down for "trading with the enemy", funding Nazis with war bonds peddled to Americans, which came back at our troops in the field as slave-manufactured bullets and bombs.
  • Re:RTFA (Score:2, Insightful)

    by IHateSlashDot ( 823890 ) on Sunday June 05, 2005 @11:48PM (#12732946)
    Transmeta isn't going out of business


    It's quite possible, though apparently unlikely, that Transmeta will turn things around and manage to survive. However, Intel is already all over the leakage problem, so this may well be the end of Transmeta.

    This is the definition of "going out of business". They are not "out" of business. They are "going out" of business.
  • Re:instruction set (Score:3, Insightful)

    by tftp ( 111690 ) on Monday June 06, 2005 @12:07AM (#12733050) Homepage
    I have wondered if the Transmeta would be good for emulating things like the PDP-11, Vax, and other older Minis

    My answer to that would be NO. If the task is to run legacy s/w on some sort of a replica box, I would rather synthesize the desired CPU in an FPGA. It would give me direct, hardware execution of commands as opposed to reinterpreting them. As another important benefit, I would synthesize right there all the I/O hardware that is part of that Mini. This is not possible with Transmeta since it's just a CPU, and it has no idea about the PDP-11 bus, for example. You'd have to build the bus controller anyway, unless you want to do it in VLIW software - which is not practical.
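    The "reinterpreting" cost the parent contrasts with FPGA synthesis looks roughly like this. A minimal sketch of software reinterpretation of a legacy ISA - a made-up PDP-11-flavored subset, not the real instruction encoding - where every guest instruction pays decode-and-dispatch overhead on each execution:

    ```python
    def interpret(program, steps=100):
        regs = [0] * 8          # R0..R7, with R7 as the program counter
        while regs[7] < len(program) and steps:
            op, a, b = program[regs[7]]
            regs[7] += 1        # advance PC before executing
            if op == "MOV":     # MOV #imm, Rn   (a = dst register, b = immediate)
                regs[a] = b
            elif op == "ADD":   # ADD Rm, Rn     (a = src register, b = dst register)
                regs[b] += regs[a]
            elif op == "HALT":
                break
            steps -= 1
        return regs

    # R0 := 2; R1 := 3; R1 := R1 + R0
    prog = [("MOV", 0, 2), ("MOV", 1, 3), ("ADD", 0, 1), ("HALT", 0, 0)]
    print(interpret(prog)[1])   # 5
    ```

    An FPGA replica decodes in gates instead of in a dispatch loop, and can host the machine's bus and peripherals on the same fabric - which is the point being made above.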

  • by cburley ( 105664 ) on Monday June 06, 2005 @11:28PM (#12743206) Homepage Journal
    In the case of a VLIW machine, theoretically, it's a fast beast- but you have to have a good compiler

    I'm no longer convinced. I worked on the internals of a Fortran optimizing compiler for a VLIW machine -- nearly 20 years ago!! -- so I do have some understanding of the issues.

    Seems to me that we've had plenty of time to produce VLIW compilers of adequate quality. Any VLIW/EPIC-chip vendor would naturally try very hard to ensure all potential developers (including 3rd-party and FOSS developers) had easy, even free, access to such a compiler. Otherwise, what's the point?

    Yet, VLIW just keeps failing to capture anything beyond a niche market. Why?

    I think it's because it really wins only for a relatively narrow range of chip technologies, die sizes, and application needs.

    Mainly, once you compile your code to a VLIW target, you've committed it to run efficiently on a very specific number of available registers, a particular narrow range of memory latencies, and so on.

    So if you run that same machine code on a newer, "bigger" CPU with more registers or faster (or even different-latency) memories, your highly optimized code is suddenly stuck running in a suboptimal fashion. Ditto if you run it on a lower-cost, lower-power machine that offers, say, half the registers and twice the memory latencies.
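    The commitment described above can be made concrete with a toy bundle packer (a hypothetical greedy scheduler, not any real VLIW toolchain): the compiler packs independent operations into fixed-width bundles at build time, so the same dependence graph packs differently - and only optimally - for one machine width.

    ```python
    def pack_bundles(ops, deps, width):
        """Greedy list scheduling: ops is a list of op names; deps maps an op
        to the set of ops that must complete in an earlier bundle."""
        done, bundles = set(), []
        remaining = list(ops)
        while remaining:
            bundle = []
            for op in list(remaining):
                # an op is eligible once all its dependencies are done
                if deps.get(op, set()) <= done and len(bundle) < width:
                    bundle.append(op)
            for op in bundle:
                remaining.remove(op)
            done |= set(bundle)
            bundles.append(bundle)
        return bundles

    ops = ["a", "b", "c", "d", "e"]
    deps = {"e": {"a", "b"}}       # e must wait for a and b

    print(len(pack_bundles(ops, deps, width=2)))  # 3 bundles on a 2-wide machine
    print(len(pack_bundles(ops, deps, width=4)))  # 2 bundles on a 4-wide machine
    ```

    Code scheduled for the 2-wide machine carries its 3-bundle shape onto the 4-wide one, wasting slots; that is the "stuck running in a suboptimal fashion" problem, in miniature.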

    Meanwhile, your I-cache gets stressed out because of all the long instructions needed to get so much less done. Sure, when you're in a predictably tight loop with few or no intra-iteration dependencies, the loop itself might fit within 5x the number of code bytes of its x86 equivalent in I-cache, and run a lot faster (at least on paper).

    But all the "scalar" code really blows up your I-cache, or so I assume. Whereas a CPU with a bit-efficient ISA, such as the x86, fits a lot more into the same I-cache, with the tradeoff that it might use a smaller I-cache in order to gain space for a microcode-like decoding of "hot spots" in the code it is running (e.g. loops), in which case that microcode is, obviously, fairly carefully tuned to suit that particular processor. (Yes, it's basically got the optimization phase of a compiler on the chip at that point, something VLIW theoretically doesn't need.)

    IMO, before VLIW/EPIC chips become winners, we'd have to see a fundamental leap in the ability of not just compilers, but operating systems, libraries, linkers/loaders, and so on, to accommodate truly dynamic, chip-specific generation of machine code from a predigested form of the original code.

    It's not unlike what would be needed to really take advantage of per-CPU knowledge of I-cache, D-cache, L2 cache, TLB, and other concerns, except much more complicated, so I'd try first to demonstrate that a complete OS could take advantage of today's CPUs, before assuming one could take sufficient advantage of VLIW/EPIC to justify rolling out a whole new architecture.

Perfection is achieved only on the point of collapse. - C. N. Parkinson