Transmeta Hardware

NVIDIA's 64-bit Tegra K1: The Ghost of Transmeta Rides Again, Out of Order

MojoKid (1002251) writes: Ever since NVIDIA unveiled its 64-bit Project Denver CPU at CES last year, there's been discussion over what the core might be and what kind of performance it would offer. Physically, the chip is huge, more than 2x the size of the Cortex-A15 that powers the 32-bit version of Tegra K1. Now we know a bit more about the core, and it's like nothing you'd expect. It is, however, somewhat similar to designs we've seen in the past from the vanished CPU manufacturer Transmeta. When it designed Project Denver, NVIDIA chose to step away from the out-of-order execution engine that typifies virtually all high-end ARM and x86 processors. In an OoOE design, the CPU itself is responsible for deciding which code should be executed at any given cycle. OoOE chips tend to be much faster than their in-order counterparts, but the additional silicon burns power and takes up die area. What NVIDIA has developed instead is an in-order architecture that relies on a dynamic optimization program (running on one of the two CPUs) to calculate the most efficient way to execute code. The result is then stored in a special 128MB buffer of main memory. The advantage of decoding and storing the most optimized execution method is that the chip doesn't have to decode the code again; it can simply grab that information from memory. Furthermore, this kind of approach may pay dividends on tablets, where users tend to run a small subset of applications. Once Denver sees you run Facebook or Candy Crush a few times, it's got the code optimized and waiting. There's no need to keep decoding it for execution over and over.
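
To make the summary's mechanism concrete, here is a minimal, hypothetical sketch in C (names and sizes invented, not NVIDIA's actual design) of the core idea: a translation cache that maps blocks of guest ARM code to already-optimized native code, so each block only has to be decoded and optimized once and can simply be fetched on later executions.

```c
/* Hypothetical sketch of the "optimize once, reuse many times" idea described
 * above. On Denver the optimized code lives in a 128MB region of main memory;
 * here a tiny direct-mapped table stands in for it. */
#include <stdint.h>
#include <stddef.h>

#define CACHE_SLOTS 4096                /* invented size, not the real layout */

typedef struct {
    uint64_t    guest_pc;               /* address of the original ARM block  */
    const void *native_block;           /* pointer to optimized native code   */
} trans_entry;

static trans_entry cache[CACHE_SLOTS];

/* Return the previously optimized block for guest_pc, or NULL on a miss
 * (a miss means the dynamic optimizer must decode and optimize the block). */
static const void *lookup_block(uint64_t guest_pc)
{
    trans_entry *e = &cache[guest_pc % CACHE_SLOTS];
    return (e->native_block && e->guest_pc == guest_pc) ? e->native_block : NULL;
}

/* Record a freshly optimized block so later executions skip re-decoding it. */
static void install_block(uint64_t guest_pc, const void *native_block)
{
    trans_entry *e = &cache[guest_pc % CACHE_SLOTS];
    e->guest_pc     = guest_pc;
    e->native_block = native_block;
}
```
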
  • Let's see if I have this right:
    With an OoOE CPU, the CPU itself decides what order to process the instructions in, so you get faster overall speed.

    With the Project Denver CPU, it's an in-order processor, but it uses software at runtime to decide what order to process the code in and stores that info in a special buffer; that software is itself run by the CPU in the first place to make the ordering decisions.

    This seems kind of flaky to me.
    • Re:Is it better? (Score:4, Insightful)

      by wonkey_monkey ( 2592601 ) on Tuesday August 12, 2014 @04:22AM (#47653435) Homepage

      What's flaky about it?

      The advantage of decoding and storing the most optimized execution method is that the chip doesn't have to decode the data again; it can simply grab that information from memory.

    • by paskie ( 539112 )

      So in the case of the JVM, you'd think it's flaky for the JIT to run on the same CPU as the one executing the code?

      Bear in mind that nowadays CPUs no longer need to be designed to run closed-source, boxed operating systems at top performance. The bootloader and kernel can be custom-compiled for the very specific CPU version and won't *necessarily* need the helper.

    • by Anonymous Coward

      I think you're missing an important part. The order optimizations are run on a separate CPU/core. This CPU/core can be shut down to save power or used to execute other threads, increasing speed. Remember, Transmeta only made mobile/power-saving processors because they could save power by not running the OoOE engine the whole time. This is a great approach to saving power by only doing the optimization once and executing it several times. One problem is when the code paths change drastically you need to have some way to

      • by Guspaz ( 556486 )

        Errm, it's a dual-core chip, and there's no third core for running the optimizations. They run on the same CPU cores that everything else does.

        • Errm, it's a dual-core chip, and there's no third core for running the optimizations. They run on the same CPU cores that everything else does.

          It's a dual core chip with one core dedicated to doing the optimizations and the other for running the code.

    • More to the point, if the advantage of switching to in-order is having less silicon (and therefore a smaller power draw), isn't that completely undone by having a whole second CPU in there that makes it twice as large as its predecessor?

    • Re:Is it better? (Score:5, Insightful)

      by loufoque ( 1400831 ) on Tuesday August 12, 2014 @05:05AM (#47653535)

      In-order processors are a better choice as long as your program is well optimized.
      Optimizing for in-order processors is difficult, and not something that is going to be done for 99% of programs. It's also very difficult to do statically.

      NVIDIA has chosen to let the optimization be done by software at runtime. That's an interesting idea that will surely perform very well in benchmarks, if not in real life.

  • by IamTheRealMike ( 537420 ) on Tuesday August 12, 2014 @04:47AM (#47653499)

    Although I know only a little about CPU design, this sounds like one of the most revolutionary design changes in many years. The question in my mind is how well it will work. The CPU can use information at runtime that a static analyser running on a separate core might not have ahead of time, most obviously branch prediction information. OoO CPUs can speculatively execute multiple branches at once and then discard the version that didn't happen, and they can re-order code depending on what it's actually doing, including things like self-modifying code and code that's generated on the fly by JIT compilers. On the other hand, if the external optimiser CPU can do a good job, it stands to reason that the resulting CPU should be faster and use way less power. Very interesting research, even if it doesn't pan out.

    • Just look at the various optimizers that optimize assembly code at runtime and see how well they work.
      The idea is old, but it doesn't work that well in practice.

      • Well, you could look at what the Hotspot JVM does which is probably a closer analogy, and it works very well.

        • Well, you could look at what the Hotspot JVM does which is probably a closer analogy, and it works very well.

          But then if you are using a JVM that recompiles code on the fly (or Apple's latest JavaScript engine, which actually has one interpreter and three different compilers, depending on how heavily the code is used), the CPU then has to recompile the code again! Unlikely to be a good idea.

          There's a different problem. When you have loops, usually you have dependencies between the instructions in a loop, but no dependencies between the iterations. OoO execution handles this brilliantly. If you have a loop where each it

          • If you have a loop where each iteration has 30 cycles latency and 5 cycles throughput, the OoO engine will just keep executing instructions from six iterations in parallel. Producing code that does this without OoO execution is a nightmare.

            Loop unrolling is hardly a nightmare, it's one of the simplest optimizations and can easily be automated.

            • Loop unrolling is hardly a nightmare, it's one of the simplest optimizations and can easily be automated.

              Good luck. We are not talking about loop unrolling. We are talking about interleaving instructions from successive iterations. That was what Itanium expected compilers to do, and we all know how that ended.
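
              As a concrete illustration of the distinction (a hedged sketch in C; the two-stage split and names are invented for the example), plain unrolling just repeats the body, while software pipelining overlaps work from successive iterations so the next load is already in flight while the current result is computed:

              ```c
              /* Naive loop: each iteration's multiply waits on its own load. */
              void scale_naive(const float *src, float *dst, int n, float k)
              {
                  for (int i = 0; i < n; i++)
                      dst[i] = src[i] * k;
              }

              /* Software-pipelined: the load for iteration i+1 is issued while
               * iteration i's multiply/store completes, hiding part of the load
               * latency on an in-order machine. */
              void scale_pipelined(const float *src, float *dst, int n, float k)
              {
                  if (n <= 0)
                      return;
                  float cur = src[0];              /* prologue: first load       */
                  for (int i = 0; i < n - 1; i++) {
                      float next = src[i + 1];     /* stage 1 of iteration i + 1 */
                      dst[i] = cur * k;            /* stage 2 of iteration i     */
                      cur = next;
                  }
                  dst[n - 1] = cur * k;            /* epilogue: final multiply   */
              }
              ```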

      • by Anonymous Coward

        What is known to work is to linearize basic blocks, that is, to eliminate all forward branches in the code.

        This has traditionally helped a lot. The P4 had a trace cache which did something like this, but it was pretty expensive to do in the processor itself.
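
        A rough source-level illustration of the idea (a hedged sketch; __builtin_expect is the GCC/Clang hint, and the cold-path helper is invented): arrange the code so the common case falls straight through with no taken forward branch, pushing the unlikely case out of the hot trace.

        ```c
        static void handle_error(int code)      /* invented cold-path helper      */
        {
            (void)code;                         /* real code would log or recover */
        }

        int parse_byte(int c)
        {
            if (__builtin_expect(c < 0, 0)) {   /* unlikely: branched out of line */
                handle_error(c);
                return -1;
            }
            return c & 0x7f;                    /* likely path: straight-line code */
        }
        ```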

    • If you want to look into revolutionary design changes look into the Mill CPU architecture.

      They've put their lecture series available on the web about their intended architecture - it's kinda a hybrid DSP / general purpose with some neat side steps of contemporary CPU architectures.

      • If you want to look into revolutionary design changes look into the Mill CPU architecture.

        They've put their lecture series available on the web about their intended architecture - it's kinda a hybrid DSP / general purpose with some neat side steps of contemporary CPU architectures.

        Thanks for that... that's very interesting. If it works, you're right, it would be amazing. Also... a CPU designed by Santa Claus?!?! I'm in!

      • I'm hearing a lot of references to the Mill, but I'm very much unclear on whether they can actually do what they claim to do. The CPU isn't implemented and we don't have real-world data on it, as far as I understand. History is littered with revolutionary-seeming designs which in practice turned out to have very marginal or non-existent gains due to more frequent than expected edge cases and the like.

        The feeling I get from the Mill is that it sounds a little bit too much like the monolithic/microkernel debat

        • There is a lot of information available on the Mill architecture [millcomputing.com] at this point, and very little reason to doubt its feasibility. Essentially all of the parts have been demonstrated in existing architectures, and the genius is in how they are combined in such a simple and elegant manner. Implementation issues aside, the idea is rock solid, and has too much potential to ignore. Perhaps the layman can not appreciate it, but the architecture has a profound ability to simplify and secure the entire stack of s

    • by taniwha ( 70410 ) on Tuesday August 12, 2014 @06:21AM (#47653699) Homepage Journal

      It's certainly different but not revolutionary. I worked on a core that did this 15 years ago (not Transmeta); it's a hard problem, we didn't make it to market, and Transmeta foundered. What I think they're doing here is instruction rescheduling in software, something that's usually done by lengthening the pipe in an OoO machine. It means they can do tighter/faster branches, and they can pack instructions in memory, aligned appropriately, to feed the various functional units more easily. My guess from reading the article is that it probably has an LIW mode where they turn off the interlocks when running scheduled code.

      Of course, all this could be done by a good compiler scheduler (actually it could be done better by a compiler that knows how many of each functional-unit type are present during the code generation phase); the resulting code would likely suck on other CPUs but would still be portable.

      Then again, if they're aiming at the Android market, maybe what's going on is that they've hacked their own JVM and it's doing JIT on the metal.

  • Surely one of the points of OoOE is that it can, in theory, take into account whether data is in the cache or not when deciding when to do reads? I don't see how a hard-coded instruction path can do this.

    • I really wonder about this, too. Perhaps they determined that the common case of a read is one they can statically re-order far enough ahead of the dependent instructions for it to run without a stall, but that doesn't sound like it should work too well in general. Then again, I am not sure what these idioms look like on ARM64.

      The bigger question I have is why they didn't go all out with their own VLIW-based exposure of the hardware's capabilities. As I recall of the Transmeta days, their problem w

    • by Anonymous Coward on Tuesday August 12, 2014 @06:15AM (#47653679)

      I think the entire point of having 7 micro-ops in flight at any point in time, combined with the large L1 caches and the 128MB micro-op instruction cache, is to mitigate this, in much the same fashion that the sheer number of warps (blocks of threads) in PTX mitigates in-order execution of threads and branch divergence.

      Based on their technical press release, AArch64/ARMv8 instructions come in, and at some point the decoder decides it has enough to begin optimization into the native ISA of the underlying chip, at which point it likely generates micro-ops for the code that place loads appropriately early such that stalls should be non-existent or minimal once approaching a branch. By the looks of their insanely large L1 I-cache (128KB), this core will be reading quite a large chunk of code ahead of itself (consuming entire branches, and pushing past them, I assume, to prefetch and run any post-branch code it can while waiting for loads) to aid in this process.

      The classic case with in-order designs is of course where the optimization process can't possibly do anything in between a load and a dependent branch, either due to a lack of registers to do anything else, a lack of execution pipes to do anything else, or there literally being nothing else to do (predictably) until the load or branch has taken place. Depending on the memory controller and DDR latency, you're typically looking at 7-12 cycles on your typical phone/tablet SoC for a DDR block load into the L2 cache and into a register. This chip seems like it may be clocked higher than a Cortex-A15, though, so let's assume it'll be even worse on Denver.

      This is where their 'aggressive HW prefetcher' comes into play, I assume, combined with their 128KiB I-cache prefetching and analysis/optimization engine. Denver has a relatively big (64KiB) L1 D-cache as well (for comparison, the Cortex-A15, which is also a large ARM core, has a 32KiB L1 D-cache per core). I would fully expect a large part of that cache is dedicated to filling idle memory-controller time with speculative loads, taking educated stabs in the dark at what loads are coming up in the code, in the hope of getting some right and mitigating the in-order branching/loading issues further.

      It looks to me like they've taken the practical experience of their GPGPU work over the years and applied it to a larger, more complex CPU core to try to achieve above-par single-core performance. But instead of going for massively parallel super-scalar SIMT (which clearly doesn't map to a single thread of execution), they've gone for 7-way MIMT and a big analysis engine (logic and caches) to try to turn single-threaded code into partially super-scalar code.

      This is indeed radically different from typical OoO designs, in that those designs waste the extra pipelines running code that ultimately doesn't need to be executed in order to mitigate branching performance issues (by running all branches when only one of their results matters), whereas Denver decided: "Hey, let's take the branch hit, but spend EVERY ONE of our pipelines executing code that matters, because in real-world scenarios we know there's a low degree of parallelism which we can run super-scalar, and we know that with a bit more knowledge we can predict and mitigate the branching issues anyway!"

      Hats off, I hope it works well for them - but only time will tell how it works in the real world.

      Fingers crossed; this is exactly the kind of out-of-the-box thinking we need to spark more hardware innovation. Imagine this does work well: how are AMD/ARM/IBM/Intel going to respond when their single-core performance is sub-par? We saw the ping-pong of performance battles between AMD and Intel in previous years; Intel has dominated for the last 5 years or so, unchallenged, and has ultimately stagnated in the past 3 years.
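
      For what it's worth, the load hoisting the parent describes looks roughly like this at the source level (a hedged sketch; function and variable names are invented, and a real optimizer works on micro-ops rather than C): the load is moved well ahead of the branch that consumes it, so the in-order pipeline has independent work to chew on while the load is in flight.

      ```c
      int unhoisted(const int *table, int idx, int a, int b)
      {
          int x = a * b + (a ^ b);   /* independent arithmetic           */
          int v = table[idx];        /* load issued late...              */
          if (v > x)                 /* ...so the branch stalls on it    */
              return v - x;
          return x - v;
      }

      int hoisted(const int *table, int idx, int a, int b)
      {
          int v = table[idx];        /* load issued first                */
          int x = a * b + (a ^ b);   /* latency hidden behind other work */
          if (v > x)
              return v - x;
          return x - v;
      }
      ```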

    • If their cache lines are 64 bytes, then it's quite possible that successive instructions (based on execution time stamp) are in the same cache line. Remember that this has to improve execution speed most of the time, and not decrease it. As for data caches, I'm not sure; a good prefetcher will help a lot here.
      This has the possibility to slow down execution... I wonder how often and for how long the execution of a thread can continue when there's a data cache miss... M

    • There might be some "hints" to the microprocessor about which data to cache; if so, those could be added to the generated microcode some time before they're really needed, increasing the chance of having the data available in cache and/or reducing wait time. Of course, I don't know for sure, but you could read a value into a register and then zero the register. This might be optimized out of the microprocessor's execution (so it won't consume energy to load and then zero the register), but still go through the data fetch engine, so it

  • If I understand this story correctly, the message is that if you get a tablet with this processor, avoid manufacturers who install a lot of bloatware.

  • by GuB-42 ( 2483988 ) on Tuesday August 12, 2014 @05:59AM (#47653653)

    A buffer in main memory, software that optimizes the most-used code... it looks like an OS job to me, something that could be implemented in the Linux kernel and benefit all CPUs, provided that you have the appropriate driver.

    According to the paper, it looks like the biggest novelty is... DRM. The optimizer code will be encrypted and will run in its own memory block, hidden from the OS. It will also make use of some special profiling instructions which could just as well be accessible to the OS. Maybe they will expose them, but they say nothing about it.

    • DRM? You could just as well call a whole CPU DRM'd; apart from the interface documentation (commands and their expected outcomes), a chip is a black box.
    • According to the paper, it looks like biggest novelty is... DRM. The optimizer code will be encrypted and will run in its own memory block, hidden from the OS.

      DRM is already fully supported in ARM processors. See TrustZone [arm.com], which provides a separate "secure virtual CPU" with on-chip RAM not accessible to the "normal" CPU and the ability to get the MMU to mark pages as "secure", which makes them inaccessible to the normal CPU. Peripherals can also have secure and non-secure modes, and their secure modes are accessible only to TrustZone. A separate OS and set of apps run in TrustZone. One DRM application of this is to have secure-mode code that decrypts encrypted v

  • ... that doesn't run Facebook?
    Otherwise, no buy.

  • by Anon E. Muss ( 808473 ) on Tuesday August 12, 2014 @07:28AM (#47653947)

    I think NVidia tied their hands by retaining the ARM architecture. I suspect the result will be a "worst of both worlds" processor that doesn't use less power or provide better performance than competitors.

    In-order execution, exposed pipelines, and software scheduling are not new ideas. They sound great in theory, but never seem to work out in practice. These architectures are unbeatable for certain tasks (e.g. DSP), but success as general-purpose processors has been elusive. History is littered with the corpses of dead architectures that attempted (and failed) to tame the beast.

    Personally, I'm very excited about the Mill [millcomputing.com] architecture. If anybody can tame the beast, it will be these guys.

    • Looking at SHIELD Tablet reviews, the K1 certainly appears to have the processing power, but actually putting it to use takes a heavy toll on the battery, with the SoC alone drawing over 6W under full load: in AnandTech's review, battery life drops from 4.3h to 2.2h when they disable the 30fps cap in GFXBench.

      The K1's processing power looks nice in theory but once combined with its power cost, it does not sound that good anymore.

      • The K1 in the SHIELD Tablet uses standard ARM Cortex-A15 cores, not the Denver CPU cores detailed in this story. Very different beasts.

        • The CPU side might be different but the GPU side remains the same and in GFXBench, the results will likely end up similar, give or take whatever they gain/lose on the CPU.

          If Nvidia wanted to go all-out with this Transmetaism, the logical thing to do would be to put together a custom ART runtime that merges with their online recompiler/optimizer.

  • by Theovon ( 109752 ) on Tuesday August 12, 2014 @07:51AM (#47654085)

    I'm an expert on CPU architecture. (I have a PhD in this area.)

    The idea of offloading instruction scheduling to the compiler is not new. This was particularly in mind when Intel designed Itanium, although it was a very important concept for in-order processors long before that. For most instruction sequences, latencies are predictable, so you can order instructions to improve throughput (reduce stalls). So it seems like a good idea to let the compiler do the work once and save on hardware. Except for one major monkey wrench:

    Memory load instructions

    Cache misses, and therefore access latencies, are effectively unpredictable. Sure, if you have a workload with a high cache hit rate, you can make assumptions about the L1D load latency and schedule instructions accordingly. That works okay, until you have a workload with a lot of cache misses. Then in-order designs fall on their faces. Why? Because a load miss is often followed by many instructions that are not dependent on the load, but only an out-of-order processor can continue on ahead and actually execute some instructions while the load is being serviced. Moreover, OOO designs can queue up multiple load misses, overlapping their stall time, and they can get many more instructions already decoded and waiting in instruction queues, shortening their effective latency when they finally do start executing. Also, OOO processors can schedule dynamically around dynamic instruction sequences (i.e. flow control making the exact sequence of instructions unknown at compile time).

    One Sun engineer talking about Rock described modern software workloads as races between long memory stalls. Depending on the memory footprint, a workload can spend more than half its time waiting on what is otherwise a low-probability event. The processor blasts through hundreds of instructions where the code has a high cache hit rate, then encounters a last-level cache miss and stalls out completely for hundreds of cycles (generally not on the load itself but on the first instruction dependent on the load, which always comes up pretty soon after). This pattern repeats over and over again, and the only way to deal with it is to hide as much of that stall as possible.

    With an OOO design, an L1 miss/L2 hit can be effectively and dynamically hidden by the instruction window. L2 (or in any case the last level) misses are hundreds of cycles, but an OOO design can continue to fetch and execute instructions during that memory stall, hiding a lot of (although not all of) that stall. Although it's good for optimizing poorly-ordered sequences of predictable instructions, OOO is more than anything else a solution to the variable memory latency problem. In modern systems, memory latencies are variable and very high, making OOO a massive win on throughput.

    Now, think about idle power and its impact on energy usage. When an in-order CPU stalls on memory, it's still burning power while waiting, while an OOO processor is still getting work done. As the idle proportion of total power increases, the usefulness of the extra die area for OOO increases, because, especially for interactive workloads, there is more frequent opportunity for the CPU to get its job done a lot sooner and then go into a low-power low-leakage state.

    So, back to the topic at hand: What they propose is basically static scheduling (by the compiler), except JIT. Very little information useful to instruction scheduling is going to be available JUST BEFORE time that is not available much earlier. What you'll basically get is some weak statistical information about which loads are more likely to stall than others, so that you can resequence instructions dependent on loads that are expected to stall. As a result, you may get a small improvement in throughput. What you don't get is the ability to handle unexpected stalls, overlapped stalls, or the ability to run ahead and execute only SOME of the instructions that follow the load. Those things are really what gives OOO its adva
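
    A quick back-of-the-envelope model of why the memory-stall point above dominates (all numbers are invented for illustration): even a 0.5% last-level miss rate at a 300-cycle penalty adds 1.5 cycles per instruction if nothing hides it, while an OoO window that hides, say, 60% of that penalty claws most of it back.

    ```c
    #include <stdio.h>

    int main(void)
    {
        double base_cpi     = 1.0;    /* CPI when everything hits in cache           */
        double miss_rate    = 0.005;  /* LLC misses per instruction (0.5%)           */
        double miss_penalty = 300.0;  /* stall cycles per unhidden miss              */
        double hidden_frac  = 0.6;    /* fraction of the penalty an OoO window hides */

        double in_order_cpi = base_cpi + miss_rate * miss_penalty;
        double ooo_cpi      = base_cpi + miss_rate * miss_penalty * (1.0 - hidden_frac);

        printf("in-order effective CPI: %.2f\n", in_order_cpi);  /* 2.50 */
        printf("OoO effective CPI:      %.2f\n", ooo_cpi);       /* 1.60 */
        return 0;
    }
    ```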

    • This is a good post (the point about hiding memory latency in particular), but you should still wait to judge the new chip until benchmarks are posted.

      If you have ever worked on a design team for a high performance modern CPU, you should know that high level classifications like OOO vs In-Order never tell the whole story, and most real designs are hybrids of multiple high level approaches.

    • by iamacat ( 583406 )

      One critical piece of information which is available JUST BEFORE time and not much earlier is which precise CPU (and rest of the device) the code is running on! I don't buy that an OOO processor can do as good a job optimizing in real time as a JIT compiler that has 100x the time to do its work. If a processor has cache prefetch/test instructions, these can be inserted "hundreds of cycles" before memory is actually used. OOO can work around a single stall, but how about a loop that accesses 128K of RAM, wit

      • by Theovon ( 109752 )

        Prefetches issued hundreds of cycles ahead of time have to be highly speculative, and therefore are likely to pull in data you don't need while missing some data you do need. If you can improve the cache statistics this way, you can improve performance, and if you avoid a lot of LLC misses, you can massively improve performance. But cache pollution is as big a problem as misses, because it causes conflict and capacity misses that you'd otherwise like to avoid.

        Anyhow, I see your point.
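
        For reference, software prefetching of the kind being discussed looks roughly like this (a hedged sketch using the GCC/Clang __builtin_prefetch intrinsic; the 16-element prefetch distance is an arbitrary guess, and too aggressive a distance causes exactly the cache pollution described above):

        ```c
        long sum_with_prefetch(const long *a, long n)
        {
            long s = 0;
            for (long i = 0; i < n; i++) {
                if (i + 16 < n)                      /* stay inside the array       */
                    __builtin_prefetch(&a[i + 16],   /* address expected to be used */
                                       0,            /* 0 = prefetch for read       */
                                       1);           /* low temporal locality       */
                s += a[i];
            }
            return s;
        }
        ```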

    • by Rockoon ( 1252108 ) on Tuesday August 12, 2014 @09:56AM (#47654933)

      So it seems like a good idea to let the compiler do the work once and save on hardware. Except for one major monkey wrench: Memory load instructions

      That's not the only monkey wrench. Compilers simply aren't good enough in general, and there is little evidence that they could be made good enough on a consistent basis, because architectures keep evolving and very few compilers actually model specific architecture pipelines...

      This is why Intel now designs their architectures to execute what compilers produce well, rather than the other way around. Intel would not have 5 asymmetric execution units with lots of functionality overlap in its latest CPUs if compilers didn't frequently produce code that requires it...

      Which leads to compiler writers spending the majority of their effort on big-picture optimizations, because Intel et al. are dealing with the low-level scheduling issues for them... the circle is complete; it's self-sustaining.

    • by AmiMoJo ( 196126 ) *

      I can only assume that Nvidia's engineers are aware of all this, since it is pretty basic stuff when it comes to CPU design really, and that TFA is simply too low on detail to explain what they are really doing.

      Maybe it is some kind of hybrid where they still have some OOO capability, just reduced and compensated for by the optimization they talk about. It can't be as simple as TFA makes out, because as you say that wouldn't work.

    • I think your generalization of static scheduling performs poorly on a Mill. :) The Mill architecture [millcomputing.com] uses techniques which essentially eliminate stalls even with static scheduling, at least to about the same extent that an OOO can. Obviously, there will be cases where it will stall on main memory, but those are unavoidable on either. See the Memory talk in particular for how the Mill achieves this, and other improvements possible over OOO. The entire series of videos is fascinating if you have time, but

      • by Theovon ( 109752 )

        I've heard of Mill. I also tried reading about it and got bored part way through. I wonder why Mill hasn't gotten much traction. It also bugs me that it comes up on regular Google but not Google Scholar. If they want to get traction with this architecture, they're going to have to start publishing in peer-reviewed venues.

      • by Theovon ( 109752 )

        I looked at the Mill memory system. The main clever bit is to be able to issue loads in advance, but have the data returned correspond to the time the instruction retires, not when it's issued. This avoids aliasing problems. Still, you can't always know your address way far in advance, and Mill still has challenges with hoisting loads over flow control.

        • One might expect that, but the Mill is exceptionally flexible when it comes to flow control. It can speculate through branches and have loads in flight through function calls. The speculation capabilities are far more powerful, and there are a lot of functional units to throw at it. There will be corner cases where an OOO might do slightly better, but in general the scales should be tipped in the other direction. If anything, the instruction window on conventional hardware is more limiting.

          Papers would be

          • by Theovon ( 109752 )

            Peer-reviewed venues don't reject things that are too novel on principle. They reject them on the basis of poor experimental evidence. I think someone's BS'ing you about the lack of novelty claim, but the lack of hard numbers makes sense.

            Perhaps the best thing to do would be to synthesize Mill and some other processor (e.g. OpenRISC) for FPGA and then run a bunch of benchmarks. Along with logic area and energy usage, that would be more than good enough to get into ISCA, MICRO, or HPCA.

            I see nothing about

  • Scalar designs just simply attached more cache... more hits and speculative loads (/MMU) solved it for SPARC/MIPS/Power

    The HP research into Dynamo and later the Transmeta design concepts showed promise but delivered no product beyond small samples (under 1 million shipped) and yet people's houses...

    I was most excited by Dynamo and VLIW (Itanium promised so much and delivered so little); LLVM provides some interesting concepts.

    I would really like Texas Instruments (TI) back in the game as I think a

  • "There's no need to keep decoding it for execution over and over."

    So kind of in bed with a combo of ReadyBoost and XP's precache.

    Why not just use ReadyBoost/precache and make it work the same freaking way?

  • Why not have all applications ship in LLVM intermediate format and then have on-device firmware translate them according to the exact instruction set and performance characteristics of the CPU? By the time code is compiled to the ARM instruction set, too much information is lost to do fundamental optimizations, like vectorizing loops when the applicable operations are supported.
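
    As a toy example of what gets lost (a hedged sketch; the function is invented): the loop below is trivially vectorizable, but once it has been lowered to a fixed ARM instruction sequence the SIMD width of the original target is baked in, whereas an on-device translator working from an intermediate form could pick the widest vectors the actual CPU supports.

    ```c
    void saxpy(float *y, const float *x, float a, int n)
    {
        for (int i = 0; i < n; i++)      /* no cross-iteration dependences, so a */
            y[i] = a * x[i] + y[i];      /* target-aware compiler can vectorize  */
    }
    ```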

  • Suppose for a moment that you are building a new processor for mobile devices.

    The mobile device makers (Apple, Google, and Microsoft) all have "App Stores". Side loading is possible to varying degrees, but in no case is it supported or a targeted business scenario.

    These big 3 all provide their own SDKs. They specify the compilers, the libraries, etc.

    Many of the posts in this thread talk about how critical it will be for the compilers to produce code well suited for this processor...

    Arguably, due to the app development toolchain and software delivery monoculture attached to each of the mobile platforms, it is probably easier than ever to improve compilers and transparently update the apps being held in app-store catalogs to improve their performance for specific mobile processors.

    It's not the wild west any more; with tighter constraints around the target environment, more specific optimizations become plausible.

    • No one needs to do anything for software to run on these at all. nVidia would be developing a kernel module or something that would JIT existing software into their optimized in-order pipeline, then cache the result. The out-of-order architectures all do this too, in hardware (which maybe uses more power, but also executes more quickly and theoretically gets into sleep mode more often).

      There's no need for anyone to generate special code for these CPUs, but it is interesting that a common perception is tha

  • Once Denver sees you run Facebook or Candy Crush a few times, it's got the code optimized and waiting.

    I am so fortunate to live in such an advanced age of graphics processors, that let me run the equivalent of a web browser application and a 2D tetris game. What progress! We truly live in an age of enlightenment!

  • Does this architecture require us to load the "NVidia processor driver", which comes with 100 megabytes of code specializations for every game shipped?
    That is, after all, why their graphics drivers perform so well: they patch all the shaders in top-end games...
