Nvidia Mulls Cheap, Integrated x86 Chip
CWmike writes "Nvidia is considering developing an integrated chip based on the x86 architecture for use in devices such as netbooks and mobile Internet devices, said Michael Hara, vice president of investor relations at Nvidia, during a speech webcast from the Morgan Stanley Technology Conference this week. Nvidia has already developed an integrated chip called Tegra, which combines an ARM processor, a GeForce graphics core and other components on a single chip. The chips are aimed at small devices such as smartphones and MIDs, and will start shipping in the second half of this year. 'Tegra, by any definition, is a complete computer-on-chip, and the requirements of that market are such that you have to be very low power and very small but highly efficient,' Hara said. 'Someday, it's going to make sense to take the same approach in the x86 market as well.'"
oh god, please no. (Score:2, Insightful)
For those of us who dealt with Intel's "integrated" graphics cards on laptops for the past several years now... on their behalf I just want to say PLEASE FOR THE LOVE OF ALL THINGS SHINY AND SILICON, DON'T DO IT! Anything with the word "integrated" near it makes me want to cringe... it's a post-traumatic stress response caused by watching a myriad of good video games stutter, blink, crash, and burn right in front of me. It's a black day indeed when Warcraft 3 can't run at full resolution on a laptop produced…
Re: (Score:3, Informative)
Heh, funny that you mention it.
I run WC3 on my 1.5-year-old MacBook Pro and I use 1024x768, medium, high, medium, high, high, on, on, high. So medium models and textures ...
And I agree, especially in this price range :D
Re: (Score:2, Informative)
Are you sure that you're running a Universal Binary version? If not, then your CPU has to emulate PPC instructions. Get the latest updates from Blizzard and your experience should be a lot better.
Re: (Score:2)
Well, since we talked about it I raised the res to 1440x900 all the same (medium textures and models), and it lagged like crazy even when there were only heroes plus maybe 5 units each during the early rush.
Then again, I play 3on3 RT and in OS X; I'm confident it would run smoothly in Windows. I think my 6800 LE may have run better in Windows than this 8600M GT does in OS X.
Well, my 8600M GT doesn't come close, so no, no chance in hell the 9400M would do it.
I don't remember where to check whether it's the PPC or Intel version, but I'm qu…
Re:oh god, please no. (Score:4, Informative)
Um, wat? I have the same model you do (it's the Santa Rosa MBP with the 8600M GT, yes?) and it has no problem running WC3 at full res with everything maxed. Anything less and your computer probably has something wrong with it.
I can run WoW in Dalaran (for those not familiar with the game, the busiest city) on a packed server, or do a full 25-man raid with everything but view distance maxed and view distance at around 1/3 of max, and still average ~30+ FPS. If I go any higher on distance I need to lower most other settings, as I think that's when all the various armor/player/model/building/etc. textures start causing the 256 MB of graphics RAM to have to swap out and things start getting shitty. WC3 is much less graphically intense than that, even if you've got two huge armies going at it.
Maybe an early sign of this: http://support.apple.com/kb/TS2377 [apple.com]
I got that, but Apple fixes it for free, warranty or not, since it's Nvidia's manufacturing problem (my understanding is it's conceptually the same problem as the RRoD, only on your laptop).
Re: (Score:2)
Anything with the word "integrated" near it makes me want to cringe...
But then again Apple claims the new Mac Mini with the 9400M and shared RAM can run the latest games...
Reality distortion field [checked]
Re: (Score:2)
But while I'm already burning karma maybe I should point out this:
http://www.nvidia.com/object/product_geforce_9100m_g_mgpu_us.html [nvidia.com]
So motherboard GPU GeForce 9100M G + SLI-connected discrete GPU 9200M/9300M GS = 9400M.
So it seems like it's not an improved 8400M, and the 9300M GS is slower than the 8400M GS (which is slower than the 8400M GT :)
http://www.nvidia.com/object/geforce_9100m_g_mgpu.html [nvidia.com]
Faster than the X4500, but it still sucks for anyone who actually needs any graphics power, and to claim that it will run the lat…
ah gamers... (Score:5, Funny)
My Intel 855GM handles xterms very well. Recently they have become very wobbly and slimy when I drag them around in GNOME; other than that, everything is fine with my integrated chip.
Re:oh god, please no. (Score:5, Funny)
Yes, because what I want to do is slot a PCIe card into my damn cell phone.
Re:Well, that is what netbooks do (Score:5, Interesting)
Or external PCIe. I've been waiting for that. The PCIe standard has it specified; just nobody wants to make stuff for it. Think of it this way: you come home, you plug a box (with its own PSU) into your laptop, and you can now game on your laptop with whatever cards you had put in that box. When you're done, unplug everything, switch your resolution/drivers if necessary, and go.
Re:Well, that is what netbooks do (Score:4, Informative)
Dude, external PCIe has been available in laptops for years; it's called ExpressCard. And surprise, it's even used for external graphics: http://www.tomshardware.com/reviews/vidock-expresscard-graphics,1933.html [tomshardware.com]
Re:oh god, please no. (Score:4, Insightful)
It's being designed for netbooks, which aren't typically designed for gamers.
Fortunately, the one good thing that's come from Vista is that now almost all new computers come with decent graphics cards.
I hated looking for new laptops that were $800 and finding out they had integrated graphics, then being forced to pay for the "premium" product tier to get discrete graphics, which included a much more expensive processor and RAM.
With a desktop, you can just buy a $500 PC at Walmart and drop in a decent graphics card.
Re: (Score:2)
With a desktop, you can just buy a $500 PC at Walmart and drop in a decent graphics card.
With a desktop PC, it's also a lot harder to move the PC from your home office to the TV room when you want to play games on the 32" flat screen. But then, I guess you could buy a second desktop PC for the TV room with the money you save vs. a laptop PC.
Re: (Score:3, Insightful)
Or you could get a 30" flat screen for your desktop, a nice audio system, and a comfortable chair and not have duplicate media setups.
For bonus points, put a couch behind your chair & move the chair out of the way when you have guests.
Re: (Score:2)
Re: (Score:2)
[A 15 meter DVI cable] takes care of video. Now what about audio, keyboard, mouse and game controllers?
Audio is easy; USB is harder, as USB cables can't be longer than 5 meters. You can use a chain of five hubs in a row, but that gets expensive, and at least some of your hubs will have to be self-powered (that is, using their own wall wart instead of sucking 5 V from upstream). In fact, USB.org [usb.org] recommends something that amounts to tunneling USB over Ethernet for long distances. But by far, the hardest step is drilling holes in your walls, especially if you live in a state with a restrictive electrical code o…
Re: (Score:2)
http://www.iofast.com/product_info.php/products_id/3452
It probably will use an Intel CPU core (Score:2)
Re: (Score:2)
Anything with the word "integrated" near it makes me want to cringe...
So you don't want to use any silicon chip then?
http://en.wikipedia.org/wiki/Integrated_circuit [wikipedia.org]
Re: (Score:3, Funny)
Re: (Score:2)
Frozen Throne runs fine on my stinking Eee PC 900HA. And it has a three-generation-old, underclocked Intel GMA 950.
Frozen Throne should run great on a GMA X4500. Even WoW runs OK on a GMA X4500.
Re: (Score:2)
I think graphics with an integrated motherboard will fare much better than the opposite.
Re: (Score:2)
Warcraft 3 requires something like a Voodoo 3 card. Even the slowest integrated gfx chips today outperform that by something like 5-10 times.
In fact, you could get faster frame rates with a dual core CPU doing ALL the rendering into the frame buffer.
There's something wrong with your laptop.
Re: (Score:2)
I have to say, WC3 runs happily on my year-old MacBook.
Re: (Score:2)
Not all computers are for gamers.
Re: (Score:2, Insightful)
Prediction.. (Score:5, Funny)
Re:Prediction.. (Score:5, Funny)
Wow, do you even know anything about x86?
Re:Prediction.. (Score:4, Funny)
Re:Prediction.. (Score:5, Interesting)
Re:Prediction.. (Score:5, Funny)
Wow, now all we need is to connect the GPU to the FSB/QPI, make it support pagetables, interrupts, DMA, CPU-style L1/2/3 coherent cache, memory controller with synchronous fencing, legacy and long modes for pointers and instructions, etc.... and then we'll have something that can possibly emulate an x86 CPU at only 99.9% performance penalty!!
Or, you know, not.
Re: (Score:2)
If we could somehow integrate the photon torpedoes to initiate a reverse tachyon beam, I think we might have a real challenger on our hands.
Too much Star Trek?
Re: (Score:2)
Interrupts, DMA, memory control, etc. are NOT implemented in software; they require cooperation from the motherboard at minimum. Your GPU would need significant hardware extensions to be able to function as a CPU sufficiently well that a BIOS could use it to bootstrap and access host hardware directly. At that point it may as well be a CPU.
There's a "cycle of reincarnation" with hardware. First, new hardware is created to meet new requirements; then, as it is refined, optimised and physically shrunk, it is integra…
Re: (Score:2)
However, GPUs are a little different. The programming model of a GPU is as follows:
Load in a block of code that performs some kind of mathematical operation (known as a kernel).
Specify the block of data to run the kernel against and the block of data to put the output in.
Run the kernel.
For an x86 program that typically cons…
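To make that three-step model concrete, here is a minimal CUDA sketch of the same flow -- load a kernel, point it at an input block and an output block, and run it over the data. The kernel, names and sizes are purely illustrative and not taken from the parent post:

// Minimal CUDA sketch of the kernel/data-block model described above.
#include <cstdio>
#include <cuda_runtime.h>

// The "block of code that performs some kind of mathematical operation" (the kernel).
__global__ void scale(const float *in, float *out, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * factor;  // one independent operation per data element
}

int main()
{
    const int n = 1 << 20;                    // size of the data block
    const size_t bytes = n * sizeof(float);

    // Host-side input and output blocks.
    float *h_in = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    // Specify the block of data to run the kernel against and the block for the output.
    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    // Run the kernel.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_in, d_out, 2.0f, n);

    // Fetch the results back.
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("out[42] = %f\n", h_out[42]);

    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}

An x86 program, by contrast, is one long instruction stream full of branches and side effects, which is why these data-parallel units are not a drop-in replacement for a CPU core.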
Re: (Score:2)
Re: (Score:3, Funny)
Not just x86, but this guy clearly doesn't know anything about CPU/GPUs at all. Kinda like an old friend of mine, convinced he was going to design his own CPU, despite not knowing anything about computer hardware. Or even computer software. Sure could play games though, and smoke a lot of dope.
Re: (Score:2)
Re: (Score:2)
VIA's got a pretty strong CPU; the Nano holds its own for the low-power segments.
An nVidia-VIA partnership would have worked wonders, but nVidia went there and came back. For some reason they wanted to do ION instead. What a sick joke; and here I was hoping for a VIA Nano with a 9400 chipset.
Re: (Score:2)
There are perfectly valid patent…
Re: (Score:2)
Although you are correct in a moral indignation sense, sadly you are wrong in an actual reality sense. Instruction sets have been patentable for many years, and form a large part of the Intel/AMD war-chest that they can threaten impudent upstarts with. The standard trick that gets used (and has been defended in court) is to claim that the instruction is a method of encoding a particular process. Once you reduce things to a process, a patent examiner, and then a court, are more likely to go along with you.
Re: (Score:2)
You forgot the part about it somehow, some way, being litigated to death and never seeing the light of day.
If they bail out, then the headline will (Score:4, Funny)
read:
"Nvidia NULLS Cheap, Integrated x86 Chip "
Re: (Score:3, Funny)
Netbooks? (Score:5, Funny)
Re: (Score:2)
I have always liked "subnote" better than "netbook." I think it describes these things better.
Why? You're in the navy?
I don't see the point. (Score:4, Interesting)
Surely a better design is to produce a series of very small, highly specialized, very fast cores on a single piece of silicon, and then have a layer on top of that which makes it appear to be an x86, ARM or whatever.
One reason for having a bunch of specialist cores is that you don't have one core per task (GPU, CPU or whatever), but rather one core per operation type (which means you can eliminate redundancy).
Another reason is that having a bunch of mini cores should make the hardware per mini core much simpler, which should improve reliability and speed.
Finally, such an approach means that the base layers can be the same whether the top layer is x86, ARM, PPC, Sparc or a walrus. NVidia could be free to innovate the stuff that matters, without having to care what architecture was fashionable that week for the market NVidia happens to care about.
This is not their approach, from everything I'm seeing. They seem to want to build tightly integrated system-on-a-chip cores, rather than having a generic SoaC and an emulation layer. I would have thought this harder to architect, slower to develop and more costly to verify, but NVidia aren't idiots. They'll have looked at the options and chosen the one they're following for business and/or technical reasons they have carefully studied.
If I were as bright as them, why is it that they have the big cash and I only get the 4-digit UID? Ergo, their reasoning is probably very sound and very rational, and if presented with my thoughts they could very likely produce an excellent counter-argument to show why their option is logically superior and will produce better returns on their investments.
The question then changes as follows: What reasoning could they have come up with to design a SoaC unit the way they are? If it's the "best" option, although demonstrably not the only option, then what makes it the best, and what is it the best at?
Re:I don't see the point. (Score:5, Funny)
>Finally, such an approach means that the base layers can be the
>same whether the top layer is x86, ARM, PPC, Sparc or a walrus.
So much for running linux on it!
BIOS ERROR.
NO OPERATING SYSTEM FOUND.
THE WALRUS HAS EATEN THE PENGUIN.
hawk
Re: (Score:2)
Surely a better design is to produce a series of very small, highly specialized, very fast cores on a single piece of silicon, and then have a layer on top of that which makes it appear to be an x86, ARM or whatever.
Yes, they call that a modern x86 CPU.
They don't create the x86 instruction set in hardware anymore. They just have a translation layer in hardware that takes the x86 code and runs it on another type of hardware (usually a RISC core).
The internal execution core of this type of CPU [a modern x86] is actually a "machine within the machine", that functions internally as a RISC processor but externally like a CISC processor. The way this works is explained in more detail in other sections in this area, but in a nutshell, it does this by translating (on the fly, in hardware) the CISC instructions into one or more RISC instructions. It then processes these using multiple RISC execution units inside the processor core.
http://www.tek-tips.com/faqs.cfm?fid=788 [tek-tips.com]
Incidentally, CISC had a big advantage over RISC: each instruction typically did more, so for a given program a CISC computer will typically use less code, saving cache, memory and bandwidth. So modern x86 CPUs have the advantage of…
Re: (Score:2)
Modern CPUs perform branch prediction (partly guided by the binary itself) and will speculatively fetch a weighted portion of the possible code following a branch. There is still a cost to branching, of course, but it is far lower than it was in the old days - it's down to barely more than a static jump. Also, that's why Profile Guided Optimisation makes such a big difference even to an already "optimised" binary.
Can you just imagine?! (Score:4, Funny)
They should just go with ARM (Score:5, Interesting)
They should just push ARM heavily. ARM is doing great right now. Companies like Texas Instruments are pushing the architecture heavily, and there's high demand.
Linux ARM support is blasting ahead, thanks to projects like the Beagleboard.
On top of that, a while ago Microsoft said they were developing an ARM version of Windows. Although we won't see it right away, in a couple years that'll open up even more options.
If they push ARM hardware heavily enough, software will follow. Heck, the software is already coming along, so they just have to market the hardware properly.
Most people won't know the difference between a Linux MID and a Windows MID. Both have "Email", "Instant Messenger", "Calendar", "Web Browser", etc., and if you need a new program you just download it... Nobody would even think of installing software off a CD, so most "Why won't this work?" scenarios won't even come up. It'll just look slightly different.
And once a couple game devs follow - or heck, a program like Google Earth - it won't be long before oodles of software is being ported, and the ARM-x86 barrier breaks down.
Re: (Score:2)
Re: (Score:2)
Don't market mhz, then. Market capabilities.
If the ARM laptop has "plays 1080p" slapped on the front, and the Atom laptop has "plays 720p" slapped on, which do you think the consumer will buy? What if the ARM laptop has a "16 hour battery life", and the Atom laptop has a "9 hour battery life"? What if the ARM laptop costs $50 less?
I just hope the salespeople educate them a bit on the differences. :P Linux vs Windows needs to be explained... as Dell seems to have pointed out.
Re: (Score:2)
If they push ARM hardware heavily enough, software will follow.
(1) It doesn't matter how hard they push ARM; all the legacy x86 software won't magically work.
(2) I don't think Nvidia has what it takes to push Microsoft hard enough to get a mainstream version of Windows or Office on ARM, and without Windows goes a huge number of people who refuse to try something different.
(3) Nvidia's strength - the GPU - is best used in games. Without focus there, there is no reason to go with Nvidia's solution over Intel's or AMD's or Via's. Without Windows that just isn't go…
Re: (Score:2)
Nvidia's GPU also has VDPAU [wikipedia.org], which means that the GPU can do nearly all of the work decoding video, even HD H.264. So pair it with a weak little power-efficient CPU and you have an excellent video player. Intel can't do that.
Re:They should just go with ARM (Score:5, Interesting)
The reason to go with x86 is that ARM is just as shitty an architecture.
Seven supervisor modes now? Horrible page table format? Have you seen what they are planning for 64-bit addressing?
Even more important than the CPU architecture, the ARM buses are typically very low performance. And if most of the time is spent on memory movement, having a better bus dwarfs what's going on with the CPU.
So, in the end, you have slow cores. Intel knows how to make x86 fast. And, as they are starting to show, they can make it low power also. ARM has yet to show a fast core. They don't use that much power, but if "netbooks" are low end laptops instead of high end cell phones, a few watts is fine.
Oh, and did I mention that x86 cores are x86 compatible? That makes the software barrier to entry a lot lower.
To compete with Intel, you have to be better. A lot better. For the very low end, ARM is better, because all that matters is leakage power, and after that all that matters is power for very small amounts of processing. At a higher level of performance, ARM is different, but perhaps not better. Maybe the ARM architecture has some features which make it less complex to implement than x86. But at the end of the day, if nobody is making ARM cores that spank x86 cores, x86 will win. Didn't you learn this from PowerPC? Don't you realize the same thing will probably happen to ARM except at the extremely low end? And even there, if Intel decides to start licensing 386 synthesizable cores, how long do you think ARM7 and ARM9 will last?
Re:They should just go with ARM (Score:5, Interesting)
Yes, the ARM architecture is horrible and slow - but it also integrates really easily with other kinds of chips.
How long have we had ARM SoCs with CPU, GPU, MMU, plus a dozen other chips all in a single chip? An ARM "CPU" (SoC) isn't just the CPU part. It also has dozens of other chips inside it for accelerating specific types of processing, and all with remarkably low power consumption.
ARM is less complex than x86. Both ARM and x86 are moving towards integrating more and more stuff on a single die. Which do you think will work better - the simpler architecture (though not vastly simpler) with rapidly improving speeds, or x86? ARM has more experience in this area. They'll win.
You say to compete with Intel "you have to be better", but your opinion on what makes a CPU better is flawed. POWER6 stomped Intel for performance. Even today, for FPU stuff, it's still about 100% faster than Core i7 (per GHz - and it scales up past 5 GHz on air), and I don't see it dominating the market at all!
ARM will win for these reasons:
-Lower cost.
-Lower power consumption.
-Much smaller size. (smaller devices appeal to many people)
-Similar/better performance for specific tasks(like video decoding/recording).
-Efficient software base.
-Appealing to device manufacturers.
Yes, x86 is compatible with everything under the sun, but everything under the sun is incredibly inefficient, and designed to run on desktop dual/quad-core systems.
You're arguing about what the consumers want, but you're thinking like a techy. If you put an x86 program next to a well-coded ARM program, they'll both run just as responsively, and at the end of the day, to end users, responsiveness is what determines "speed".
x86 may "spank" arm, but consumers think Vista is "slow" because it takes 30 seconds to delete a file that took 0.5 seconds in XP, and it requires more RAM. They don't give a shit that the kernel may be 5% more efficient. :P They don't care that they have a 2.6ghz dual-core CPU rather than a 2.6ghz single-core CPU, if it feels slower than before. (because of flaws with software)
All this puts the importance on software quality rather than the hardware. But software is easy, for ARM. ARM has no super-fast desktop line that would spur the growth of inefficient crapware.
Don't you feel lucky that we are to have tons of open source developers making quality software that runs on ARM devices? And piles of device manufacturers ready to push linux/FOSS software on these devices?
Too bad there's so few x86 device manufacturers pushing linux/FOSS. More support and demand would really spur growth of efficient software for netbooks and the like - but we do have Dell, I guess. :P
Re: (Score:2)
Only x86 has already been there, more than a decade ago. Remember the Cyrix MediaGX?
Re: (Score:2, Interesting)
On top of that, a while ago Microsoft said they were developing an ARM version of Windows.
They already have one. It's called Windows CE.
At my work we develop .NET on Windows CE on ARM. It works quite well, believe it or not.
nVidia releases ... (Score:2)
This just in, nVidia announces world's first netbook to require not one but two separate AC adapters at all times. Other features include built-in vacuum cleaner noise generator, and thermal pubic hair remover.
A little off topic but I want to know (Score:2)
It seems that everything is moving in the direction of operational efficiency. More instructions per cycle, less power draw, faster and more efficient buses among processor, memory and peripheral devices are among important issues being focused on.
But what ever happened to Moore's law? Are we already outside of its prediction? Has the chain been broken? I thought we would all have 5GHz machines running ice-cold by now, but some of the latest and greatest stuff is a mere 1.6GHz Atom-based sub-no…
Re: (Score:3, Interesting)
But what ever happened to Moore's law? Are we already outside of its prediction? Has the chain been broken?
Effectively, yes. The problem is not cost per gate and wafer real estate per gate, which continue to decrease. It's heat dissipation per unit area. I've been to semiconductor talks where there are charts of increasing heat dissipation with lines marked "room temperature", "soldering iron", "nuclear reactor", and "surface of sun". The trend is clear and not encouraging.
The effect is that comput…
They should open the drivers first (Score:3, Insightful)
The problem is that more and more netbooks are sold with Linux, and NVIDIA driver integration in any distro is less than stellar. Contrast that with Intel hardware, where everything is well supported by all vendors.
Unless they open their drivers, this platform will be Windows-only so even their lower-end models will be hampered by the Windows Tax.
That won't go very far.
Re: (Score:2)
not only that but what exactly do you think your handheld/phone is going to do with more than 4 gig of ram per process?
Re:x86? (Score:4, Funny)
not only that but what exactly do you think your handheld/phone is going to do with more than 4 gig of ram per process?
Nothing, as 640K is enough for any phone.
Re: (Score:2)
"640K should be enough for anybody", updated for 2009.
Plus, TFS specifically mentions netbooks as part of where the chips would be used, and netbooks are more likely than handhelds/phones to make that particular feature relevant, as their use cases are pretty much those of a general-purpose laptop -- but with more focus on battery life (hence, power consumption), weight, and cost. I suspect…
Re: (Score:2)
***WOOOOOOSHHHH***
Wow, did Chuck Norris just go by, or did you miss a joke?
Re: (Score:2)
Re: (Score:2)
From what I can find [vzw.com], that device has 196 MB RAM and 1 GB of onboard storage (this is in addition to the 128 MB of flash for the OS and the microSD slot).
This is different than having 1 GB of byte-addressable DRAM.
Re: (Score:3, Interesting)
The demarcation of storage and RAM is a legacy constraint forced by hardware limitations. Ubiquitous 64-bit and SSD will blur and eventually totally eliminate this separation.
Re: (Score:2)
Bullshit. The difference between flash storage and RAM is hardly a legacy constraint; the hardware limitations are very significant.
Flash devices are many orders of magnitude slower than DRAM. Flash devices have very limited write cycles, whereas DRAM does not. Flash devices operate on large (erase) block sizes - starting at 64KB.
Re: (Score:2)
Sure is lack of imagination in here...
Re: (Score:2)
*sigh*
You know, I get the same, or similar, response every time I talk to a proponent of any type of bullshit.
"You don't see how Elvis could still alive? Gee, you sure are closed minded ..."
Accusing people of being closed-minded or unimaginative seems to be the equivalent of saying "I have no evidence and no clue what I'm talking about, so I'm going to attack you just so you'll shut up".
Re: (Score:2)
Look man, extrapolating SSD into an argument about flash vs SDRAM just shows a lack of imagination. I'll see you in ten years.
Re: (Score:2)
No, it shows that the poster that you are arguing with understands the different physical constraints between volatile and persistent storage. He also understands that we can make external storage look like RAM already, and the reason that we don't do so is that abstraction would kill performance for most applications that need to be aware of the distinction.
You have nothing in response other than "blah blah imagination I'll be right in ten years". Yeah, 'course you will, sunshine. As we progress down the fab…
Re: (Score:2)
Who said anything about flash-based SSD? I said SSD as in SOLID-STATE NON-VOLATILE STORAGE. Regardless, flash now is faster than CPU registers were not so long ago.
Re: (Score:2)
I never said that SSD would eliminate RAM either, which in turn will not eliminate on-die cache or registers in the foreseeable future. When you've got a practically infinite address space, you may as well use it.
Re: (Score:2)
True, but Level 1, 2, and 3 CPU caches are all much faster than DRAM too, and yet they share the same address space. Theoretically, you could just make the SSD byte addressable and have your "main memory" DRAM act as a Level N+1 cache for the SSD. If you want a file system you run a RAM disk :-) That would make systems come up much faster from a power-off mode, but system resets would be more challenging to pull off.
Alternatively, you could also think of it as the SSD being a big persistent swap partition…
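The nearest thing you can do with today's hardware is to mmap a file that lives on the SSD: ordinary loads and stores then address the storage byte by byte, and the kernel's page cache in DRAM plays the part of that extra cache level. A rough sketch under those assumptions (the filename and sizes are made up purely for illustration):

/* Byte-address persistent storage through the VM system; DRAM (the page cache)
 * acts as the cache in front of the SSD. Purely illustrative. */
#include <stdio.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t size = 1 << 20;              /* 1 MB of "persistent memory" */
    int fd = open("store.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, (off_t)size) != 0)
        return 1;

    /* Map the file so plain pointer accesses address it byte by byte. */
    unsigned char *mem = (unsigned char *)mmap(NULL, size,
                                               PROT_READ | PROT_WRITE,
                                               MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED)
        return 1;

    /* No read()/write() file I/O in sight -- just memory operations. */
    strcpy((char *)mem, "hello, persistent world");
    mem[4242] = 0x42;

    msync(mem, size, MS_SYNC);                /* flush the DRAM "cache" to the SSD */
    munmap(mem, size);
    close(fd);
    printf("wrote directly into the mapped store\n");
    return 0;
}

If the underlying storage ever becomes as fast as DRAM, the explicit copy into a separate "main memory" could in principle disappear entirely, which is the kind of blurring the parent posts are describing.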
Re: (Score:2)
You can buy an SSD now which consists purely of the same RAM sticks as on your motherboard. This is the idea that I'm getting at. So how do we handle it? It seems a bit wasteful to limit that massive bandwidth through legacy storage interfaces, so put it on the RAM bus. Also a bit redundant to double-handle it via antiquated filesystem IO, so map it directly.
See where it's going...
Re: (Score:2)
Back it up to floppy.
Re: (Score:2)
Easy - you already have a separation between volatile and non-volatile storage in all operating system kernels. It doesn't matter whether the underlying technology is the same - in fact you can deliberately blur the line by using part of your RAM as a virtual disk (tmpfs + file + losetup in Linux, memory-backed vnode in FreeBSD).
Re: (Score:2)
That's not quite true - the CPU's MMU handles some housekeeping on used RAM. However that's none of the motherboard's business, especially with memory controllers being integrated into the CPU these days. And even that information is far too limited to implement page replacement, which is still the responsibility of a sophisticated kernel.
Re: (Score:2)
I would be brave enough to say that in the not-too-distant future, 1GB storage will be about as common as 1KB RAM chips are now.
Re: (Score:3, Funny)
Why bother at all? Better go straight to x64; I mean, even the lowliest of Nvidia GPUs is already 64 bits, so why bother with 32-bit technology?
The day an embedded system's CPU needs to address more than 4 gigs of memory (which is essentially why you would shift from a 32-bit to a 64-bit CPU) is the day my shit turns purple and smells like rainbow sherbet.
Re: (Score:2)
Why bother at all? Better go straight to x64; I mean, even the lowliest of Nvidia GPUs is already 64 bits, so why bother with 32-bit technology?
The day an embedded system's CPU needs to address more than 4 gigs of memory (which is essentially why you would shift from a 32-bit to a 64-bit CPU) is the day my shit turns purple and smells like rainbow sherbet.
Is 640K enough for you still?
Re: (Score:3, Funny)
Get back to him in a couple of years for something really weird.
Re: (Score:2)
The iPhone packs 128 megs, as does the BlackBerry Bold. A modern smartphone packs as much RAM as the average desktop did a little under a decade ago. SODIM…
limits of 32 bits (Score:2)
I ran into a limit with 32 bits more than 10 years ago.
My Fortran compiler was Cray-derived (Lahey, iirc), and I had dynamically allocated a huge array. They were in some way bit-addressed, leading to a crash.
It turns out that my adviser's machine had more memory (512MB) than any of their own test machines.
The workaround at the time was static allocation, which made the code faster, anyway.
hawk
Re: (Score:2)
What about a NAS box? I'm considering building one using commodity PC parts, and when you can have a 4GB disk cache for $40, it's tempting. Though on the other hand, my desktops are kind of dated, and having a NAS box with twice the RAM or more of any computer it would be serving files to would be completely ridiculous.
Re:x86? (Score:5, Informative)
Re: (Score:3, Insightful)
x64 is a Microsoft marketing term. Please stop using it. The architecture is x86-64.
Pray tell, why not amd64 then - the way it was originally marketed by the inventor when it was released?
Last I checked, there's no definite established term [wikipedia.org] for this, anyway, and x64 is the shortest while still being vendor-neutral. Even if Microsoft came up with it first (and are you sure they did, really?), so what? I don't understand how it is a "marketing term" for them, as they don't market it.
Re: (Score:2)
Or, while everyone is nitpicking...
Why not refer to it by "AMD's original designation for this processor architecture, 'x86-64'"?!
Re: (Score:2)
x64 probably started as a verbal colloquialism. x64 rolls off the tongue much easier than x86-64.
Re: (Score:3, Funny)
x86 is more than x64, so it's better right?
Re: (Score:2)
No, the number represents how far it is from perfection. Perfection is when there is nothing left to take away. The ultimate CPU will be x0. I'm going to trademark it ahead of time.
Re: (Score:2)
These go to eleven.
Re:x86? (Score:4, Insightful)
Tell me, if they announced an intention to do a SPARC core, would you assume they meant a 32-bit version? How about POWER?
x86 is just as 64-bit as they are.
Re: (Score:2)
Well if you're talking about SPARC it would be the x64%4 architecture.
Re: (Score:2)
The Nvidia GPUs are large MIMD vector machines - look at the specs, and what they're doing with CUDA. That they're mostly actually used to draw texels of monsters and walls and bullets in flight doesn't mean that they're not a highly capable general purpose vector processor...
Many people are (almost certainly correctly) stating that Nvidia wouldn't do that if they didn't think they had to, or didn't think that this would make them more money / market penetration than not doing it. Suggesting that sticking…
Re: (Score:2)
Sure, Nvidia would love to see the world move away from general purpose processors and focus more on vector processors like their GPUs, but they know that, for the short term, general purpose processors are a market reality (as a side note, Nvidia executives are also known for making absurdly overstated/arrogant statements as well). This decision has nothing to do with parallel processing. In fact, it has nothing to do with advancing the state of the art for anything other than power consumption/cost per…