Notebook Makers Moving to 4 GB Memory As Standard
akintayo writes "Digitimes reports that first-tier notebook manufacturers are increasing the standard installed memory from the current 1 GB to 4 GB. They claim the move is an attempt to shore up the prices of DRAM chips, which are currently depressed because of a glut in the market. The glut is supposedly due to increased manufacturing capacity and the slow adoption of Microsoft's Vista operating system. The proposed move is especially interesting given that 32-bit Vista and XP cannot access a full 4 GB of memory; they have a practical limit of 3.1 to 3.3 GB. With Vista SP1, it seems that Microsoft has decided to fix the problem by reporting the installed memory rather than the available memory."
Article doesn't say what summary says (Score:5, Informative)
The article says: "While first-tier notebook vendors such as Dell, Hewlett-Packard (HP) and Toshiba are planning to roll out 4GB notebooks starting from the first quarter of 2008, the move is expected to give a boost to the DRAM market, according to memory module makers."
The article does not say that this is a deliberate attempt to drive up DRAM prices. And if it were, wouldn't that be illegal?
Re:Fix the problem by misleading the customer? (Score:5, Informative)
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional"
It will not solve your PR problem, nor will it solve the incorrect reporting of available RAM, but it will allow 32-bit Windows XP Professional to use all of it. In my experience, most programs and games can't use all 4 GB of RAM on their own, but if the user is running more than one RAM-hungry application (multitasking), 4 GB becomes useful.
Also, we have to think about future Vista service packs, so 4 GB is a must-have.
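For context, the boot.ini entry quoted above would normally sit in a file like this, with the /PAE switch appended (that switch is Microsoft's documented way to enable PAE on XP; the timeout and path here are the stock defaults and would need adjusting to the actual install):

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /PAE
```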
Re:Can someone explain... (Score:2, Informative)
Re:Fix the problem by misleading the customer? (Score:4, Informative)
Can we get some parental supervision on this site? (Score:4, Informative)
Also, what's with slamming Microsoft over the "slow" transition to 64-bit? 64-bit XP has been out for, like, three years now. It runs 32-bit applications, because the x64 architecture makes it so ridiculously easy you'd have to intentionally break it. 64-bit Linux does the same, because it takes, like, a line of code to do so. If software makers aren't producing 64-bit apps, it's probably because their customers haven't demanded them yet; and the customers probably haven't demanded them because it's unusual for a single application to need 4 GB of RAM. Finally, those applications that frequently can use gigondo amounts of RAM in a single virtual address space (e.g., Oracle) for the most part had 64-bit binaries available right out of the gate.
Re:That's great (Score:5, Informative)
He may not have said it, but he believed it;
Bill Gates Challenges and Strategy Memo (16 May 1991)
Re:Fix the problem by misleading the customer? (Score:2, Informative)
Re:Fix the problem by misleading the customer? (Score:5, Informative)
That's not such a good idea.
The reason PAE mode isn't enabled by default is that it can break drivers that assume 32-bit DMA addresses. Enabling it may make your Windows system even more unstable.
Re:That's great (Score:5, Informative)
Running a 64-bit OS, you can access the board's maximum memory (no board comes anywhere near maxing out the 40- or 48-bit physical address space of existing EM64T/AMD64 CPUs).
Running a 32-bit non-Windows OS with PAE enabled, you can access up to 64 GiB (2^36 bytes) of physical RAM.
Running a 32-bit Windows server OS with PAE enabled, you can likewise access up to 64 GiB of RAM, depending on the edition.
However, even with PAE enabled, Windows XP and Vista 32-bit won't let you access anything past 4 GiB, because of some legacy hardware that could barf if it were handed an address higher than 4 GiB.
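As a sanity check on those figures, plain arithmetic (nothing Windows-specific here):

```python
GiB = 2**30  # one gibibyte

# Plain 32-bit physical addressing: 2^32 bytes of address space
print(2**32 // GiB)  # 4 GiB

# PAE widens physical addresses to 36 bits
print(2**36 // GiB)  # 64 GiB
```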
Re:Can we get some parental supervision on this si (Score:5, Informative)
Re:Can someone explain... (Score:5, Informative)
5 years ago, nobody would have thought that we'd run into this problem at all. Remember those times? Everybody and their mum was just about ready to jump onto the 64-bit bandwagon, with AMD charging in front. And then, while nobody (especially not AMD) was paying attention, we kinda veered off course into a multi-core world instead, and all of a sudden people stopped caring about 64-bit. After all, you got a larger net performance gain from upgrading to two 32-bit cores than to one 64-bit core. And now we're finally running out of address space.
Re:Can we get some parental supervision on this si (Score:5, Informative)
The trouble is that with contemporary chipsets in 32-bit mode, the upper 1 GB or so of physical memory overlaps with the address space for the PCI bus.
Re:Can someone explain... (Score:3, Informative)
Wait a moment and think it out.
Estimate that components such as your chipset, motherboard I/O devices, network card, and CD/DVD drive will take up about 1/2 GB of the theoretical 4 GB. These MUST have addresses or they cannot function.
Now add in all of your vRAM (the RAM on your video card); it also needs a set of addresses. We'll estimate 256 MB of vRAM.
So now you've taken your theoretical 4 GB of address space, subtracted 512 MB for essential system components, and subtracted 256 MB for the vRAM on your video card. In total, you've just taken away 768 MB of your theoretical RAM limit: 4 GB (theoretical limit) - 768 MB (addresses used by components and video memory) = 3.25 GB of RAM. Systems with 512 MB of vRAM have a 3 GB limit for memory.
Now consider the slap in the face SLI'd 8800 GTXs are to system addressing. They take up 768 MB of vRAM each, so that's 1536 MB of vRAM total. Now you're probably down to something like 2 GB of RAM addresses available for the system.
Heh. So the point is, the world NEEDS to get its butt over to 64-bit sometime soon. Gamers are going to start feeling the burn when they suddenly have no more RAM to play with while SLIing.
This applies to both Windows and Linux; 64-bit doesn't have this limitation. The only ones implying it is a Windows problem are those like Twitter the Troll and Communist Zonk.
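The arithmetic above, as a throwaway sketch (the reservation sizes are the rough estimates from this comment, not measurements from any real chipset):

```python
# 32-bit address-space math: RAM, MMIO regions, and video memory
# all compete for the same 4 GB of addresses.
ADDRESS_SPACE_MB = 4 * 1024  # 4 GB of 32-bit addresses, in MB

def usable_ram_mb(mmio_reserved_mb, vram_mb):
    """RAM left addressable after MMIO and video memory claim addresses."""
    return ADDRESS_SPACE_MB - mmio_reserved_mb - vram_mb

# ~512 MB for chipset/NIC/optical-drive MMIO, a 256 MB video card:
print(usable_ram_mb(512, 256) / 1024)     # 3.25 (GB)

# Two SLI'd 8800 GTXs at 768 MB of vRAM each:
print(usable_ram_mb(512, 2 * 768) / 1024)  # 2.0 (GB)
```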
Re:Fix the problem by misleading the customer? (Score:3, Informative)
Re:Fix the problem by misleading the customer? (Score:5, Informative)
But 64-bit Vista requires _signed_ _kernel-mode_ _drivers_. It won't load unsigned drivers, and there's NO user override for this 'feature'. Let me repeat: Microsoft does not allow you to run some types of code on your computer.
You can turn on test-signing mode, which allows drivers signed with a self-signed certificate, but it is hard for a common user to do, causes DRMed content to stop playing, and displays a 'test mode' watermark on the desktop.
Re:Can someone explain... (Score:3, Informative)
Yes it is.
The Linux kernel devs solved this back in 2004 [kerneltrap.org].
Re:Can someone explain... (Score:3, Informative)
It appears you failed to notice that the architectures of AMD's and Intel's multi-core processors are both x86-64. That means that we are upgrading to two 64-bit cores.
Re:Article doesn't say what summary says (Score:4, Informative)
Re:Fix the problem by misleading the customer? (Score:3, Informative)
I'll quote:
Re:Oh just jump to 64bit already MS (Score:3, Informative)
64-bit Windows (both XP and Vista) does exist, and can in fact run both 32-bit and 64-bit programs; 32-bit software runs just as fast on it as it would on a 32-bit version of Windows.
The problem with 64-bit Windows is twofold. First of all, in general you need 64-bit drivers. That is not an issue for the notebook manufacturers themselves, but if a customer installs software or external devices that require drivers or other kernel-mode extensions, they may find those won't run under 64-bit Windows. Naturally the notebook makers would be reluctant to annoy their customers, so for the time being they'll probably leave it to their customers to decide whether they want to upgrade to 64 bits.
Secondly, 64-bit Windows will not run 16-bit software at all. That includes both Windows 3.1-era software and DOS-mode software. It's true that virtually nobody writes to those standards any more; however, there is still a surprising amount of legacy software around that was written to them and is still in use. For example, installer programs (especially for older software packages) are often partly written in 16-bit mode, as is the odd batch file that calls up an old 16-bit utility program to do some bit of cruft. There are even a few older programs that might be run directly by the user in 16-bit mode, mostly for specialized tasks. For a lot of home users this may not be much of an issue (as long as they can run IE, Word, and the latest games, many home users will be perfectly happy), but for many businesses this can be a big problem, especially since many medium to large businesses may not even have a complete inventory of which of the programs they use were originally written in 16-bit mode. It never used to matter unless the software was locally written and needed updating, and if it was originally part of a third-party package they'd have no reason even to be aware that it was written in 16-bit mode.
Fortunately, for some time Microsoft has offered a time-limited trial download of 64-bit Windows that lets you try it out and see how much of an impact these problems have in your particular case. Obviously, if you're a home user or a small business you probably don't want to upgrade your primary system with this, or you may find that lots of things unexpectedly stop working; install it in a separate partition or on a test machine instead. Hardware has gotten cheap enough that this is reasonable for almost everyone whose computing needs are such that they're even thinking about upgrading to 64 bits.
Re:That's great (Score:4, Informative)
The very existence of OSx86 [wikipedia.org] shows that it's not a technical limitation that prevents OS X working on any machine you like.
Re:How can windows suck so much... (Score:4, Informative)
Actually, before you rant about somebody failing at OS knowledge, you should perhaps check your own facts.
He absolutely can use 32-bit drivers in MacOS X 10.5 (Leopard) because Leopard hasn't actually gone *completely* 64-bit.... the kernel is still 32-bit to maintain compatibility with 32-bit drivers. In every other meaningful way though Leopard does count as a 64-bit OS, so you really can have 32-bit drivers on a 64-bit OS.
Ubuntu (Score:4, Informative)
Re:That's great (Score:4, Informative)
For Windows Server, IIRC one of the requirements for MS to sign drivers is PAE compatibility.
Re:Fix the problem by misleading the customer? (Score:2, Informative)
For what it's worth, both 64-bit Linux and 64-bit Vista (and 64-bit XP, for that matter) will run 32-bit software. You don't need to have all your software upgraded to 64-bit mode to benefit: if just a couple of mission-critical applications are upgraded, you may find the advantage compelling, provided the rest of them continue to work.
The two issues are (1) In general you need new 64-bit drivers, both for Windows and for Linux; and (2) 16-bit mode software will not work in 64-bit Windows. The latter is more of an issue than you might think (consider installer programs that might not get upgraded when the rest of the product is upgraded, or specialized utilities in batch files, or even just the odd special-purpose utility program). If neither of these apply in your case, you can upgrade now without having to wait for all (or even any) of your apps to get converted to 64-bits.
Re:That's great (Score:4, Informative)
And that's all assuming the computer isn't full of crapware and that they don't play any real games.
I've always told people that the quickest and easiest way to see a real speed increase in your computer is to upgrade the RAM, and that's still true today. Anything you add up to around the 3 GB limit where XP falls over is almost guaranteed to improve performance. There is always something being paged out to disk that would probably be happier sitting in RAM. There is always something that could be pre-fetched or cached.
4 gig barrier explanation (Score:2, Informative)
Also note that there's another config option that allows one to change that 3G/1G split for NOHIGHMEM mode, if desired. It's normally hidden, but available if one activates EXPERIMENTAL and I believe EMBEDDED.
That 3-level paging above 4 gig is a bit of a performance hit, as the kernel shifts its 4 gig window around in that 64 gig frame, though if one runs the sort of apps that actually use that much memory, it's less of a performance hit than going to swap would be. Still, going 64-bit Linux isn't such a big deal any more, if your CPU supports it, and it's MUCH more efficient, since multiple terabytes of memory can be directly accessed.
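For reference, the kernel options being described here are the x86 "High Memory Support" choice; a sketch of the relevant 2.6-era Kconfig symbols (the symbol names are the real ones, the annotations are mine):

```
# Exactly one of these is chosen for a 32-bit x86 kernel:
#   CONFIG_NOHIGHMEM  - no highmem; RAM up to ~896 MB directly mapped
#   CONFIG_HIGHMEM4G  - highmem support for up to 4 GB of RAM
#   CONFIG_HIGHMEM64G - PAE: 3-level page tables, RAM up to 64 GB
CONFIG_HIGHMEM64G=y
```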
There's another factor at play here as well, and this applies to ALL OSes on 32-bit x86, and most or all on 64-bit x86 as well. It's a PCI hardware issue more than a software issue. Many old PCI devices were designed for 32-bit-only operation, and their hardware can't address memory above the 32-bit 4 gig barrier. When memory was running less than a gig this didn't matter much, and it became customary to reserve the virtual space at the top of the 32-bit address pool, 3.5-4 GB, for PCI device I/O access. As real memory expands into that area, it collides with the reservation, and the real memory behind it can't ordinarily be directly accessed.
Folks who've been around for awhile will likely recall a similar issue back at the 1 MB barrier, and how it was resolved using a "memory hole". The same technique is used here. With a BIOS setup to do so, one can configure a "memory hole" at the 3.5-4GB location, and the BIOS will remap the affected memory up above the 4 GB barrier.
This explains the complaints about Apple and MS platforms also having 4 GB look like 3 or 3.5 GB. I'm not sure if their 32-bit kernels can cope with that remapping or not -- they won't be able to if they can't address more than the 4 gigs anyway, but even if they can, the BIOS must be configured to map the hole as well.
Meanwhile, while addressing memory above the 4 gig line shouldn't be a problem for 64-bit kernels, the BIOS must still be able to do the remapping as well -- and the kernel must understand and deal with the hole. 64-bit Linux has suitable config options to do so, but I've not the foggiest how binary platform systems shipping a single binary kernel for all users deals with this. Primarily binary Linux distributions generally ship a number of different kernels, including an enterprise
Re:How can windows suck so much... (Score:4, Informative)
Secondly, Leopard's use of a 32-bit kernel on Intel Macs is a bugbear for me... There was only a very short-lived series of 32-bit Intel Macs, which lasted what, less than a year? So now they are tied to compatibility with such a short-lived machine, and face a future transition to 64-bit. They should have used the architecture switch as an opportunity to go pure 64-bit at the same time. Compatibility wouldn't have been any more of a problem than it already was, and it would have set them up for a less bumpy future.
Re:That's great (Score:3, Informative)
Re:Oh just jump to 64bit already MS (Score:2, Informative)
And yeah, DOSbox works fine on windows 64.
Re:That's great (Score:3, Informative)
Re:That's great (Score:2, Informative)
Actually, you have it backwards. MB, GB and so on are normal SI prefixes, and are units in base 10. KiB, MiB, GiB and so on, however, are in base 2. See for yourself. [nist.gov]
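In numbers, a quick illustration of that point (the linked NIST page is the authoritative source):

```python
GB = 10**9   # gigabyte: SI prefix, base 10
GiB = 2**30  # gibibyte: IEC binary prefix, base 2

print(4 * GB)        # 4000000000 bytes
print(4 * GiB)       # 4294967296 bytes
print(4 * GB / GiB)  # ~3.73, why a "4 GB" drive looks smaller in base 2
```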
Re:That's great (Score:3, Informative)
Re:Ubuntu (Score:5, Informative)
So why didn't you install 64-bit Ubuntu? Flash works'n'everything in 7.10 64-bit. VMware? They have 64-bit builds. Everything else I run is FOSS. There is no reason not to install it, AFAICT!
Re:Oh just jump to 64bit already MS (Score:4, Informative)
So, to actually make use of a full 64-bit address space, assuming you'd want to walk through all of memory in less than an hour or so (because if you don't, why use RAM?), you would need an SMP-type architecture with 512K cores working concurrently on this memory. Given that at 10 GHz light can only travel an inch or so per clock cycle, the memory banks would have to be very close to the CPUs.
But then 2^67 transistors (one per bit of those 2^64 bytes), at say a 50 nm distance between transistors (we're now at 45 nm), laid out on a single wafer (2D, because the heat needs to dissipate), would have a surface area of around 94 acres. So there goes the 10 GHz access speed: far-away bytes cannot be reached fast enough, needing even more cores to read the damn thing, and more space for those cores.
The difference between past predictions and the current situation is that we're reaching physical limits, and these are unforgiving. Yes, we might find a need for larger address spaces, but it's not going to be RAM, and it's not going to be serial CPUs accessing them.
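The core-count estimate above can be reproduced under the comment's own assumptions (one byte touched per core per cycle at 10 GHz, with a one-hour budget; these are the comment's hypothetical numbers, not any real machine):

```python
ADDRESS_SPACE = 2**64  # bytes in a full 64-bit address space
CLOCK_HZ = 10e9        # assumed 10 GHz clock
BUDGET_S = 3600        # touch every byte within one hour

# bytes one core can touch in an hour, at one byte per cycle
per_core = CLOCK_HZ * BUDGET_S

cores = ADDRESS_SPACE / per_core
print(round(cores))    # 512410, i.e. the "512K cores" figure
```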
Sometimes you just *can't* use 4GB: (Score:2, Informative)
http://blogs.msdn.com/oldnewthing/archive/2006/08/14/699521.aspx [msdn.com]