Demystifying UEFI, the Overdue BIOS Replacement 379
An anonymous reader writes "After more than 30 years of unerring and yet surprising supremacy, BIOS is taking its final bows. Taking its place is UEFI, a specification that began its life as the Intel Boot Initiative way back in 1998, when BIOS's antiquated limitations were hampering systems built with Intel's Itanium processors. UEFI, as the article explains, is a complete re-imagining of a computer boot environment, and as such it has almost no similarities to the PC BIOS that it replaces."
Re:Slashdot (Score:5, Informative)
Re:Slashdot (Score:4, Informative)
It seems that EFI may not be the brilliant thing that it is supposed to be. Somebody doing a lot of work involving it blogs here - http://mjg59.dreamwidth.org/ [dreamwidth.org] - and there are lots of depressing things to read there. To quote from the page:
> It's an awful thing and I've lost far too much of my life to it. It complicates the process of booting for no real benefit to the OS. The only real advantage we've seen so far is that we can configure boot devices in a vaguely vendor-neutral manner without having to care about BIOS drive numbers. Woo.
Re:I don't know... (Score:5, Informative)
BIOS has a LOT of limitations: >2TB hard drives, network boot, disk controllers, GPUs, IPMI... everything has to work around the BIOS in some way, which makes it mightily slow. My iMac boots with Lion in 7 seconds. My Linux machine takes 15 seconds just getting to Grub, and my servers take up to 45 seconds to get to the boot loader.
BIOS is ALWAYS hooked into 8086 mode (real mode), so at boot time you are limited by its calls (such as INT 13h for disks), and that's hard and expensive to keep supporting on anything that isn't an 8086 (which is to say, every modern Intel/AMD processor).
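To put numbers on the >2TB limit mentioned above: MBR partition tables (and the classic INT 13h extensions) address sectors with 32-bit LBA fields, so with traditional 512-byte sectors the ceiling works out to exactly 2 TiB. A quick back-of-the-envelope check (Python, illustrative only):

```python
# MBR partition entries store a 32-bit starting LBA and a 32-bit
# sector count, so with traditional 512-byte sectors the largest
# addressable disk is:
SECTOR_SIZE = 512          # bytes, the traditional sector size
MAX_SECTORS = 2**32        # 32-bit LBA field in the MBR

max_bytes = SECTOR_SIZE * MAX_SECTORS
print(max_bytes)           # 2199023255552 bytes
print(max_bytes / 2**40)   # 2.0 TiB

# GPT (the partition scheme UEFI uses) stores 64-bit LBAs instead,
# pushing the limit to 2^64 sectors -- effectively unbounded for
# any disk you can buy today.
```

This is why >2TB disks need GPT (or 4K-sector tricks) rather than a plain MBR.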
UEFI is good. (Score:4, Informative)
Secure boot is bad. What is mysterious about that? If you want to understand more about the implications for booting Linux, read these: UEFI secure booting [dreamwidth.org] and x86 EFI boot stub [lwn.net]
Matthew Garrett explains secure boot implications (Score:4, Informative)
Re:Slashdot (Score:5, Informative)
OSX uses GPT partition maps on x86 machines; it only used its own Apple partition map on PPC systems. Current OSX running on x86 Macs can still read disks that use the PPC partition map (as can Linux), but can't boot from them.
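For the curious, a GPT disk is easy to recognize: the block at LBA 1 carries a header that begins with the ASCII signature "EFI PART", followed by the revision, header size, and 64-bit LBA fields. A minimal sketch of checking that signature (Python, fed a synthetic in-memory header rather than a real disk; the field offsets follow the UEFI spec's GPT header layout):

```python
import struct

def parse_gpt_header(block):
    """Parse the leading fields of a GPT header (normally read from LBA 1).
    Layout: 8-byte signature, 4-byte revision, 4-byte header size,
    4-byte CRC32, 4 reserved bytes, then 64-bit current/backup LBAs."""
    sig, revision, header_size = struct.unpack_from("<8sII", block, 0)
    if sig != b"EFI PART":
        raise ValueError("not a GPT header")
    current_lba, backup_lba = struct.unpack_from("<QQ", block, 24)
    return {"revision": hex(revision), "header_size": header_size,
            "current_lba": current_lba, "backup_lba": backup_lba}

# Synthetic header for illustration: signature, revision 1.0,
# 92-byte header, zeroed CRC/reserved, current LBA 1, and a
# made-up backup LBA near the end of a ~2TB disk.
fake = struct.pack("<8sII", b"EFI PART", 0x00010000, 92)
fake += b"\x00" * 8                        # header CRC32 + reserved
fake += struct.pack("<QQ", 1, 3907029167)  # current / backup LBA
print(parse_gpt_header(fake))
```

An Apple Partition Map disk, by contrast, has no such signature at LBA 1 (its map entries start at block 1 with the magic bytes "PM"), which is how tools tell the two formats apart.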
Linux has supported EFI for a long time, and Intel has been pushing EFI for a long time. We would have had EFI many years ago, but MS never bothered to support it until very recently.
UEFI In The Data Center Not Ready For Prime Time (Score:3, Informative)
I've been dealing with UEFI-based servers for the past couple of years - IBM System x specifically - and while I see the potential for UEFI, it's still got a lot of teething pains in the Enterprise space as far as I am concerned. IBM was the first to basically put their entire x86 product line on UEFI-only hardware.
However, I have actually encountered machine configurations that BIOS was unable to deal with (add-in PCIe cards utilizing all of the ROM memory space and bringing the machine to a halt, amount of RAM beyond what BIOS can handle natively, etc...) so I can see the requirement for a BIOS replacement.
In its current incarnation in the servers I deal with, the architecture essentially boots two full-blown microprocessors running code *BEFORE* the machine will even attempt to POST. The service processors in the current IBM machines (IMM - Integrated Management Module) are the first thing to fire up when power is applied to the server. Since IMMs are small microprocessors in their own right (I can't remember the make, but I recall hearing 100MHz speeds) loading what I believe is a micro-Linux kernel, it takes time for them to come up. This process can take two minutes (sometimes more) before the power button stops blinking rapidly and settles into a normal "power off" blink.
At that point you can turn the server on, which is when it fires up the UEFI microprocessor and begins loading all of that code into the system. During this phase UEFI "talks" to all of the internal hardware, loads profiles for devices, and so on. That can take up to another four minutes (it has gotten faster over the last two years), at which point the actual POST screen displays and you can either enter SETUP or allow the server to boot. Note that add-in cards still have to load their own ROMs as normal (if in Legacy Mode, which most of our servers are due to OS limitations), and the more cards you put in a machine and the more boot options you leave enabled, the longer this pre-POST initialization takes. I've seen reboot cycle times of over ten minutes in some instances, whereas BIOS-based systems would complete the same cycle in under two minutes.
So here's a brief summary of the current state-of-the-art in server UEFI:
PROS:
* Allows configuration of peripheral devices from the SETUP screen.
* Allows up to 1TB (much smaller in practice) of Option ROM space for add-in cards.
* Allows for huge amounts of memory, and very large disk sizes.
* In theory, allows for additional software to execute before the primary OS kicks in. Not really utilized in these machines.
CONS:
* Horribly slow boot cycles. Length of the boot cycle depends on the amount of hardware in the server. An IBM ATS engineer told me they had a machine in the lab with so much plugged into it that it took 23 hours to POST.
* Corrupt firmware or failed firmware updates are the kiss of death for many of these machines. While there are backup firmware spaces and the appropriate jumpers to recover, this does not always work as intended. We've had quite a few brand-new systems that had to have complete system planar replacements because the code wasn't executing right.
* As these are actual mini-OSes running, there are all kinds of strange quirks and odd behavior from the servers. Lots of troubleshooting involves resetting the service processors and praying they reboot properly just to get the server to POST normally.
* Speaking of quirks, there are lots of situations where reported hardware failures are false positives, or real faults go unreported entirely. Troubleshooting on these machines becomes guesswork based on intuition rather than having a solid grip on what component is doing what.
* Example: As the UEFI handles all of the components on the server, we have run into issues where bad code for the UEFI causes the Operating Systems to malfunction in strange ways, only to find the OS was reacting to thousands of repeated error messages being
Re:Slashdot (Score:3, Informative)
Re:There are limits to how fast an HDD spins up (Score:4, Informative)
- Memory test. Well, you could avoid it, but you shouldn't.
- Hard drive spin-up. You can detect the drives before this, but you can't read the partition table.
- USB device detection. You need this for keyboards and bootable USB devices. And with the increasing use of tablet form factors, possibly in future for touchscreens.
- Storage peripherals. A lot of storage controllers, especially those of a RAIDy or networky nature (hardware-supported iSCSI, fibre channel), will need their own time to ready themselves and check connectivity and device integrity.
Add all those together, and you're up to about what it takes for the BIOS today to run POST and hand over to the bootloader.