PC Makers Try To Pinch Seconds From Their Boot Times 399

Posted by timothy
from the an-operating-system-called-linux dept.
Some computers are never turned off, or at least rarely see any state less active than "standby," but others (for power savings or other reasons) need rebooting — daily, or even more often. The New York Times is running a short article which says that it's not just a few makers like Asus who are trying to take away some of the pain of waiting for computers, especially laptops, to boot up. While it's always been a minor annoyance to wait while a computer slowly grinds itself to readiness, "the agitation seems more intense than in the pre-Internet days," and manufacturers are actively trying to cut that wait down to a more bearable length. How bearable? A "very good system is one that boots in under 15 seconds," according to a Microsoft blog cited, and an HP source names an 18-month goal of 20-30 seconds.
This discussion has been archived. No new comments can be posted.
  • by Anonymous Coward on Sunday October 26, 2008 @02:30AM (#25515529)

    On a K6-II 350, BeOS would go from POST to booted and ready to rock in under 5 seconds. Faster boot times are possible but doing so may require some big changes to how everything works.

  • by theblondebrunette (1315661) on Sunday October 26, 2008 @02:33AM (#25515543)

    Standby on a desktop doesn't waste that much electricity (10-15 W) compared to power-off mode (5 W). With the newer power supplies of the past 10 or so years, a powered-off computer still consumes power, as it needs to keep the power on/off button hot (12 V or 5 V, not sure). With the older power supplies, the power button was a true 110/220V switch. To achieve that now, you have to use the switch on the back where the power supply is..
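    To put the quoted standby-vs-off draw in perspective, a quick back-of-the-envelope sketch (the draw figures come from the comment above; the $0.10/kWh electricity price is an assumed example, not from the thread):

    ```python
    # Yearly energy use at a constant power draw, using the comment's figures.
    HOURS_PER_YEAR = 24 * 365  # 8760

    def yearly_kwh(watts):
        """Energy used in a year at a constant draw of `watts` watts."""
        return watts * HOURS_PER_YEAR / 1000

    def yearly_cost(watts, usd_per_kwh=0.10):  # price is an assumed example
        return yearly_kwh(watts) * usd_per_kwh

    standby = yearly_kwh(12.5)  # midpoint of the quoted 10-15 W
    soft_off = yearly_kwh(5)    # the quoted "powered off" draw
    ```

    So the difference between standby and soft-off works out to tens of kWh per year, i.e. a few dollars, which is why many people just use standby.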

  • 3 stages to tackle.. (Score:5, Informative)

    by cheros (223479) on Sunday October 26, 2008 @02:48AM (#25515601)

    Yup, it has always irritated me that the faster my system gets the more I need to wait for it..

    There are IMHO 3 levels to this:

    1) BIOS boot. Why the hell do I need to wait for this? I don't need the advertising, thanks, and a state check is BS if it worked before - flag and repeat. The maximum allowed delay should be to show a 2 sec message "Press F1 to enter BIOS or re-scan" - and even that one should be able to switch off. I recall reading something about an Open Source BIOS having to be slowed down because it was ready before the disks had spun up - yes please!

    2) OS boot. The actual core OS is again something that, once stable, changes very little. Or so goes the theory; with the incredible amount of patching going on in Windows there is indeed a need for a re-scan. But that again is something you do once, then skip the proooooobing for something that *may* be there but doesn't respond in the half-century timeout it has been given. I can recall something called TurboDOS for the Apple ][ that was a good 3x faster, mainly because someone had brought the timeouts back to something sane.. What I find particularly offensive is the Microsoft marketing department forcing a visible desktop that makes it appear the machine is ready, when any enterprise build will take longer than it takes to get a coffee before it finally really is, even after defragging the disk. That's at least something I find less of an issue with Linux. However, these days there is an awful lot of crap that has to be loaded for no apparent reason - maybe time to lift the covers and go back to basics?

    On the Linux front, an aside: once upon a time Linux booted in seconds, even when the then-current Worries for Workgroups was already starting to get obese. This speed advantage no longer exists, other than that a ready desktop really IS ready :-(

    3) App-level boot. Once the OS is live, all these other gadgets come alive. There is a whole raft of things that sit and watch for events these days, and most of them do so surreptitiously. Picasa shows a logo and tells you it's watching for events, but the iTunes crap hides, ditto for the Apple updater. Once upon a time you could look in Windows' "Startup" folder and see what actually loaded, but that was obviously too visible and useful and could - oh shudder - allow the customer to kill off the things they didn't want. These days only Logitech and OpenOffice do it as intended; the rest all sit under the radar - motives?

    ANY program setting up some form of monitoring should be visible, and offer the advanced user a way to kill it off. I want iTunes only to play music, and I will start it up myself when I need it to sync - that is a choice I should be able to make. Sure, make it idiot proof, but for God's sake leave an option for the non-idiots to control it (and bloody stop trying to shove Safari down my throat with every down, sorry, 'up'grade). And I don't recall ever giving permission for the Apple Update program, so where did that come from? I think that is in principle a breach of computing laws, to install software without authorisation..

    There are so many apps that start up a background process for updates that it's a miracle there's bandwidth left for getting any work done, and starting an app starts off some more. Apple iTunes, Firefox - and each extension thereof - Thunderbird - ditto - the moment you start them the hunt for updates begins. "Stable" has been replaced by "perpetual beta" - and we know who started that (yes Redmond, it's you). I can recall when an OS patch especially was A Big Deal. The fact that someone now does this monthly should not blind you to the fact that it was once an exceptional event rather than the rule.

    And then there is the way network events are treated: synchronous. Start Outlook and watch the system die while it waits for some sign of life from the server (and then continues this throughout the day). Watch a DNS lookup freeze a system because the netwo

  • Re:Startup Programs (Score:2, Informative)

    by DemonThing (745994) <demonthing&gmail,com> on Sunday October 26, 2008 @03:21AM (#25515711)
    Spybot-S&D [safer-networking.org] does come with a program called TeaTimer [safer-networking.org] (yes, another startup program, but it's small) that monitors registry changes including startup entries, popping up a dialog asking whether to allow the change, so if a program decides it wants to run at startup, you can block that right there.
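    The idea behind a tool like TeaTimer can be sketched in a few lines: snapshot the run-at-startup entries, diff against a later snapshot, and ask the user before accepting any new or changed entry. This is an illustrative sketch only - it is not TeaTimer's actual code, the entries here are plain dictionaries rather than real registry keys, and all names are invented:

    ```python
    # Sketch of a startup-entry watchdog: veto new or changed autorun entries.
    def watch_startup(before, after, ask):
        """Compare two snapshots of run-at-startup entries ({name: command}).
        For every new or changed entry, call ask(name, command) and keep it
        only if the user approves; entries the user rejects are dropped."""
        approved = dict(before)
        for name, cmd in after.items():
            if before.get(name) != cmd:      # new entry, or command changed
                if ask(name, cmd):
                    approved[name] = cmd
        # Entries that disappeared between snapshots are simply dropped.
        for name in list(approved):
            if name not in after:
                del approved[name]
        return approved
    ```

    A real implementation would watch the registry's Run keys for change events instead of polling snapshots, but the allow/deny logic is the same.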
  • by Anonymous Coward on Sunday October 26, 2008 @03:24AM (#25515729)

    The memory check is still useful, and should be available as an option in the BIOS. The rest of the POST should be scrapped. No OS uses any of the crap a 25-year-old piece of software tries (and fails) to deliver. Yes, the hard disk has more of everything than it can detect. Yes, there are more ports than it can count. Every modern OS overwrites the 64k of low memory with itself (but x86 processors must first start in 8086 real mode, load themselves (long jump) into higher but not-yet-addressable memory, then switch to protected mode and then recoup the lower 640k). I know it all sounds old-fashioned, but the latest version of Linux has to do it on Intel processors, and since Microsoft XP/Vista run on the same architecture, they have to as well (or lose all that memory). It's not the OS, it's the architecture, and the BIOS. Replacing the old BIOS with something new would help boot times a lot!

  • by RAMMS+EIN (578166) on Sunday October 26, 2008 @04:28AM (#25515941) Homepage Journal

    ``How is it that the more power we get, the -longer- this takes? And why is it that the solution always involves hardware makers? Maybe we need to look at how our operating systems are constructed instead of blaming the hardware itself.''

    The time it takes to start up a computer is mostly determined by the firmware. Once the software has control of the system, you can boot an OS very quickly (Linux in a few seconds). But before the software gets to run, the firmware has control of the system. I've seen computers where the firmware would perform initialization and self tests for several minutes. If you were to replace the firmware with something else (e.g. coreboot [coreboot.org]), you could go from power up to ready to use in a few seconds. But it's usually the hardware makers who decide what firmware to ship. And that's why it's up to them to improve things.

  • by riscthis (597073) on Sunday October 26, 2008 @04:47AM (#25516027)

    Close it when I'm done, it just goes to sleep. Open it when I need a quick weather map, it takes but 2 seconds to connect and fetch the map, then just close it. And it always works just like that.

    Let's see Vista do that! [...]

    Not that I usually go out of my way to defend Vista, but the Dell Vostro 1500 running Vista SP1 that I'm typing this on does exactly what you describe.

    Apart from security updates - which occur usually once a month - it never gets rebooted (and reboots do take longer than I'd prefer, but have never timed it), and I always just use Vista sleep in-between sessions. It's pretty much ready as soon as I finish opening the lid, and I'm happy with that as an instant-on.

  • Re:So... (Score:4, Informative)

    by Peet42 (904274) <Peet42@Netsca[ ]net ['pe.' in gap]> on Sunday October 26, 2008 @05:15AM (#25516159)

    http://www.pcdecrapifier.com/ [pcdecrapifier.com]

  • by Detritus (11846) on Sunday October 26, 2008 @05:26AM (#25516203) Homepage
    The problem is MPEG-2. Even if everything else works instantly, the TV has to wait for a reference frame before it can begin to decode video. With analog, you just wait for the vertical sync pulse (60 per second) and go.
  • by Bert64 (520050) <bert@s[ ]hdot.fi ... m ['las' in gap]> on Sunday October 26, 2008 @05:50AM (#25516283) Homepage

    It's quite easy to recompile linux so it only has support for the hardware you have, it can be made to boot considerably quicker when you do this...
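    One way to do this without hand-editing the config is the kernel's own `localmodconfig` target, which trims the config down to the modules currently loaded (note this target landed in mainline after this 2008 thread; on older trees you would prune `make menuconfig` by hand). A sketch of the workflow:

    ```shell
    # Build a kernel with driver support only for hardware that is actually
    # in use right now - boot the generic kernel first, plug in everything
    # you care about, then:
    cd /usr/src/linux            # path is an example; use your kernel tree
    lsmod > /tmp/my-modules      # snapshot of currently loaded modules
    make LSMOD=/tmp/my-modules localmodconfig
    make -j"$(nproc)" && sudo make modules_install install
    ```

    Anything not loaded when you took the snapshot (e.g. a USB device you only plug in occasionally) will be missing from the resulting kernel, so check the generated .config before committing to it.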

  • by kasperd (592156) on Sunday October 26, 2008 @06:47AM (#25516531) Homepage Journal

    the TV has to wait for a reference frame before it can begin to decode video.

    Even that I can imagine being solved if you have sufficient processing power in the TV. How often is a key frame sent? Every five seconds? If the TV would figure out which 10 channels you are most likely to zap to next and store the last five seconds of compressed video for each of them, switching channels would just boil down to how quickly you could decode those five seconds of video.

    Look at what happened with teletext. With early televisions supporting it, you would have to wait tens of seconds for a page to show up. Today pages show up instantly. After all, it only takes a few MB of memory to store every page as it is sent over the air and keep it just in case you want to see it. A few MB of memory was a lot when teletext was invented; today it is nothing. Buffering MPEG streams requires a few orders of magnitude more memory, but other than that it is pretty much the same.
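    The memory needed for the buffering scheme described above is easy to estimate. The 4 Mbit/s figure below is an assumed typical SD MPEG-2 bitrate, not a number from the comment:

    ```python
    # Back-of-the-envelope: RAM needed to keep a few seconds of compressed
    # video for each channel the viewer is likely to zap to.
    BITS_PER_BYTE = 8

    def buffer_bytes(channels, seconds, mbit_per_s):
        """Bytes needed to buffer `seconds` of video on `channels` channels,
        each streaming at `mbit_per_s` megabits per second."""
        return int(channels * seconds * mbit_per_s * 1_000_000 / BITS_PER_BYTE)

    # 10 likely channels, 5 seconds each, at an assumed 4 Mbit/s:
    needed = buffer_bytes(10, 5, 4.0)   # 25 MB
    ```

    So a mere 25 MB or so of buffer memory would cover the ten-channel, five-second scheme - trivial even by 2008 set-top-box standards, which supports the commenter's point.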

  • by Mikaelk (32020) on Sunday October 26, 2008 @07:42AM (#25516783) Homepage Journal

    Or 4 tuners (enough for dvb-t here in sweden) and keep the last keyframe for all channels.

    Other cool stuff to do with all channels tuned: show a PIP overview with many channels, slide the picture left or right when zapping and show both channels, record everything to a 500 GB hard drive and have a 24h timeshift of all channels.

    If a mythtv wizard reads this, please implement it.
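    It is worth checking whether 500 GB actually covers 24 hours of everything. The ~20 Mbit/s per DVB-T multiplex below is an assumed ballpark figure, not from the comment:

    ```python
    # Storage needed to timeshift every mux for a given number of hours.
    def timeshift_bytes(muxes, mbit_per_mux, hours):
        """Bytes needed to record `muxes` full multiplexes, each at
        `mbit_per_mux` megabits per second, for `hours` hours."""
        bytes_per_second = muxes * mbit_per_mux * 1_000_000 / 8
        return int(bytes_per_second * hours * 3600)

    # 4 muxes at an assumed ~20 Mbit/s each, for 24 hours:
    needed = timeshift_bytes(4, 20, 24)   # 864 GB
    ```

    Under those assumptions a full day of four muxes needs roughly 864 GB, so a 500 GB drive would give closer to half a day of all-channel timeshift - still a very usable buffer.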

  • by Yvan256 (722131) on Sunday October 26, 2008 @08:34AM (#25517041) Homepage Journal

    Win98SE FTW!

  • by shess (31691) on Sunday October 26, 2008 @09:25AM (#25517337) Homepage

    I usually config things in the fastest boot mode, but when I need to make changes (and thus watch boot screens and stuff), I temporarily config to a slower mode. So instead of an uber-menu, you would just have to work your way through each BIOS saying "Set slow boot, reboot" until you got to the one indicated.

    Annoying, but, what, how much time do you spend in the BIOS compared to using the machine?

  • by Silver Gryphon (928672) on Sunday October 26, 2008 @11:30AM (#25518125)

    "The tin-foil hatters will think that M$ is doing this on purpose so people will feel compelled to upgrade more frequently, but I don't really give them that much conniving intelligence."

    Oh, it gets better... check this out.

    MSDN Magazine, October 2008, page 150, "End Bracket"

    Josh Phillips, a Program Manager on the MS Parallel Computing Platform Team, actually is advocating wasting CPU cycles. As in, if you have multiple sources of data, go ahead and fetch two or three and just use the first one that comes back. Pre-apply filters to images even if they're not requested, etc.

    That's great and all, but that kind of predictive computing has to be done cautiously, with a fully loaded system in mind. I can say my app is the only one on the box (e.g. a dedicated SQL Server or MSMQ box) and just expect all four cores to be mine. But when my app is working alongside 150 other little modules and apps, we still only have four cores serving everyone. That's probably why Vista runs so rough on my single-core Athlon XP 2800 but is beautiful on a low-end dual-core system. The OS has built-in expectations of multiple cores dedicated to its own tasks.
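    The "fetch from several sources and use whichever answers first" pattern described above is easy to sketch with a thread pool. This is an illustrative sketch of the general technique, not code from the cited MSDN article, and the two "mirror" functions are invented stand-ins for real data sources:

    ```python
    # Race several redundant fetchers and take the first result, trading
    # wasted CPU/IO for lower latency - the pattern advocated above.
    import concurrent.futures
    import time

    def racing_fetch(sources):
        """Start every fetcher at once and return the first result back."""
        with concurrent.futures.ThreadPoolExecutor(len(sources)) as pool:
            futures = [pool.submit(fn) for fn in sources]
            first_done = next(concurrent.futures.as_completed(futures))
            return first_done.result()

    def slow_mirror():
        time.sleep(0.2)
        return "slow mirror"

    def fast_mirror():
        time.sleep(0.01)
        return "fast mirror"
    ```

    The losing fetchers keep running to completion here, which is exactly the commenter's complaint: on a dedicated box the waste is free, but on a shared box every speculative fetch steals cycles and I/O from someone else.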

    I wish apps would consider their environment like we do traffic while driving -- if we see a 50 car pileup in front of us, do we just plow through them? No, that's called demolition derby. While fun, not very efficient. Newer apps should consider their work in context, and have some way to tell the OS that if the Disk Read Time % for spindle 0 is at 3600%, something might want to scale back its workload. As it is, most apps consider their I/O of equal, "Normal" priority.

    And XP/Vista will present a login screen within 30 seconds because of performance promises (from marketing, probably)... but the OS knows in advance (prefetch logs) it has to read 10GB of files to finish the boot cycle. Anything after that login screen should have a priority flag set to "below normal" because if it isn't important enough to delay the login screen, it can afford to wait an extra two minutes. There is a "delayed startup" mode, but I can't see enough improvement ... the stupid thing just waits until I'm halfway through downloading my email to grab its "equal share".
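    The "below normal after login" idea maps onto mechanisms that already exist: Windows has priority classes, and POSIX systems have nice values. As a minimal POSIX sketch (not how Windows' delayed-startup mode actually works), a background updater could deprioritize itself at launch:

    ```python
    # A post-login background task lowering its own CPU priority, the POSIX
    # analogue of the "below normal" priority class discussed above.
    import os

    def deprioritize(extra_nice=10):
        """Raise this process's nice value (lower its scheduling priority).
        Returns the new nice value; unprivileged processes can only go up."""
        return os.nice(extra_nice)
    ```

    Note this only covers CPU scheduling; the boot-time contention the comment describes is mostly disk I/O, which on Linux needs ionice/ioprio as well.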

  • by afidel (530433) on Monday October 27, 2008 @01:24AM (#25523889)
    Actually they already have; it's called EFI, and like OpenFirmware it allows you to use a single keycode to get into a shell environment where you can program device settings in a standardized way. I'm not sure if the spec calls for the elimination of other keycode entry methods, but I would think that eventually the others would die out, since they are an unnecessary expense to code.
