CoreBoot (LinuxBIOS) Can Boot Windows 7 Beta

billybob2 writes "CoreBoot (formerly LinuxBIOS), the free and open source BIOS replacement, can now boot Windows 7 Beta. Videos and screenshots of this demonstration, which was performed on an ASUS M2V-MX SE motherboard equipped with a 2GHz AMD Sempron CPU, can be viewed on the CoreBoot website. AMD engineers have also been submitting code to allow CoreBoot to run on the company's latest chipsets, such as the RS690 and 780G."
  • by eldavojohn ( 898314 ) * <eldavojohnNO@SPAMgmail.com> on Tuesday January 27, 2009 @10:45AM (#26623053) Journal
    On CoreBoot's benefits page, it lists:

    Written in C, contains virtually no assembly code

    What is the benefit of writing a BIOS in C over assembly code? Is it for transparency? Easier to catch bugs? Does compiling from C to machine assembly protect you from obvious errors in assembly? Is it for reusability of procedures, modules & packages?

    Oftentimes I have wished I knew more assembly so I could rewrite often-used or expensive procedures to fit the target machine and try to optimize them. I don't know assembly well, however, so I don't mess with this. Doesn't handwritten assembly have the potential to be much faster than assembly compiled from C? I thought frequently executed pieces of the Linux kernel were being rewritten in the assembly languages of common architectures for exactly this reason?

    I'm confused why mainboard companies don't write their BIOS in C if this is an obvious benefit--or is it that they do and all we ever get to see of it is the assembly that results from it?

    Can anyone more knowledgeable in this department answer these questions?

    • by sprag ( 38460 ) on Tuesday January 27, 2009 @11:02AM (#26623389)

      Being in C, it is easier to see what the person writing it was doing, compared to assembly.

      Consider if you had to do some nasty computation such as finding what address is used for a given row and column on the screen:
      (in rough 16-bit x86 assembly)

      mov ax, row
      mov bx, col
      shl bx, 1        ; col * 2
      mov cx, 80
      mul cx           ; dx:ax = row * 80
      add ax, bx
      mov pos, ax

      Whereas in C it is:

      pos=(row*80)+(col*2);

      and much more readable.

      • Re: (Score:3, Funny)

        by sprag ( 38460 )

        Those should have been multiplies by 160 instead of 80.

        • by moteyalpha ( 1228680 ) * on Tuesday January 27, 2009 @11:35AM (#26623977) Homepage Journal

          Those should have been multiplies by 160 instead of 80.

          I would have thought that immediately (160), since I did so much assembly with CGA; then again, I may be even older, and some of those displays were 40 characters wide, so 80 would be correct for that. On the issue of coreboot: that is fantastic and I want it for my machine now. I want instant boot to Linux and ext4 for my next upgrade. On the other issue of _asm_ being faster: I bet I could make some of it faster, but gcc is very good these days, and I often objdump my "c" code to look at the assembly; the people who write the compiler are virtually magicians with that code. I have tried competing with the compiler and it is a waste of time for most things, and unless I was doing firmware or a device driver, I wouldn't even consider assembly. As for the code, the one thing I wouldn't do is a "mul", just for the cycle cost; I would combine shifts and adds to get (32x + 128x).
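
          For reference, the corrected computation in C, together with a shift-and-add form like the one described above, might look like this (a minimal sketch; an 80x25 colour text mode with 2 bytes per cell is assumed):

          #include <stdint.h>

          /* 80x25 colour text mode: each cell is 2 bytes (character + attribute). */
          static uint16_t cell_offset(uint16_t row, uint16_t col)
          {
              return (uint16_t)(row * 160 + col * 2);      /* straightforward form */
          }

          /* The same thing without a multiply: 160*row = (row << 7) + (row << 5). */
          static uint16_t cell_offset_shift(uint16_t row, uint16_t col)
          {
              return (uint16_t)((row << 7) + (row << 5) + (col << 1));
          }

          Whether the shift form is actually faster depends on the CPU; on anything recent the compiler makes that call for you.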

    • I thought C compilers had gotten to the point where C was just a convenient syntax for assembly these days?

      I'm only half-kidding here. I'm sure the main reason is for portability across different chipsets, as well as ease of debugging. But, as I said, I think a lot of current C compilers can generate code that's not appreciably larger than hand-written assembly.

      Compiler writers, please educate me otherwise.

      C

    • Doesn't handwritten assembly have the potential to be much faster than assembly compiled from C?

      Short answer: no.

      Long answer: rarely. Optimizing compilers are so good these days that very few humans would be capable of writing better assembler, and I contend that no humans are capable of maintaining and updating such highly-tuned code.

      Embedded assembler makes a lot of sense when you're embedding small snippets inside the inner loops of computationally expensive functions. Outside that one specific case (and disregarding embedded development on tiny systems), there's not much need to mess with it. Note that need is not the same as reason. Learning assembler is good and valuable on its own, even if there are few practical applications for it. If nothing else, it'll cause you to write better C.
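
      To make the "small snippet in an inner loop" case concrete, here is a minimal sketch using GCC's extended inline assembly on x86-64 (the function and the choice of instruction are illustrative assumptions, not anything from coreboot; plain C would compile to the same thing):

      #include <stddef.h>
      #include <stdint.h>

      /* Sum an array, with the inner-loop add done in inline assembly.
       * Shown only to illustrate the mechanics of embedding a snippet;
       * the compiler emits equivalent (or better) code from plain C. */
      static uint64_t sum_u64(const uint64_t *v, size_t n)
      {
          uint64_t acc = 0;
          for (size_t i = 0; i < n; i++) {
              __asm__("addq %1, %0"        /* acc += v[i] */
                      : "+r"(acc)
                      : "rm"(v[i]));
          }
          return acc;
      }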

      • by agbinfo ( 186523 ) on Tuesday January 27, 2009 @12:14PM (#26624687) Journal

        Doesn't handwritten assembly have the potential to be much faster than assembly compiled from C?

        Short answer: no.

        Long answer: ...

        I don't know how good compilers have become, but I had to optimize generated code (for space and speed) a long time ago.

        To do this, I would write the best possible code in C first, then compile it and then optimize the generated assembler code.

        My point is that if you already start with the best code the compiler will provide, you can only improve from there.

        Also, in some situations, looking at the generated assembler code helped identify clues as to how writing the original C code could result in better performance.

        This was a long time ago, for an embedded application with very limited CPU and memory. I haven't had to do that since.
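
        The same workflow is easy to reproduce with current tools; a small sketch (file and function names invented), with the relevant commands in comments:

        /* hot.c -- inspect what the compiler emits:
         *   gcc -O2 -S hot.c            (writes hot.s, the generated assembly)
         *   gcc -O2 -c hot.c && objdump -d hot.o
         * Adding -march=<cpu> tunes the output for the CPU family a board targets. */
        unsigned scale(unsigned x)
        {
            return x * 160;   /* usually becomes lea/shl rather than an actual mul */
        }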

        • by dargaud ( 518470 )

          I had to optimize generated code (for space and speed) a long time ago

          I did too. But how can you write assembler for a 31-stage pipeline by hand? Or for out-of-order instructions? I'm pretty sure that's completely impossible, unless you insert 30 NOPs after each instruction, and by that time it's far from optimized anymore!

          • I did too. But how can you write assembler for a 31-stage pipeline by hand? Or for out-of-order instructions? I'm pretty sure that's completely impossible, unless you insert 30 NOPs after each instruction, and by that time it's far from optimized anymore!

            Out-of-order execution is there to compensate for suboptimal instruction ordering, so I'm not sure why you think it would make it harder to write assembly for an OOO chip. You just worry slightly less about sequencing. As for deep pipelines, y

      • You forgot one thing (Score:2, Informative)

        by Anonymous Coward

        Specialized instructions (MMX, SSE, etc) can provide substantial speed boosts with certain code. Unfortunately no C compiler really takes full advantage of those features (if at all) despite them being widely available nowadays.

        So in those cases it may be a whole lot faster to use assembly. Usually this is just embedded within a C function because of the specialized nature.
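
        Today the usual compromise is intrinsics: you stay in C but name the SIMD instructions explicitly. A minimal SSE sketch (the function is invented; n is assumed to be a multiple of 4 and the pointers 16-byte aligned):

        #include <xmmintrin.h>   /* SSE intrinsics */

        /* Add two float arrays four lanes at a time. */
        static void add_f32(float *dst, const float *a, const float *b, unsigned n)
        {
            for (unsigned i = 0; i < n; i += 4) {
                __m128 va = _mm_load_ps(a + i);      /* aligned 128-bit load */
                __m128 vb = _mm_load_ps(b + i);
                _mm_store_ps(dst + i, _mm_add_ps(va, vb));
            }
        }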

    • by .tom. ( 25103 ) on Tuesday January 27, 2009 @11:06AM (#26623471)

      Easier to maintain, more portable across platforms, easier to do more complex stuff, easier to integrate/reuse existing libraries/code, etc.... ?

    • by Anonymous Coward on Tuesday January 27, 2009 @11:13AM (#26623617)

      Turns out that the number of bugs in a given number of lines of code is fairly constant, regardless of language. Thus, fewer lines of C code = fewer bugs.

      Also, it is extremely rare that the compiler cannot emit better code than what is hand-written - compilers are extremely good at optimizing these days. The more common trend is to provide hints & use intrinsics, so that you keep all the benefits of writing C (type checking, more readable code) while the compiler is better able to generate the assembly you want.

      You will almost never write better assembly than what the compiler outputs - remember, the compiler takes a "whole program" approach in that it makes optimizations across a larger section of code so that everything is fast. It is highly unlikely that you will be able to match this - your micro-optimization is more likely to slow things down.

      There is actually very little in the Linux kernel that is written in assembly (relative to the amount of C code) - and where it is, it's because assembly is the only way of doing it on a given architecture, not because of performance. For performance, the kernel code is overwhelmingly written in C and relies on working with the compiler people to make sure that the code is optimal.

      • I always wondered why people thought they were so cool...but if you can equate bugs to 'lines of code' I have a feeling they are pretty efficient. ;)

        • by Chabo ( 880571 )

          One of my favorite sections from the book "Learning Perl":

          Now, we're not saying that Perl has a lot of bugs. But it's a program, and every program has at least one bug. Programmers also know that every program has at least one line of unnecessary source code. By combining these two rules and using logical induction, it's a simple matter to prove that any program could be reduced to a single line of code with a bug.

          • Re: (Score:2, Informative)

            by Anonymous Coward

            ... every program could then be further reduced to the single empty program, which would still contain at least one bug.

            Which in turn means that all programs are the same, thus - assuming deterministic execution - all exhibit the same behaviour.

            Or in other words: Linux is Windows.

      • Turns out that the amount of bugs in a given amount of lines of code is fairly constant, regardless of language. Thus, it takes fewer lines in C code = fewer bugs.

        Ahh, I envision the "God" language...

        Do_What_I_Want();

      • You have some insight, but not into how compilers work, nor into how good programmers improve code. It has been a very long time since I wrote any C, but back then I was writing much of a (non-optimizing) C compiler. It must be said, though, that one reason I didn't finish it was looking at all the cool ways to optimise :)

        If I were writing/designing a BIOS (which I must admit I am glad I am not) I would also pick C as the "best" language for the job. I'd then write the cleanest possible implementation of the design, using

    • by Anonymous Coward on Tuesday January 27, 2009 @11:19AM (#26623723)

      It's easier to write structured programs in C than assembly.

      Well, it's much easier to write anything in C than assembly, but assembly lends itself to small pieces of self-contained code that do one thing only.

      The idea is that assembly is only used where it needs to be, because you have to do something that you can't do in C, such as fiddling around with the CPU's internal state. The rest is written as a collection of modules in C. To build a BIOS for a particular board, you just link the required modules together.

      That suggests the question "why not write the BIOS in C++, or Java, or whatever". Anything higher-level than C tends to require more complex runtime environments (which are usually written in C), while C requires nothing more than assembly. It's the highest level language commonly available that can run with absolutely no OS support at all.
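
      As a tiny illustration of C running with no OS support at all: a freestanding routine that writes straight into the VGA text buffer, needing nothing beyond a stack (a sketch only; real firmware also has to set up that stack and the hardware first):

      #include <stdint.h>

      /* Freestanding code (think gcc -ffreestanding -nostdlib): no libc, no OS.
       * Writes a string directly into the VGA text-mode buffer at 0xB8000. */
      static void vga_puts(const char *s)
      {
          volatile uint16_t *vga = (volatile uint16_t *)0xB8000;
          for (unsigned i = 0; s[i] != '\0'; i++)
              vga[i] = (uint16_t)(0x0700 | (uint8_t)s[i]);   /* light grey on black */
      }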

    • by Thaelon ( 250687 )

      What is the benefit of writing a BIOS in C over assembly code?

      Maintainability. Further, it's fairly common knowledge that C compilers these days can often produce code that is more efficient than hand-written assembly, so it's a no-brainer to write C instead.

    • by salimma ( 115327 ) on Tuesday January 27, 2009 @11:43AM (#26624103) Homepage Journal

      Doesn't handwritten assembly have the potential to be much faster than assembly compiled from C?

      For a piece of software that gets run once per boot, speed is probably not very critical. A typical BIOS completes its run in a couple of seconds.

      Using an optimizing C compiler also has a further potential benefit -- given that motherboards specifically target certain CPUs, you can optimize the BIOS code for that CPU family. Not sure how much improvement this will yield, though.

    • by FrankSchwab ( 675585 ) on Tuesday January 27, 2009 @12:30PM (#26625027) Journal

      Writing in 'C' is an order of magnitude faster than writing in assembler; if you're building a system with 10 man-years of coding in it, that becomes really, really important.

      Imagine writing a host-side USB stack in assembler; a BIOS has to have that. Or writing an Ethernet driver and TCP/IP stack in assembler. Or any of the other large subsystems of a BIOS; the task would be daunting to me, a 20 year veteran of embedded systems (yes, my 'C' and Assembly mojo is strong).

      Assembler has proven its worth when sprinkled through embedded systems. When profiling finds the routines that are bottlenecks for time-critical functions, a good assembly programmer can often speed up the 'C' code by a factor of 2 to 10. But, this generally involves very small chunks of code - 10 to 50 lines of assembly.

      In most real systems, the vast majority of the code is executed rarely, and rarely has a performance impact. For example, on a modern dual-core, 2 GHz processor with a GB of RAM, the code used to display the BIOS setup UI and handle user input will execute faster than human perception in almost any language you could imagine (say, a PERL interpreter written in VB which generates interpreted LISP). There is no reason in the world to try to optimize performance here. Even in things like disk I/O, the BIOS' job is mostly to boot the OS, then get the hell out of the way.

      • Re: (Score:3, Interesting)

        by billcopc ( 196330 )

        A BIOS actually does not have that much to do, by definition. The problem is that PC architecture is crusty and hasn't evolved all that much since the 80's. It should not be the BIOS' job to handle USB/Ethernet or any other hardware niggling, such feats belong in the hardware's own controller. If each component did its job and presented a uniform, reliable interface to the BIOS, we could be writing very simple BIOSes that glue it all together and give us a simple UI to configure the pre-boot stage. That

    • AMD engineers have also been submitting code to allow CoreBoot to run on the company's latest chipsets, such as the RS690 and 780G."

      Now that would freakin' rock!!!

      Until now, CoreBoot has been really hampered by the fact that it has mostly been supported on server boards [coreboot.org], with little to no support on desktop and laptop chipsets. This is mostly the fault of the chipset/mobo manufacturers, who have zealously guarded their legacy BIOS crap for reasons that are pretty unfathomable to me.

      I would love to be able to run CoreBoot on my Desktop and laptops. It would help to fix soooo many of the legacy BIOS issues that people tear their hair

  • This is awesome (Score:4, Insightful)

    by kcbanner ( 929309 ) on Tuesday January 27, 2009 @10:48AM (#26623113) Homepage Journal
    I'd really like to see the buggy vendor BIOSes get the boot and be replaced by this. The BIOS on my motherboard has all sorts of quirks, from randomly missing one stick of my RAM during detection to really laggy page switches. Windows support is what CoreBoot needs to get accepted.
    • Re:This is awesome (Score:4, Insightful)

      by richlv ( 778496 ) on Tuesday January 27, 2009 @10:54AM (#26623215)

      some fully supported desktop mobos is what coreboot needs ;)
      if a mobo was fully supported, that would be a huge plus when i choose.
      we've seen a lot of issues where even if a bios isn't massively buggy by itself, support for later hardware developments leaves a lot to be desired, and the vendor has dropped support. this includes servers by ibm, hp, desktop boards...
      problems have varied over the years (and i really mean only the problems that can be fixed in bios) - larger disk drive support, larger memory support, proper booting from hdd (for example, ibm netfinity 5000 stops booting when an ide drive is attached), proper booting from all cdroms, usb booting...
      so, amd, if your products will be fully supported by - or even shipped with - coreboot before everybody else, it is very likely that my future purchases will go to you :)

      • Re:This is awesome (Score:4, Informative)

        by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Tuesday January 27, 2009 @03:01PM (#26627773) Homepage Journal

        Each time I do a Coreboot/LinuxBIOS announcement on Freshmeat, I usually add a whole bunch of chipsets and a fair dollop of motherboards. I don't, as a rule, state the level of completeness, simply because there's barely enough space to list just the components.

        Having said that, assume the web page is out-of-date when it comes to fully-supported motherboards. I know for a fact that I've seen a lot more motherboards get listed as complete in the changelog than are listed on the website, even though I started tracking those changes relatively recently and there were plenty of mobos complete even then.

        One of the important things to remember about LinuxBIOS/Coreboot (the new name doesn't have the same ring to it, for me) is that it's a highly modular bootstrap, so it has a high probability of working on just about anything, so long as the components you need are listed and ready. I feel certain that a few good QA guys with a bit of backing from mobo suppliers could pre-qualify a huge number of possible configurations. The developers, as with most projects, don't have time to validate, debug and extend, and their choice has (wisely) been to put a lot of emphasis on the debugging and extending.

        Of course, Coreboot isn't even the only player in the game. OpenBIOS is out there. That project is evolving a lot more slowly, and seems to have suffered bit-rot on the Forth engine, but that's a damn good piece of code and it deserves much more attention than it is getting.

        Intel also Open Sourced the Tiano BIOS code, but as far as I know, the sum total of interest in that has been zero. I've not seen a single Open Source project use it, I don't recall seeing Intel ever release a patch for it. That's a pity, as there's a lot of interesting code there with a lot of interesting ideas. I'd like to see something done with that code, or at the very least an assessment of what is there.

      • by LoRdTAW ( 99712 )

        It's not AMD's decision whether or not Coreboot is used in place of AMI/Award/Phoenix/etc.; that's up to the motherboard makers themselves. Coreboot has to be stable and fully support all the chipsets, CPUs, hardware and operating systems attached to that board. And on top of that it has to have a full tool kit to enable the maker to easily customize the BIOS for their exact board configuration.

        But I will add that I too have been eagerly awaiting a free open source BIOS that will ship on mainstream boards. Problems

    • Re: (Score:3, Insightful)

      by hedwards ( 940851 )

      Well, there are two issues there. One is that vendors haven't cared a lot about getting it right, and two, the BIOS itself as a specification is pretty limited.

      Replacing the BIOS with EFI or something more up to date and extensible could potentially solve the second problem.

      But, ultimately vendors are lazy and tend not to bother doing it right. More often than not they just use a stock BIOS which is itself buggy. Really it's probably the BIOS manufacturers that ought to be taken to task for screwing it up

    • by theJML ( 911853 )

      Wow. Sucks to be your board. I don't think I've ever had any big problems with BIOSes on desktop boards (or even on server boards past the "a few updates after public release" point). My current setup doesn't have ANY that I know of, or care about... and newer versions of the BIOS simply add support for newer procs/steppings.

      Now on server boards, I've only really had problems with boards from Tyan. Though mostly it was because they mislabeled things or couldn't spell ("CPU1 FAN DOESN'T DETECTED" comes to mi

    • Re: (Score:3, Interesting)

      by dk90406 ( 797452 )
      So with Windows support, CoreBoot can be accepted? Which leads to the question: is anyone here using it now?

      What is your experience?

      I would be terrified by the risk of harming my MOBO, but I may be the only one so timid.

      • by jd ( 1658 )

        Reprogramming the BIOS is not a good idea unless you have some method of recovery. This is true whether you are timid or brave. CoreBoot goes through a lot of bugfixes every day, and for all you know tomorrow's patch might relate to a problem with your hardware.

        If there's a way to flash your BIOS externally, such as via JTAG, your number one concern should be to get the hardware you need. Dump the contents of the flash to some backup storage (that you can access without a working flash), then

  • Excuse my ignorance but is it already possible to have a fully working computer that doesn't perform a single unknown operation?

    • Open cores (Score:3, Informative)

      by tepples ( 727027 )

      Excuse my ignorance but is it already possible to have a fully working computer that doesn't perform a single unknown operation?

      Possible? Yes. Feasible for an enthusiast? Not in the first quarter of 2009. Intel and AMD CPUs contain secret microcode. There exist Free CPU cores such as the MIPS-compatible Plasma [opencores.org], but as far as I know, none are commercially fabricated in significant quantities.

  • I've been buying Intel because they support their 3D graphics with open source code really well under Linux, unlike AMD/ATI.
    But Coreboot says to support AMD, because AMD helps them run on AMD chipsets, unlike Intel.

    Help!

    • by KasperMeerts ( 1305097 ) on Tuesday January 27, 2009 @11:08AM (#26623521)
      What's more important to you? OSS graphics drivers or OSS BIOS? And by the way, if you need a decent graphics card, you're gonna need ATI or nVidia anyways, Intel doesn't make really high performance cards.
      • by 77Punker ( 673758 ) <(spencr04) (at) (highpoint.edu)> on Tuesday January 27, 2009 @11:15AM (#26623643)

        Also, ATI has open source 2D drivers and just yesterday released specs that should allow for good open source 3D drivers. Sometime in the next 6 months, their graphics cards should support OpenCL, too. ATI is the way to go for open hardware support at the moment.

      • I beg to differ: if you want a stable system capable of running well, consuming little power, suspending, doing compositing, flash, etc., Intel is the only choice. Unless you play recent games, Intel really kicks the pants off ATI/nvidia for stability (the same can be said for the Windows drivers, tbh).

    • Wait for Larrabee, which should be available as a discrete card. Buy AMD motherboards and CPUs, and Intel video cards.

      Or wait longer, for the open source ATI drivers to start working -- most of the specs have been released.

    • I have an ATI 4850, it doesn't run all that well in Linux but I can guarantee that it will still run better than whatever Intel is trying to pass off as a graphic card.
  • Why is it "news" that it can launch Windows 7?

    I guess what I am struggling to understand is why the news isn't - "CoreBoot can now boot x86 operating systems."

    What is special about Windows 7 that made it harder to boot / run under CoreBoot as opposed to Windows XP or 2003 Server?

    Should a BIOS even be able to tell the difference between booting Linux and Windows bootloaders?

    • by Anonymous Coward on Tuesday January 27, 2009 @11:34AM (#26623951)

      Booting Linux (and other free operating systems) is relatively simple: they have to be quite robust against quirks in the BIOS, as they're usually not part of the BIOS vendors' test suites.
      It's also possible to boot Linux (and a smaller set of other free operating systems) without any PCBIOS interface (int 0x13 etc.), as they don't rely on it.

      Windows does. There has been, for a couple of years, a useful, but very fragile hack called ADLO, which was basically bochsbios ported onto coreboot, to provide the PCBIOS.
      Recently, SeaBIOS (a port of bochsbios to C) appeared and was a more stable, more portable choice (across chipsets) in that regard.

      So yes, we're proud that we can run the very latest Microsoft system, simply because it's less a given than booting Linux.
      Even VirtualBox (commercially backed, and all) seems to require an update (very likely to its BIOS!) to support Windows 7. "We were first" ;-)

      • I've run Windows 7 in VirtualBox. I told it it was "Vista", because it didn't know about 7 yet.

        No update needed.

      • That's a shame; I naively assumed that this meant Windows 7 was no longer relying on the BIOS to report system information.

      • by dargaud ( 518470 )
        I had to write a bootloader for an embedded PPC board recently (and the associated Linux kernel). I was really surprised at how easy it was. Basically three lines of C: main(), an almost-fake function using a static pointer, and a return!
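
        Purely as a guess at the shape such a loader might take (this is not the poster's actual code; the address and names are invented), the "static pointer and a return" trick could look roughly like:

        /* Hypothetical sketch: jump to a kernel image already placed at a fixed
         * physical address by whatever loaded it (the address is made up). */
        typedef void (*kernel_entry_t)(void);

        static kernel_entry_t kernel_entry = (kernel_entry_t)0x00800000;

        int main(void)
        {
            kernel_entry();   /* never returns if a kernel is really there */
            return 0;
        }
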
  • Just a random ramble, but why change the name from LinuxBIOS? Surely it would have been easier to point out the irony of Windows needing Linux to start itself. Maybe it would have got some people to think more of the capabilities of Linux then?

    • by anothy ( 83176 )
      because it isn't linux, and really has nothing to do with linux. the original name was a marketing gimmick.
      • Re: (Score:2, Informative)

        by Daengbo ( 523424 )

        What is Linux BIOS?

        We are working on making Linux our BIOS. In other words, we plan to replace the BIOS in NVRAM on our Rockhopper cluster with a Linux image, and instead of running the BIOS on startup we'll run Linux. We have a number of reasons for doing this, among them: ... [LinuxBIOS.org, Aug. 2000 [archive.org], at the bottom of the page]

        You're wrong.

    • by McFly777 ( 23881 )

      This may not be the reason that this project changed its name, and IANAL, so take this with a block of salt, but one reason I can think of immediately is trademark dilution. Since the BIOS has little to do with Linux (and vice versa), using Linux in the name simply confuses things by suggesting a connection that isn't there. Really "Open BIOS" is more accurate than Linux BIOS, and CoreBoot is probably better yet from a trademark standpoint.

      Now, before somebody else says it, I read in another thread tha

    • the irony of Windows needing Linux to start itself

      First, CoreBoot isn't Linux. Second, Windows doesn't need it; plenty of closed-source BIOSes will boot Windows.

      The irony is that [closed-source] Windows can now use an open-source BIOS to boot itself, which reflects on the capabilities of OSS and not necessarily of Linux in particular.

    • Because CoreBoot doesn't run Linux, the new name is more accurate than LinuxBIOS.

  • Given the slow move towards EFI, would it not make sense to make CoreBoot an EFI loader, with BIOS support as an option? If it is EFI-compatible, I couldn't see that clearly marked on the website.

    • Re:EFI? (Score:5, Informative)

      by mhatle ( 54607 ) on Tuesday January 27, 2009 @11:42AM (#26624083) Homepage

      EFI is useful in the same way Open Firmware on PowerPC and SPARC is useful. It gives you an extensible system that can do different things with devices. This is great on a system where you don't know what the hardware may be (i.e. workstations), but starts to fall down when you get to servers, blades or embedded systems.

      On most systems these days a BIOS of any type takes between 3 and 30 seconds to boot to the OS. This is simply not acceptable for many blade and embedded system designs (and it isn't acceptable for some server designs either).

      I can boot a system with coreboot in a second or less to the OS. This is really the most important part of coreboot. (For embedded systems, most of the time our target is in the 0.2 to 0.5 second range from power on to OS start... this all but excludes ia32 from many embedded applications today.)

      • The only EFI I deal with is on Itanium servers. Not your typical "system where you don't know what the hardware may be".

      • Re:EFI? (Score:5, Informative)

        by Cyberax ( 705495 ) on Tuesday January 27, 2009 @12:48PM (#26625367)

        EFI allows lightning-fast boot.

        First, you can put your kernel in EFI (if there's enough flash) and boot it directly from there.

        Second, EFI itself is pretty efficient - you have access to lots of RAM, the CPU works in protected mode, etc.

        It's quite possible to have 1 second until kernel startup with EFI. Almost like on my 166 MHz MIPS board :)

    • Re: (Score:2, Interesting)

      Coreboot by itself is initialization firmware only. That means it doesn't provide any callable interfaces to the operating system or its loader, so you cannot ask coreboot to load a block from disk. That's where BIOS, OpenFirmware and (U)EFI come into play to fill the gap. They don't define the firmware, but its interface.

      I haven't read the article, but I'm quite sure that they're using SeaBIOS - running on top of coreboot - to boot Windows. In this setup, coreboot performs hardware initialization and Sea

  • Why???? (Score:3, Funny)

    by Archangel Michael ( 180766 ) on Tuesday January 27, 2009 @11:20AM (#26623745) Journal

    My view is .. "it ain't done till Windows 7 doesn't run"

    I think they should tweak it so that Windows 7 boots, but every 30 mins or so it crashes. Call it ... Karma!

  • by bogaboga ( 793279 ) on Tuesday January 27, 2009 @11:23AM (#26623793)

    I wish we in the OSS world had "Open Source" printer chips and a toner formula. These would enable anyone with the ambition to build "free" printers instead of shelling out dough to these greedy companies.

    • It's an interesting idea. I think open source toner might be a bit tricky (likely the manufacturing process is difficult) but it certainly might be possible to work out the components for an open source driver board that could be used as a module in existing printers, bypassing the "chips" and allowing the simple use of third party toner. It might then be possible to move forward with open source printer hardware from there.

    • I wish we in the OSS world had "Open Source" printer chips and toner formula. These would enable anyone with the ambition, to build "free" printers

      The start-up in hardware and consumables needs more than ambition - he needs divine intervention.

      The competition is Canon - Dell - HP - Lexmark.

      The competition is guaranteed shelf space in every WalMart.

      Every drugstore or mini-mart in a town big enough to rate a single traffic light.

    • The answer is simple: just don't buy a printer that requires chips embedded in the toner cartridges. Most laser printers, AFAIK, still don't. Even my HP Laserjet 2300 doesn't require it (though it will give you a warning message when you turn on the printer with a non-HP cart, but that doesn't matter).

      It's mainly the stupid inkjet printers that have this problem, and anyone stupid enough to use an inkjet printer deserves to be ripped off, when laser printers are so much cheaper to operate, and only cost $

  • AMD Geode (Score:4, Interesting)

    by chill ( 34294 ) on Tuesday January 27, 2009 @11:39AM (#26624019) Journal

    Looking at the CoreBoot site, it seems their best support is for the AMD Geode chips. It is ironic that this Slashdot article comes right after the one saying AMD has no successor planned for the Geode line and it may fade away.

  • I noticed in the first system info image that the processor and memory fields said "Not Available" but in the 2nd image labeled "system information closeup" it then shows data populating those fields.

    Maybe I've been trained to look for tricks in marketing campaigns run by Microsoft, but these kinds of details should not be overlooked, or else they give the sense that something has been falsified.

    LoB

  • It deserves praise and support previously reserved for deities. I'm only overstating this a little.

    The benefits of CoreBoot are many:
    1. Cheaper motherboard manufacturing. TPM chips are expensive components when mass producing motherboards.

    2. CoreBoot gives hardware manufacturers a viable market that Microsoft and Apple cannot touch.

    3. Keeps the hardware open for all operating systems and devices. This simple fact cannot be stressed enough. As the world slowly migrates to 64-bit everything, Microsoft has

  • BIOS is way, way obsolete. The bigger question is whether or not Windows 7 will be bootable on EFI machines. This article [is.gd] says that Windows 7 "delivers support" for EFI. It is likely that means that it will be bootable on EFI equipped machines, but there's wiggle room there.

    EFI and GPT offer real improvements beyond merely sweeping away the legacy cruft of decades of backwards compatibility. It's time to move on.

  • With Intel, Apple, and rapidly the entire rest of the x86, x64, and IA64 hardware world moving to (or explicitly running on, in the case of Apple and IA64) EFI, what is the whole point of CoreBoot besides being a nifty experiment? Intel won't let anything replace EFI anytime soon. Trust me - I've had EFI shoved at me for almost a decade.
    • BIOSes only boot MBR/FDISK disks, and MBR tops out at 2TB (it uses 32-bit sector addresses, and 2^32 sectors of 512 bytes is 2 TiB). So now that the largest drive MBR can support has been announced, isn't it time to move forward to a new boot system that can boot GPT disks (which can go well past a petabyte)?

      EFI can do this, I can't see why we shouldn't be going to it.

      • by EXMSFT ( 935404 )
        Absolutely. EFI can support BIOS-based implementations as well (witness BootCamp on the Intel Macs - which does exactly that). Bye bye BIOS.
  • by SpaghettiPattern ( 609814 ) on Tuesday January 27, 2009 @02:44PM (#26627521)
    I thought: "Wicked! My mojo will finally be complete! Shagadellic!"

    Then I read there's even a "Flashrom Live CD". < 185MB. Instant and spontaneous ejaculation!

    Then it came to me:

    This page is a work in progress. There is no ISO yet. This is just a plan for now.

    This is exactly like a fine woman looking at you. Saying she will give it to you. Eventually. Maybe. Cock teaser.

    Still a beautiful woman I like having around.

    Now please excuse me. I'll be off masturbating.

  • ...the linux bootloader! (We'll have to wait for the desktop.)
