Why Xbox One Backward Compatibility Took So Long (ign.com) 62
A new report from IGN this morning explains why it took so long for backwards compatibility to be supported on the Xbox One. Microsoft veteran Kevin La Chapelle says the answer to the question can be found in 2015 -- the year that Phil Spencer announced backwards compatibility at Microsoft's Xbox E3 media briefing. From the report: The fan-first feature evolved from an experiment conducted by two separate Microsoft Research teams into a service planned for the Xbox One's launch -- complete with hardware hooks baked into the Durango silicon -- until the well-publicized changes to the Xbox One policies (namely, stripping out the always-online requirement for the console) forced it onto the back burner. It's obviously back for good now, and expanding into original Xbox compatibility for select titles on Xbox One (the first batch of which we announced today). Even the Xbox One X is getting involved, with a handful of Xbox 360 games getting Scorpio-powered enhancements like 10-bit color depth, anisotropic filtering, and up to 9x the original pixel count displayed on screen. [...]
It was 2007. One of [the research] teams was working on PowerPC CPU emulation -- getting 32-bit code, which the 360 uses, to run on the 64-bit architecture that the third-generation Xbox would be using. The other team, out of Beijing, started writing a virtual GPU emulator based on the Xbox 360 GPU architecture. "These were like peanut butter and chocolate," Microsoft VP of Xbox software engineering Kareem Choudhry recalled. "[So we thought,] 'Why don't we put them both together?'" Choudhry did just that, and so the first steps to Xbox One backwards compatibility were taken, long before the console had a name or anything remotely resembling final specifications. As Durango crystallized, so too did plans for Xbox 360 compatibility on the new machine. "This was primarily a software exercise, but we enabled that by thinking ahead with hardware," Gammill explained. "We had to bake some of the backwards compatibility support into the [Xbox One] silicon." This was done back in 2011. Preliminary tests showed that support for key Xbox middleware -- the XMA audio and texture formats -- was extremely taxing to do in software alone, with the former, Gammill noted, taking up two to three of the Xbox One's six CPU cores. But a SOC (system on chip) -- basically an Xbox 360 chip inside every Xbox One, similar to how Sony put PS2 hardware inside the launch-era PS3s -- would've not only been expensive, but it would've put a ceiling on what the compatibility team could do. "If we'd have gone with the 360 SOC, we likely would've landed at just parity," he said. "The goal was never just parity." So they built the XMA and texture formats into the Xbox One chipset...
Re: (Score:1)
As software goes, console games are architecturally horrible. This is mainly a legacy of the 8- and 16-bit consoles, where it actually mattered whether a program made a syscall (generally implemented as data-dependent branches), so optimizations like inlining are still looked upon favourably even as they tie the program to the platform, right down to the hardware registers. Those optimizations have been worthless since the race to half a gigahertz ended and RAM latency began to really get out of control, because since then syscall stubs (etc.) have been cacheable just like any other hot-path code, so doing a massive number of them in a loop is no longer an obstacle to effective utilization of the hardware.
The lesson here is that one can always trust Microsoft to code like an obsessive twentysomething.
You have been watching too many Turboencabulator videos.
Re:That's to say: (Score:5, Insightful)
I've worked on original Xbox and Xbox 360 games, and I can't figure out what you're talking about. Microsoft's consoles are nothing like those in the 8 or 16-bit days, when game code interacted directly with the console hardware, programming "to the metal", as it were.
Rather, modern console development is actually pretty similar to programming on Windows (which shouldn't be surprising): you write your code in C++, make DirectX and system API calls, and mostly rely on the compiler for low-level optimizations, while code architecture plays more of a part in high-level optimization. There are low-level intrinsics you can use, but these are typically used quite sparingly in very performance-sensitive code, such as in your vector or matrix class operations, or for lockless thread-safe containers, etc.
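To make the "used sparingly" point concrete, here is a minimal sketch -- the Vec4 type and its layout are purely illustrative, not from any shipped codebase -- of the kind of SSE intrinsic code that tends to live inside a math library's hot paths:

    // Hypothetical Vec4 wrapper with an SSE dot product -- the kind of narrow,
    // performance-sensitive spot where intrinsics typically show up.
    #include <xmmintrin.h>  // SSE intrinsics

    struct Vec4 {
        __m128 v;  // four packed floats, lane 0 = x

        Vec4(float x, float y, float z, float w) : v(_mm_set_ps(w, z, y, x)) {}

        // Dot product: packed multiply followed by a horizontal sum.
        float Dot(const Vec4& o) const {
            __m128 mul  = _mm_mul_ps(v, o.v);                 // x*x', y*y', z*z', w*w'
            __m128 shuf = _mm_shuffle_ps(mul, mul, _MM_SHUFFLE(2, 3, 0, 1));
            __m128 sums = _mm_add_ps(mul, shuf);              // pairwise sums in lanes 0 and 2
            shuf = _mm_movehl_ps(shuf, sums);                 // bring the high pair down
            sums = _mm_add_ss(sums, shuf);                    // total in lane 0
            return _mm_cvtss_f32(sums);
        }
    };

Everything around code like this stays ordinary portable C++; the intrinsics are fenced off inside a handful of small, heavily used routines.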
Re: (Score:1)
>Microsoft's consoles are nothing like those in the 8 or 16-bit days,
Yet you insist on reading me like I had said so!
To clarify just for you: I'm pointing out that inlining (among other things) is a petty optimization of the sort that was relevant twenty years ago, which Microsoft holds on to for reasons undiscussed and with effects unmeasured. This makes programs for their consoles _fucking awful_.
Re: (Score:1)
I don't get what the GP commenter is going on about. The original Xbox isn't musty old 8- or 16-bit hardware. It's got a Pentium 3 processor in it -- hardly the kind of processor you do 'inline optimization'-type coding on. I suspect there are lots of driver programmers writing ASM optimizations for Pentium 3 processors, but I doubt very much that there were/are many console game coders touching ASM on one.
Re: (Score:2)
Re: (Score:2)
I don't kn
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Inlining is still a valuable optimization technique for when the overhead of a function call exceeds the execution cost of the function body, but obviously this has to be weighed against cache size considerations. This is not something videogame developers are unaware of. For the past several decades, the general rule of thumb has been to avoid excessive inlining, because it bloats code and works directly against instruction-cache locality.
Generally speaking, when writing C++ code, you have some control of this via how code is organized,
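As a rough sketch of that trade-off (the Entity type is hypothetical; the keywords are the standard MSVC and GCC/Clang inlining controls), a codebase will typically force inlining only for trivial hot accessors and explicitly block it for large cold paths:

    // Sketch of the trade-off: a trivial accessor is a good inline candidate
    // (the call overhead dwarfs the body), while a large, rarely taken path is
    // better kept out of line so it doesn't bloat every call site.
    #if defined(_MSC_VER)
      #define FORCE_INLINE __forceinline
      #define NO_INLINE    __declspec(noinline)
    #else
      #define FORCE_INLINE inline __attribute__((always_inline))
      #define NO_INLINE    __attribute__((noinline))
    #endif

    struct Entity {
        float health = 100.0f;

        // Tiny hot accessor: inlining removes a call that costs more than the body.
        FORCE_INLINE float Health() const { return health; }

        // Cold, comparatively large path: keeping it out of line spares the
        // instruction cache at its many potential call sites.
        NO_INLINE void OnDeath() {
            health = 0.0f;
            // ... drop loot, play the death animation, notify scripting, etc.
        }
    };

The point is the one made above: inlining pays off only where the call overhead outweighs the body, and costs instruction-cache footprint everywhere else.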
Re:That's to say: (Score:5, Interesting)
Do you know where the Xbox got its name? Microsoft wanted to make a box that runs DirectX and sits on your TV. This "DirectX Box" got shortened to "Xbox", which stuck as the final product name.
The very core idea driving the project was that the system would run Windows and DirectX. You're coding against high-level APIs on the Xbox, and you can't opt out of it. It was a key design goal to have people coding against Microsoft's APIs instead of coding to the hardware.
As for the 8- and 16-bit consoles, syscalls were irrelevant. There was no operating system on those systems. You might have had some *really* simple, common features coded into the system ROM, but otherwise *all* the code was contained within the individual game. Those systems had only a bare minimum of storage built in -- just enough to hold the code needed to boot the system and hand control off to the game. Including more storage in the console was too expensive to justify any significant amount of built-in code.
Re:That's to say: (Score:4, Insightful)
Then they failed. Fucking utterly failed.
Anyone who's written DirectX shader code for the Xbox knows about the limitations of the hardware and the optimizations you had to make for it, let alone the custom texture compression scheme that was used on no other platform (DXT != S3 despite Shawn Hargreaves saying otherwise).
Then you had XACT, the "Cross Platform" Audio Creation Tool, creating collections of .xgs (Xbox Global Settings), .xsb (Xbox Sound Bank) and .xwb (Xbox Wave Bank) files with custom audio compression schemes just for the Xbox audio hardware. They gave excellent results, but it was in no way portable to Windows nor any other gaming platform.
Re: (Score:1)
Then they failed. Fucking utterly failed.
Anyone who's written DirectX shader code for the Xbox knows about the limitations of the hardware and the optimizations you had to make for it, let alone the custom texture compression scheme that was used on no other platform (DXT != S3 despite Shawn Hargreaves saying otherwise).
Then you had XACT, the "Cross Platform" Audio Creation Tool, creating collections of .xgs (Xbox Global Settings), .xsb (Xbox Sound Bank) and .xwb (Xbox Wave Bank) files with custom audio compression schemes just for the Xbox audio hardware. They gave excellent results, but it was in no way portable to Windows nor any other gaming platform.
Who said ANYTHING about portability being a design goal? Everything you described used layers and layers of Microsoft APIs, no? I'm not making a sideways joke about Microsoft's overall strategy, but you could take it that way. They nailed this design goal.
Re: (Score:2)
Then they failed. Fucking utterly failed.
I'd say the product was successful.
Re: (Score:2)
I do not get why most consoles are not backward compatible. What do MS, Sony, Nintendo, etc. think -- that we all want hundreds of consoles standing around just to be able to play both old and new games?
The technical reason or the economic one?
Technically, all three companies switched CPU architectures, and not just from one CPU generation to the next: MS (PowerPC -> x86), Sony (Cell -> x86), Nintendo (PowerPC -> ARM). On the GPU side the changes were smaller -- MS (ATI -> AMD), Sony (NVIDIA -> AMD), Nintendo (ATI -> NVidia) -- with MS having the easiest transition. Games can be emulated if the newest console has enough CPU/GPU processing power. However, the trend for this generation is that the
Re: That's to say: (Score:1)
Syscall on an 8-bit system? There was no OS, no standard interrupts like in DOS, and no ROM firmware like old BIOS.
The parent poster seems to not understand old or new consoles.
Re: (Score:2)
Oh, do check out an actual 8-bit system. You'll find ROM entry vectors everywhere -- unless you figure the syscall routines themselves somehow fit in the three bytes of the call...
Re: (Score:2)
Short answer: (Score:3)
One simple reason: Microsoft did what they do best (Score:3)
Re:One simple reason: Microsoft did what they do b (Score:5, Insightful)
Re: (Score:3, Interesting)
(1) They did include at least some backwards compatibility in the hardware (the XMA audio and texture formats), which may or may not have been necessary. (2) The whole point the GP was making was precisely that they didn't go the simple route: including a full 360 SOC.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Sony also have a software emulator, although the early PS3 hardware included a complete PS2 because the software emulation wasn't ready at the time.
Re:One simple reason: Microsoft did what they do b (Score:4, Insightful)
Re: (Score:3)
In particular, how they're emulating PowerPC LL/SC on x86 without heavy-handed methods such as virtualizing all memory accesses to LL'd pages with the MMU.
Re: (Score:3)
If Microsoft were smart, they would have implemented a dynamic recompiler similar to what Apple did with Rosetta, or what MAME, Dolphin, and other emulators do. Writing a dynamic recompiler for the PowerPC isn't exactly a new thing.
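For anyone unfamiliar with the term, the heart of a dynamic recompiler is a translation cache: a guest block is compiled to host code the first time it is reached and reused on every later visit. The toy sketch below shows only that dispatch/cache shape -- all names are illustrative, the "compiler" is a placeholder, and this is not a claim about what Microsoft, MAME, or Dolphin actually do:

    // Toy illustration of the dispatch/translation-cache structure at the heart
    // of a dynamic recompiler. Real code generation is omitted: CompileBlock
    // stands in for emitting host machine code for a guest basic block.
    #include <cstdint>
    #include <functional>
    #include <unordered_map>

    struct GuestCpuState {
        uint64_t gpr[32] = {};   // guest general-purpose registers
        uint32_t pc = 0;         // guest program counter
    };

    // A compiled block runs some guest instructions and returns the next guest PC.
    using CompiledBlock = std::function<uint32_t(GuestCpuState&)>;

    class TranslationCache {
    public:
        // Look up (or lazily "compile") the block starting at guest address pc.
        const CompiledBlock& Get(uint32_t pc) {
            auto it = cache_.find(pc);
            if (it == cache_.end())
                it = cache_.emplace(pc, CompileBlock(pc)).first;
            return it->second;
        }

    private:
        // Placeholder: a real recompiler would decode guest instructions here and
        // emit equivalent host code into an executable buffer.
        static CompiledBlock CompileBlock(uint32_t pc) {
            return [pc](GuestCpuState& cpu) -> uint32_t {
                cpu.gpr[1] += 1;   // pretend the block did some work
                return pc + 16;    // pretend it fell through to the next block
            };
        }

        std::unordered_map<uint32_t, CompiledBlock> cache_;
    };

    // The main loop: translate once, reuse on every later visit to the same block.
    inline void Run(GuestCpuState& cpu, TranslationCache& tc, int steps) {
        for (int i = 0; i < steps; ++i)
            cpu.pc = tc.Get(cpu.pc)(cpu);
    }

A real recompiler replaces the placeholder with emitted host machine code, invalidates cached blocks when guest code is modified, and chains blocks together to skip the per-block lookup.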
Re: (Score:1)
Re: (Score:3)
I bet the Xbox One CPU has more power than the CPUs in the first Intel Mac systems (where Apple used a PPC dynarec to support PPC apps on Intel Macs).
Re: (Score:2)
What's that got to do with emulating LL/SC? Look it up; the issue is a "little bit" more involved than emitting the right instruction.
Re: (Score:2)
I think it probably is even more involved when it comes to emulating the AltiVec instructions. LL/SC would definitely pose a challenge since those are used for atomic operations on many RISC processors.* This is different from how Intel implemented atomic support, with compare/exchange.
The next big thing going forward in terms of atomic support is transactional memory. [wikipedia.org]
*Modern RISC processors are moving away from LL/SC toward dedicated atomic instructions like add, increment, swap, etc. due to scalability issues.
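For what it's worth, a common way emulators approximate lwarx/stwcx. on x86 -- offered here as a sketch of the general technique, not a claim about Microsoft's emulator -- is to record the value observed by the load-reserved and turn the store-conditional into a compare-exchange, accepting slightly weaker (ABA-prone) semantics:

    // A common approximation of PowerPC LL/SC on x86: the emulator remembers the
    // address and value observed by the "load reserved", and the "store
    // conditional" succeeds only if the memory still holds that value. This can
    // miss ABA changes, which is part of why the problem is harder than emitting
    // one instruction.
    #include <atomic>
    #include <cstdint>

    struct Reservation {
        std::atomic<uint32_t>* addr = nullptr;  // location of the reservation
        uint32_t value = 0;                     // value observed at lwarx time
    };

    // Emulated lwarx: load and record a reservation for this virtual CPU.
    inline uint32_t EmuLwarx(std::atomic<uint32_t>& mem, Reservation& r) {
        r.addr = &mem;
        r.value = mem.load(std::memory_order_acquire);
        return r.value;
    }

    // Emulated stwcx.: succeeds only if the location still holds the value seen
    // at lwarx time; otherwise the guest code retries its loop.
    inline bool EmuStwcx(std::atomic<uint32_t>& mem, Reservation& r, uint32_t newValue) {
        if (r.addr != &mem) return false;       // reservation was for another address
        uint32_t expected = r.value;
        bool ok = mem.compare_exchange_strong(expected, newValue,
                                              std::memory_order_acq_rel,
                                              std::memory_order_acquire);
        r.addr = nullptr;                       // reservation is consumed either way
        return ok;
    }

The ABA weakness, and the need to track a reservation per emulated hardware thread, are exactly why this is a "little bit" more involved than emitting one instruction.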
Re: (Score:1)
That would be a dumb move. Games are not the same as typical desktop applications, which keep the CPU mostly idle, waiting on user input. None of the high-performance applications performed well under Rosetta. Console games work the CPU, GPU, and memory hard to extract maximum performance. The problem Apple was solving is not in the same domain or at the same scale. Perhaps it appears that way to non-technical people such as yourself.
So much for cheap old Xbox games (Score:2)
Re: (Score:2)
Did it need a GPU emulator? (Score:1)
Re: (Score:2)
Silly console peasants (Score:2)
Join me and ascend to godhood, for you are weak! [youtube.com] And I am mighty!
32 bit on 64 bit (Score:3)
Gesture (Score:1)