Explaining Disappointing XScale Performance In Pocket PCs 133
JYD writes: "I found this new article on a Pocket PC web site where Microsoft talks about why XScale Pocket PCs aren't as fast as people thought they would be. Is it the OS? The CPU not supporting ARMv4 properly? I wonder if the Linux port would run faster at 400 MHz ... or did Intel screw up the CPU?"
You think that's slow (Score:3, Interesting)
The CPU will be fully delay-insensitive and asynchronous, to stop power and clock-glitch attacks.
We are currently looking at 4 MHz on a 0.18 µm process.
Re:Judging by modern Linux DEs.... (Score:2, Insightful)
If you take Linux (source-based, optimized for your CPU) and a modern window manager like Enlightenment (if you think it's not modern, prove it in a reply) with a preemptive kernel, and put it on a ~500 MHz Celeron with 128 MB of PC100 SDRAM, it WILL BEAT Windows 98 in speed, although it is different.
I just don't see how people assume KDE and Gnome are "modern" because they resemble Windows. Is that the trend?
Re:Judging by modern Linux DEs.... (Score:1, Informative)
Re:Judging by modern Linux DEs.... (Score:1)
It's hard to build a modern computing environment on top of a non-modern operating system (Unix, Linux). Again, GNU/Linux *not* being modern isn't a dis, it's just the way things are. It works. I use it on my machine, largely as a host for a more modern OS (but far more spartan at this point). And no, I'm not talking about Mac OS X being some epitome of "modern"; it's based on all the same things. Unix is so.... 50s in its ideology.
And before a bunch of escaped slashtards flame me, I repeat: THIS IS NOT A BAD THING. If you like Linux, GNOME, or KDE, use it. Some of us, however, are not satisfied with the limits they impose on the way we work and are trying to forge something new.
Enlightenment isn't any more modern than GNOME or KDE. But it may be a little more fun.
Re:Judging by modern Linux DEs.... (Score:1)
Uhm, it was first invented in '69 and improved to what we would recognize as Unix in the early '70s.
Re:Judging by modern Linux DEs.... (Score:2)
But most of the core ideas were developed in the 50s. Just because Win98 came out in 1998 (or was it '99?) doesn't mean its technology is of the 90s. That's pretty much from the 50s too.
I should've known that comment would confuse many.
Re:Judging by modern Linux DEs.... (Score:3, Informative)
Computing in the 50's was a very different thing, so limited that the idea of wasting cycles on things like memory management or protected memory would have been considered insane. It wasn't until hardware developed to the point where there were cycles and memory to spare that anything like Unix (or MULTICS, which is where most of Unix's ideas were developed) became possible.
himi
Re:Judging by modern Linux DEs.... (Score:3, Informative)
Linux with KDE is slower than Windows 98 basically for two reasons. The first is that Linux does more stuff. For instance, it runs various daemons in the background to allow for remote access, it journals its filesystems, it implements proper crash protection, it has a usable command line with virtual terminals, etc. Windows 98 doesn't have these things, so it can be faster.
The second reason is that KDE is written largely in C++, and the Linux linker handles C++ inefficiently (it is much faster with C). The programs run fine, but they take longer to start up, which is what makes it "feel" slow. GNOME should in theory be faster, but they kill any speed increase they'd otherwise get by having a slower (well, in v1.4) graphics library and by using incredibly heavy things such as CORBA for IPC, a daemon for configuration, etc.
The reason other window managers (not just ancient ones; others such as WindowMaker or E) are faster is that a) they are simpler and b) they tend to be written in C.
The speed of GTK is improving, though CORBA/ORBit will always be slow on the GNOME side, IMHO. The Linux linker issues with C++ are known and are being resolved, which will lead to much better performance.
Another problem is that some modern distros are quite bloated. My SuSE 7.3 box loads all sorts of stuff at startup that I don't actually need, but I never got around to switching it off. Combined with the slow start of KDE and the fact it loads after login (which windows does before login), and it begins to feel slow.
Performance is improving, however it's still largely in the hands of the GNU folks and the distro companies.
thanks -mike
Re:Judging by modern Linux DEs.... (Score:2)
I don't know about the graphics library thing, but the GNOME ORB is somewhat stripped down to make it faster. Unless you actually serve objects remotely over the net, the GNOME CORBA ORB basically just adds a little bit of function call overhead. I'm willing to accept that tiny bit of overhead for a tested, industrial-strength object model like CORBA. KDE, as I understand it, is inventing their own somewhat lightweight object model, and I'm worried they will later find some situation where they wish they had left in a feature they stripped out to make it lightweight.
As for Win98, don't forget that it is a candy coating around Win95, and Win95 was aggressively optimized for size and speed. The target machine for Win95 was a 486 with 4MB of RAM. There is a bunch of assembly language in there, in critical places; and some of it is even 16-bit code. (16-bit code is much harder to write, since you have to cram things into smaller spaces and you have to explicitly handle near/far pointers, but it's tiny! Even with the thunking overhead, it won't slow you down too much if you just use it rarely.) Also don't forget that the MS C compiler actually does produce very good code, better than GCC is able to now. (Although I hear good things about the latest version of GCC, I don't think they have caught up with MS C yet.)
The tiny size of the Win95 core means that it caches well, too, and a high cache hit rate makes for speedy performance.
I'd be interested to see benchmarks of Win98 vs. a really stripped-down Linux system (no daemons running, etc.) that was compiled with aggressive optimizations and is running a really lightweight window manager (IceWM or ROX or something). And definitely no Nautilus; try your system with ROX Filer instead. I saw a huge speed jump when I did that. (Debian makes it so easy to try such experiments!)
There is still room left in Linux-based systems for size and speed improvements. Every time GCC gets better, every part of the system gets a little better. And I don't believe that much work has been done on either GNOME or KDE to make it stripped-down lean-and-mean... the first law is "make it work before you make it faster", and folks are still busy making it work right. (But Nautilus has had a lot of speed work done on it lately, and I've heard it is much improved compared to its 1.0 release.)
steveha
Re:Strange (Score:1)
Cant find the link but (Score:3, Interesting)
Re:Cant find the link but (Score:1)
I was off in my above assessment BTW. Snippet from the French:
A small video test under Pocket TV as proof: iPaq @ 206 MHz: 23 fps; XScale @ 400 MHz: 19 fps! XScale @ 200 MHz: 14 fps!!
Maths (Score:2)
Re:Cant find the link but (Score:5, Interesting)
Re:Cant find the link but (Score:1)
Re:Cant find the link but (Score:1)
Re:Cant find the link but (Score:2, Informative)
That's not talk, that's regurgitation (Score:2)
On the other hand, Intel often gives little thought to enhancing performance of old code on new processors. If memory serves me right, Intel's Pentium Pro ran 16-bit code embarrassingly slowly.
-jhp
Re:That's not talk, that's regurgitation (Score:1)
No, Intel gave a lot of thought to that. It takes several years to develop a complex CPU like the Pentium family. They just thought that Micro$oft would have a 32-bit operating system out by the time the PPro was released. Oops! Windows 95 wasn't completely 32-bit despite all the "32-bit" marketing and hype. And it was so new that everyone was still running a lot of 16-bit Windows 3.x software. Of course, performance was much better if you were running a Real Operating System ;)
Re:That's not talk, that's regurgitation (Score:2)
Besides, the much-vaunted new feature of the PPro was the CISC->RISC translator, and it shouldn't take much to rejig that to handle 16-bit mode more effectively if the market (asses that they are) demands it.
-jhp
Re:That's not talk, that's regurgitation (Score:1)
Yep. Been there, done that. Well, almost. Computer Architecture was probably my favourite class in Uni. We didn't implement it ourselves, but that was about 90% of the class lectures and notes. Starting with simple logic gates, we went through how to build registers, latches, ALUs, register files, all the way up to pipelining. Fascinating stuff if you can stick through it all and have a great lecturer! It really gives you an appreciation of how the stuff works.
I didn't know that. All I knew was that Intel was betting on M$ migrating everyone over to 32-bit software by the time it was released to market. Considering their close deals in the past, I'm sure this was based on information that M$ had given them.
To end this post on a non-anti-MS note, the CISC-RISC converter is software upgradeable. Recent Linux kernels provide a /dev/microcode device so that you can feed it a file (presumably) supplied by Intel. See http://www.urbanmyth.org/microcode/ [urbanmyth.org] for more information.
Amulet cores (Score:5, Interesting)
The Amulet 3 [man.ac.uk] runs at 120 MHz and consumes very little power. Best of all, it's asynchronous, so when you don't have much processing to do it just sits there consuming "no" power.
They take a hell of a beating and still run. I connected one to a hamster wheel, and you can see it here [man.ac.uk] running despite the power fluctuating madly.
The only reason it only goes at 120 MHz is that the memory isn't fast enough.
It's a little strange that only three ARM production licences were given out: one to Intel, one to Motorola, and one to the Amulet group.
Re:Amulet cores (Score:1)
When you say 120 MHz do you mean that it has the equivalent performance of a 120 MHz ARM? I'd thought that an asynchronous chip didn't have a clock speed as such.
Re:Amulet cores (Score:2)
Which is more than a non-superscalar part at 120 MHz could do.
Re:Amulet cores (Score:2)
Aha... thanks :-)
Re:Amulet cores (Score:1)
Before prime-time (Score:1)
Re:Before prime-time (Score:1)
Re:Before prime-time (Score:1)
Stranding Users... (Score:5, Interesting)
Umm... right, that's why my PocketPC 2000 Cassiopeia E115 is now as useful as a doorstop, as it has a MIPS chip in it.
When I got my PocketPC, MS touted that 'software matters', even in their publicity. Suddenly they ditch all the SH3 and MIPS users and support only ARM in PocketPC 2002. Not only that, but they won't release applications like Terminal Services and Messenger for the older machines. I see a lot of people saying that this is because PocketPC 2002 is based on CE.NET; that's not correct. PocketPC 2002 is just another revamp of PocketPC 2000, and both are based on CE 3.0. So when it all boils down, it's just Microsoft playing marketing tricks. Net result of their decision: my £450 PDA became obsolete in 18 months.
I now own a Palm.
Re:Stranding Users... (Score:2, Interesting)
If you ask the users, the current installed base of PocketPC systems is as follows:
PocketPC 2002 - 1%
PocketPC 2000 - 0.5%
PocketPC "I don't know what version" - 98.5%
We can target either of the first two quite easily, but the last operating system in the list has no programs that are compatible with it.
Re:Stranding Users... (Score:3, Funny)
Umm... right, that's why my PocketPC 2000 Cassiopeia E115 is now as useful as a doorstop, as it has a MIPS chip in it.
Sorry, there were only 1,999,999 users of that specific system, so it was below our threshold.
I would MOD this up... (Score:1)
Well, mail it to me (Score:2)
Pocket PC hw spec lockdown (Score:3, Insightful)
It could be the OS, which is the obvious answer since it's a Microsoft OS, and this is Slashdot. But I don't know. I've never tried running anything other than PocketPC OS on the iPaq, and probably never will. (It's a work thing.)
How did Microsoft become so popular? It was DOS, wasn't it? The program that ran on any x86 computer. Well, Microsoft should take a page from their previous success and allow a little more flexibility in PocketPC design. The main gripe that I and everyone else has about these gizmos is that they're locked into a 240 by 320 by 16-bit color display. That's lame, especially if one of the highlights of PocketPC is how easy it is to port your Win32 app. If you have to redesign all the screens to fit in a tiny-ass space, it's easy on the coders but hell on the systems analysts.
It looks to me like Palm have a much more open approach, they are using the same tactic that established Microsoft's dominance with DOS back in the 80s. You can get that new Sony Clie' with TWICE the screen real estate (as in pixels) of ANY PocketPC available. Kind of a no-brainer if you ask me.
Off to the solstice parade!
Re:Pocket PC hw spec lockdown (Score:1, Troll)
Sure, you can get a Sony Clie with a 480x320 screen. But why would you want to? You'd have to put up with that sad excuse for an OS. Makes a pretty kickass (and damned expensive) organizer, I'm sure. What good is a 480x320 screen if it's about the same size (in cm by cm) as the other options *and* there's no real handwriting recognition?
Personally, I still carry around a 5 year old Newton 2100u most of the time. I have an iPAQ as well, for development mostly. It has a 480x320 (lower DPI than the Clie, which means more space to write!) screen and a 162 MHz StrongARM. And a real OS, with the facilities to develop first-class apps while never touching a desktop. Seems like a no-brainer to me too. But we're on Slashdot, and it doesn't run Linux, so I'll just kick back and wait for the onslaught of "h3y j00 st00p1d mac lover fsck u && UR pee-pee-pda!!!1"
Have fun at your parade!
Re:Pocket PC hw spec lockdown (Score:5, Interesting)
The Newton 2100 kicks ass. I used Palm and Windows CE before finally trying out a Newton 2x00 series. The Newton made me swoon.
It's the best damn computing device out there, PC, PDA, or otherwise. I used to do my e-mail, my diary-keeping, my word processing, etc. on my PC in Linux, but now I even write my books and do 90% of my e-mailing on my Newton 2100 directly over ethernet. I read news on it, make travel plans on it, I have my household inventory on it (in Notion)... and I read BBC World News and Slashdot on it in Newt's Cape.
The PC only gets touched every few days. The Palm and CE devices are long gone. I only regret that Apple killed the Newton, so there won't be a color version.
Re:Pocket PC hw spec lockdown (Score:1)
Um, maybe for viewing extremely clear and vibrant JPGs, watching widescreen movies, reading eBooks that don't strain the eyes, etc., etc. The Clie NR70 is just lovely, IMHO.
Re:Pocket PC hw spec lockdown (Score:2)
The keyboard sure is badass, though. A remarkable piece of industrial design.
Re:Pocket PC hw spec lockdown (Score:2)
Synopsis of "interview" (Score:5, Funny)
Q: What could possibly have gone wrong?
A: While we acknowledge that some people's perception is of something having gone wrong, we believe that any wrongness is unavoidable.
Q: Well, some analysts say it's Intel's fault
A: We have implemented what we could implement, and don't believe there is any implementable implementation that would implement significant gains.
Q: Analysts also say it will be 2004 before the issue is fixed
A: It is too early to talk about 2004. That said, we are committed to delivering a good product.
Q: This is really bad news for the Pocket PC platform
A: Yes, it is. However, fortunately the issue is so small that this really isn't bad news for the Pocket PC platform.
Cheers
-b
Re:Synopsis of "interview" (Score:3, Funny)
Re:Seems obvious, bus speed & not enough cache (Score:2, Informative)
If there is not enough cache memory, increasing the processor clock speed will not have a positive effect on performance, because the real effective clock rate will be bound by how fast the processor can fetch data from main memory.
Re:Seems obvious, bus speed & not enough cache (Score:3, Interesting)
The main new instructions are:
- a "find first one bit in word" instruction, which helps software division and Huffman encoding
- some DSP instructions, like 16x16-bit multiply / 40-bit accumulate for filters (audio encoding, etc.)
Both of these enhancements more or less require assembly coding.
The other major architectural enhancements are branch prediction (offset by higher penalties on branch misses) and larger caches (32K dcache versus 8K, and 32K icache versus 16K, if I remember correctly).
However, the cache latency has increased from 1 to 3 cycles.
That means that when you load a value from memory and hit the cache, the compiler needs to find three unrelated instructions to execute before the result can be used in the fourth instruction after the load.
This is a severe blow if your compiler does not figure it in, and even if it tries, or if you use assembly, you often cannot find three such instructions (table walks, or under register pressure).
In the worst case (table walks, LUTs), this effectively halves your processor speed.
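A hand-waving C sketch of what that scheduling constraint looks like (whether a real compiler emits either form this way depends entirely on the backend):

```c
/* Sketch of load-use scheduling.  In the naive version each load's
 * result is consumed immediately, eating the 3-cycle latency every
 * time; in the scheduled version four independent loads are issued
 * back to back, so each result is (ideally) ready by the time it is
 * summed.  Both compute the same value. */
unsigned sum4_naive(const unsigned *t)
{
    unsigned s = t[0];  /* load, then immediate use: stall */
    s += t[1];
    s += t[2];
    s += t[3];
    return s;
}

unsigned sum4_scheduled(const unsigned *t)
{
    unsigned a = t[0];  /* four independent loads issue     */
    unsigned b = t[1];  /* while earlier results are still  */
    unsigned c = t[2];  /* in flight                        */
    unsigned d = t[3];
    return (a + b) + (c + d);
}
```

In a table walk or under register pressure there simply is no independent work to slot in, which is the poster's "worst case".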
As far as I know, the bus interface has not improved from the SA1110, and this was not too efficient to start with (does not exploit accessing a preloaded bank, cache-line has to be
Apart from that, there are some issues in the PXA silicon which I think force some time-consuming workarounds (extra cache flushes, writeback cache does not work, slow bus cycles). I would guess that these affect performance even more than the 100 MHz SDRAM clock - after all, that's about what you find in your 1 GHz+ P-III design.
However, this is only what I gathered from the datasheets; I have not yet used a PXA system, as it does not yet seem to be enough of an improvement over the SA1110 to justify a new design.
Re:Windows CE, ugh. (Score:1)
next
Re:Windows CE, ugh. (Score:2, Informative)
Bet they're focussing on battery life (Score:1)
That's not such a bad thing; most of these things run address books and sync to email. The battery is the real problem with them, not the fact that they can't encode video streams!
Sure they'll get a few complaints, but nothing like the slating they've been getting for the battery life problem.
that deserves a... (Score:2)
Well, that statement clearly deserves a +5 Funny.
Markedroid Gobbledygook (Score:1)
Re:Markedroid Gobbledygook (Score:2)
Are you talking about your average consumer, or are you talking about the average slashdot reader?
(After all, how many slashdotters would buy this product anyway?)
It's the OS (Score:2)
Interestingly, Asus, in their upcoming XScale PPC, is coming up with workarounds, such as on-the-fly automatic clock and voltage throttling [anandtech.com]. So while the XScale supports capabilities that MS is not using, the vendors are not waiting until next year for MS to get their act together.
Hopefully the vendors will also figure out a way to speed up the terrible benchmarks [pocketnow.com] of the Xscale PPCs.
Re:It's the OS (Score:2)
Re:It's the OS (Score:1)
Bonus *off* for replies.
Re:It's the OS (Score:1)
Re:It's the OS (Score:2)
Um, yes, it is new silicon. That doesn't make the benchmarks any better. It's new silicon with terrible benchmarks. What is your point? Saying it will get better later doesn't make it good now.
And while you only care about MPEGs, some people care about performance and battery life. Some people run apps that use a bit more processing power than "contact lists", even if you don't.
Bonus *off* for replies.
Re:It's the OS (Score:2, Interesting)
Re:It's the OS (Score:2)
Re:It's the OS and the Compiler (Score:2, Informative)
Re:It's the OS and the Compiler (Score:1)
Good Application for PR Rating (Score:2, Funny)
*Actual clock speed 400 MHz
Re:Another case of inflated MHz not paying off? (Score:2, Interesting)
It's simply Intel moving to a new instruction set (ARM V5) and building a (slow) emulation of the old one (ARM V4), and Microsoft says it would be horribly difficult to support two different instruction sets, so the choice was to either live with the new CPU performing slower than the old one or cut off support for the old hardware.
Hmmmmm, yet another thing (like OS modularity) that MS seems unable to do, while my Gentoo Linux does it by default. The source code to their products must be a complete and utter mess if they can't even get it to take advantage of new instruction sets without dropping compatibility.
This is not correct (Score:1)
> It's simply Intel moving to a new instruction set (ARM V5)
> and building a (slow) emulation of the old one (ARM V4),
> and Microsoft says it would be horribly difficult to
> support two different instruction sets, so the choice was to either
> live with the new CPU performing slower than the old one or
> cut off support for the old hardware.
This is not correct.
All ARMV4 instructions are implemented natively in the XSCALE core.
The XSCALE core, just as the SA1110, executes almost all ARMV4 instructions in one clock, and, as far as I remember, uses more clocks only for very few instructions:
- shift register by register (2 instead of 1)
- mul / mul-acc (extra latency cycle in some cases)
- branch miss in the added BPU
- maybe some coprocessor accesses
Except for an assembly rewrite of some inner loops in the kernel, there is not much MS can do about the memory interface, which hasn't scaled with the CPU clock.
I do not think that compiler tweaking will gain much more than 10% in performance.
Backward compatibility == poor excuse (Score:1)
Translation : "We can't or won't write portable code."
There's absolutely no technical reason they can't take advantage of the V5 enhancements while still retaining support for ARM V4 and a common code base. This must have been a business decision, but I can't fathom the thought processes which led to it.
Re: (Score:2)
Re:Backward compatibility == poor excuse (Score:1)
Yes I did, and they don't hold any water. I suggest you read your response above and consider why it doesn't make any sense (hint - the OS is not an application). Like I said, this has to have been a business decision.
Having said that, your point about MS not liking multiplatform support is spot on. MS has some very competent programmers - that's not at issue. The problem is at the corporate decision making level.
As to solving all their problems with a tarted up p-code system, well, if that's going to work at any sort of acceptable speed then they'll *really* need to optimise for the processor.
And the Linux answer to Dotnet is...? (Score:2)
Let's hope your skepticism is justified. Because if it isn't, Linux as a platform will be in very serious trouble.
Linux has no answer to cross-platform code, the one exception being Gnome with Mono. If that remains the only effort, and continues to attract hype and developer support, one day soon we'll wake up and find that the single viable open source platform to write to is under the technical direction of Microsoft.
However did this happen?
I might add..... (Score:4, Interesting)
I'm not suprised (Score:1)
Just push clock speed up at any cost - who cares about performance? It's already running Windows - so what can you expect?!
Re: (Score:3, Informative)
Re:I'm not suprised (Score:1)
What? It's true that clock speed alone isn't a valid measure, but it is certainly a very important part of the equation. Intel's "megahertz sells" paradigm is turning out to be a pretty effective strategy. They have been able to jack up their clock speeds almost at will (2.533 GHz on the P4 now, with a clear path to 3.3 GHz by the end of the year). On the other hand, AMD has been struggling all year to speed up their processors. The result is that Intel is leaving AMD in the dust [tomshardware.com]. As Tom's Hardware puts it, "the Athlon design is already a bit outdated and is now reaching its limits."
The consensus is that performance problems with the new XScale platform are because of poor software - not because of flaws in the hardware.
Re: (Score:1)
Is this a PDA? (Score:1)
People buy PDAs as cheap laptops or simple organisers. Neither needs speed.
Pocket PCs can get faster and faster while Palm OS PDAs outsell them.
The Palm OS devices are cheaper and use less power.
This is because they are slower.
It's not speed... it's memory....
Handspring Visors have memory cartridges and the Palm m500 uses media cards, so while Pocket PC devices play with added speed and don't get it, Palm OS devices get added memory.
These things are just portable databanks; they aren't for processing information, just storing it.
Want to play MP3s? Slap on an MP3 player... a sound chip with an MP3 decoder built in and some added RAM.
Want to do presentations? Slap on a presentation device.
Go on the Internet? Snap on a wireless... (unless it's built in).
Play Quake? Compile data?
Hotsync with desktop...
I'm looking for a keyboard and a wireless for my Visor (the i705 can't handle telnet) so I can use a shell account from my PDA...
I'm not going to have any real computing power on a PDA. That's not what a PDA is for.
Re:Is this a PDA? (Score:2)
Wrong.
More right than wrong. I know that the PocketPC cannot currently do anything I need that a Palm PDA cannot do.
Once PDAs are available with much much faster processors and tons of RAM, people will find new uses for them. But as things are today, given a choice between the battery life of a Palm or the power of a PocketPC, most people choose the Palm.
Do you fully understand how EVIL you are? People are DYING in hospitals due to medical errors and timing issues that could be essentially eliminated by a sufficiently advanced portable computing system.
Oh, rubbish, and shame on you. I don't believe for a second that PocketPCs (or any other single gadget) can magically solve the problems of hospitals. And I'm dubious about PocketPCs at all in hospitals; they do crash.
You are actively preventing those technologies from being developed as fast as they otherwise would be.
Wow, he sure has a lot of power to affect technological development in the world. That or else you are being insanely over-the-top.
steveha
Re:Is this a PDA? (Score:2)
If you are trolling, then grow up and go do something else.
If you are not trolling, I suggest you take a course in how to effectively communicate your ideas without being a jerk.
I figure you have to be a troll; is anyone this abrasive and annoying without working at it?
Have a nice day.
steveha
It's All a Question of Cache (Score:2, Informative)
I really doubt that (Score:1)
It's usually pretty hard to thrash a code cache to the point of it being the bottleneck. You pretty much have to deliberately write code to do that. For reference, an Athlon has a 64KB code cache, and that's running at a far higher speed than both of these ARM processors. Your figure of 50-100 million ops/second assumes an unreasonable 100% instruction cache miss rate. You'd have to have a program totally devoid of loops to achieve that. At a still unreasonable hit rate of 95% you'd still get 96% ((95*400+5*100)/40000) of full performance.
My guess why the real world performance is so bad is probably Microsoft's lack of optimization specific to the processor. There's a few trade-offs Intel have made to get the clock higher, including:
I certainly wouldn't expect to take code targeted at the StrongARM and see all of the performance increase that 200->400 would indicate. I can imagine hand-coding assembler to work around the latencies and getting near 100% performance. It's not that hard for a modern compiler to work this out either; a current-day x86 is far more difficult to target than an XScale.
Before blaming Intel... (Score:2, Interesting)
Obviously, they felt that the majority of their customers would want an ARMv5-based device. Wait a few months, and you might see some pretty impressive cell phones or Linux-based devices that use ARMv5.
The complaint against Intel is only legitimate if its ARMv5 scores are terrible. Otherwise it is the fault of the device maker for using a chip that doesn't perform well for the task at hand, or of MS for not optimising.
ARMv5 versus ARMv4 and why Intel sucks (Score:5, Insightful)
There are two separate issues here: the ARMv5 instruction set, and CPU-specific optimizations. The ARMv5 instruction set is a relatively minor architectural tweak to the ARMv4 instruction set. The names give you the impression that it's some grand change between v4 and v5; if a technical guy did the naming it would be ARMv4 and ARMv4.01. ARM is playing some games with architecture naming to protect their business position with patents, in a silly way.
ARMv5 adds a couple of new instructions over v4: an instruction to count leading zeros in a register (which a compiler would likely never use), and a better method of switching between the ARM instruction set and the 16-bit Thumb instruction set. The latter isn't relevant for PocketPC since Thumb mode isn't supported. I think v5 might have a new debugging hook as well.
The new XScale parts are ARMv5TE. The "T" is for the 16-bit Thumb instruction set, which no one seems to care about. The "E" adds some DSP-oriented instructions that are pretty interesting for media codecs and such. They are the MMX equivalent for the ARM world. They likely won't improve performance of the general-purpose aspects of the platform.
I think it's a red herring to chase Microsoft for not optimizing for ARMv5; the changes are really small and I don't see any performance impact, certainly not if you have to maintain another version for all of the StrongARM-based products.
Now, as far as CPU-specific optimizations for the PXA250 (XScale) implementation of the ARM architecture: IMHO Intel chased MHz and left behind a lot of good sense about system performance. The high-order bit is bus performance, as others have already pointed out.
In addition to the bus performance, Intel made many trade-offs to optimize for clock speed. The 7-stage pipe has a 4-clock penalty for a mispredicted branch. Compare this to the circuit-design heroics in the StrongARM that implement "all branches are 2 cycles". The XScale approach is much more complicated; it probably doesn't perform any better, but you get a high clock speed.
Intel adds clock cycles to all load/store-multiple instructions in XScale. This is a pretty big deal in ARM, since they are used in the entry and exit of most C functions, in memcpy(), and any time you are moving chunks bigger than a register.
The load-use penalty is bigger in XScale. This is a pretty big deal in ARM. The ARM instruction set is pretty compact: it is a RISC processor, but the combination of shifting operations with ALU operations makes it possible for a good compiler to generate reasonably compact code. As a result, it's harder for a compiler to put instructions between a load and the instructions that use the destination of the load. This is another trade-off in XScale that allows a higher clock speed but hurts performance otherwise.
I go on too long, but the DEC-designed StrongARM used in the SA1100 is a tour-de-force of clean implementation and balanced system performance. It's amazing that the core was designed in 1993 (I think; someone please correct me) and is still the leader for handheld apps. The Intel guys went after clock speed at the expense of everything else in XScale, and it will probably never optimize well for a platform like PocketPC.
jeff
Re:ARMv5 versus ARMv4 and why Intel sucks (Score:2, Informative)
Somebody, please mod this up because jeff is right damn it!!!
I've worked with both the SA-11x0 (StrongARM) and the PXA250 "Cotulla" (XScale) CPUs, and everything jeff says is pretty much on the money (except the CLZ instruction is far from useless; it's *awesome* for fixed-point logarithms, dude).
Also, the DSP coprocessor in the XScale is about as useful as tits on a bull for codecs with 16-bit data streams. You spend so many clocks marshalling data around to get it in and out of the thing that it's *much* more efficient to use the MAC instructions native to ARMv4 on normal registers! Even the Intel engineers who put together their IPPs [intel.com] have avoided the DSP coprocessor, since it provides no real advantage.
It's pretty clear to me the v4/v5 thing is a red herring. Let's face it, DEC was much better at putting out a general-purpose ARM-based CPU than Intel.
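For what it's worth, the fixed-point-logarithm use of CLZ mentioned above boils down to an idiom like this (a sketch using GCC's __builtin_clz, which maps to a single CLZ on ARMv5 and a software sequence on ARMv4; the helper name is mine):

```c
/* Integer log2 via count-leading-zeros -- the core of the fixed-point
 * logarithm trick.  Assumes 32-bit unsigned; undefined for x == 0, as
 * CLZ-based idioms usually are.  With GCC, __builtin_clz compiles down
 * to one CLZ instruction on ARMv5. */
static unsigned ilog2(unsigned x)
{
    return 31u - (unsigned)__builtin_clz(x);
}
```

The integer part of a fixed-point log2 is exactly this; the fractional part then comes from the bits below the leading one, once the value is normalized.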
Re:ARMv5 versus ARMv4 and why Intel sucks (Score:2)
Re:ARMv5 versus ARMv4 and why Intel sucks (Score:1)
Battery life (Score:2)
We are aware that PXA250 (XScale)-based devices are not demonstrating the huge performance gains that were anticipated. That said, Pocket PCs continue to offer the best performance and the richest functionality vs. other handhelds on the market today.
Translation: We know your new car only goes 40 mph instead of the 65 mph your old car did, but it beats a bicycle, doesn't it? (Credit to Jim S for that one.)
Even better:
I think the market expectation of what performance on a 400 MHz processor vs. 206 MHz processor has been unreasonable.
Not at all. The processor is almost twice as fast; I don't think it is utterly unreasonable to expect the product to be at least one and a half times faster.
But my question is, how is the battery life on one of these things? If it really is 12-16 hours instead of the current 8, then the XScale is still a worthwhile bet.