Linus Torvalds Discusses Intel and AMD's New Proposals for Interrupt/Exception Handling (linuxreviews.org) 149
"AMD and Intel have both proposed better ways of doing interrupt and exception handling the last few months," reports LinuxReviews.org.
Then they share this analysis Linus Torvalds posted on the Real World Technologies forum: "The AMD version is essentially "Fix known bugs in the exception handling definition".
The Intel version is basically "Yeah, the protected mode 80286 exception handling was bad, then 386 made it odder with the 32-bit extensions, and then syscall/sysenter made everything worse, and then the x86-64 extensions introduced even more problems. So let's add a mode bit where all the crap goes away".
In contrast, the AMD one is basically a minimal effort to fix actual fundamental problems with all that legacy-induced crap that are nasty to work around and that have caused issues...
Both are valid on their own, and they are actually fairly independent. Honestly, the AMD paper looks like a quick "we haven't even finished thinking all the details through, but we know these parts were broken, so we might as well release this".
I don't know how long it has been brewing, but judging by the "TBD" things in that paper, I think it's a "early rough draft"."
In the article (shared by long-time Slashdot reader xiando), LinuxReviews.org summarizes the state of the conversation today: Torvalds went on to say that while AMD's proposed "quick fix" would be easier to implement for him and other operating-system vendors, it's not ideal in the long run. Intel's proposal throws the entire existing interrupt descriptor table (IDT) delivery system under the bus so it can be replaced with what they call a new "FRED event delivery" system. Torvalds believes this is a better long-term solution...
While the pros and cons of Intel and AMD's respective proposals for interrupt and event handling in future processors are worthy of discussion, it's in reality mostly up to Intel. They are the bigger and more powerful corporation. It is more likely than not that future processors from Intel will use their proposed Flexible Return and Event Delivery system. Their next-generation processors won't; it will take years, not months, before consumer CPUs have the FRED technology. Remember, the above-mentioned technical document was published earlier this month [in March]. Things do not magically go from the drawing-board to store-shelves overnight.
Intel isn't going to just hand the FRED technology over to AMD and help them implement it. We will likely see both move forward with their own proposals. Intel will have FRED and AMD will have Supervisor Entry Extensions until AMD, inevitably, adopts FRED or some form of it years down the line.
They also note that Torvalds took issue with a poster arguing that microkernels are more secure than monolithic kernels like Linux. "Bah, you're just parroting the usual party line that had absolutely no basis in reality and when you look into the details, doesn't actually hold up.
It's all theory and handwaving and just repeating the same old FUD that was never actually really relevant."
FRED (Score:3, Funny)
Re: (Score:2)
Cuppa?
Re: (Score:2)
Bah, you're just parroting the usual party line that had absolutely no basis in reality and when you look into the details, doesn't actually hold up.
In other words, your whole argument hinges on unreality. In a world that doesn't exist, and no reasonable person expects to exist or even has a realistic plan for, in any kind of reasonable timeframe, your argument simply makes absolutely zero sense at all.
The name of the forum is "real world tech". Not "my private fantasy tech"."
Ah, Happy Fun Linus is back :-).
Intel? Bigger and more powerful? (Score:2)
Haven't they read the news these past years?
That's like saying that huge dinosaur that got its head bitten off by the smaller dinosaur is bigger and more powerful because it's still in motion and warm. ;)
Sure, it's Intel. They might grow a new head over time. But things still have changed quite a bit.
Also, Intel is mostly bigger because they still make (most of) their own chips, and because of monopolism. The former doesn't count, as by that logic, you could add GlobalFoundries and part of TSMC to AMD, and t
Re: (Score:3)
I am a huge AMD fanboi, but you do know Intel sells a gazillion chips in markets where AMD is still hoping to get to higher single-digit market share
Re:Intel? Bigger and more powerful? (Score:4, Funny)
AMD has 100% marketshare of Ryzen CPUs.
Re: (Score:2)
So it's okay to check profits and bank accounts when it's about Intel, but not okay when it's about Apple. Slashdotters have weird double-standards.
Re: (Score:3)
Sounds like AMD is doing pretty good.
Re: (Score:2)
So congratulations at clutching at straws and failing at math I guess
It says, as it fails at reading.
Re: (Score:2)
I'm personally a bit shocked that AMD hasn't seen a major uptick in the DC.
Just last week, I was working on a new deployment for a Fortune 500 company. Still Intels.
Part of me wonders if it is about trust.
We had too many problems with our AthlonMPs, and later Opterons. The Intels were simply more stable and less picky.
I would love to play around with some new high-core-count AMD stuff, but I'm also not sure I'm ready to put one in production after all the work it was to get shit of
Re: (Score:2)
"I haven't seen it personally, so it's UNPOSSIBLE!!!"
Didn't say it wasn't possible. Said I didn't believe him.
AMD's DC market share almost doubled last year.
When you have a dollar, it's quite easy to double your money. When you have a million, well that's a different story, now isn't it?
Point being, AMD doubled their market share... and is still in the single digits.
That means for every DC server purchased, 9 were Intel, and 1 was AMD. Even after their doubling.
Maybe pull your head out of your ass once in a while, and you might notice these things.
Maybe go to school, and you could be a better shill.
Re: (Score:2)
You can always tell when someone has lost an argument. They resort to playing games of semantics.
Pointing out your strawman isn't semantics.
No, that wasn't your point. Here's what you said, you dumbfuck:
Yes. I did say that, and I'm correct.
DOUBLING your market share in 1 fucking year most definitely constitutes a "major uptick" by anyone's definition.
Only if you're comparing to yourself. Compared to the market as a whole, then no, it does not constitute any such thing.
HAHAHAHA!!! Let's see here..."The Intels were simply more stable and less picky." Dude, if anyone's a shill here, it's YOU.
No shill. Just a professional in the field. Being that the market reflects my opinion, maybe you're the one that's out of touch?
Christ, what a fucking moron. A moron who doesn't mind repeatedly proving he's a moron on a public forum.
Doubtful. Troll harder, kiddo.
FRED is overloaded (Score:3)
FRED already refers to a whole bunch of stuff, least relevantly the freespace mission editor and most relevantly the Fucking Rear End Device that replaced cabooses.
x86 is a dead horse (Score:2)
Re:x86 is a dead horse (Score:4, Informative)
Sadly it's the best horse we have.
The tie with the IBM PC compatible platform guarantees that the bootloaders, standard devices, etc. are all in the same place, instead of the DRM hell that is ARM devices, where they can make every system boot and behave a different way, making efforts like Linux impossible on them.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
I think that would be Z80a's point, that ARM has more incompatible implementations because it provides licensees so much freedom. This isn't such a problem for applications on ARM as it is for packaging a distribution.
It has historically been hard to just have a generic linux distribution that worked on 'arbitrary' ARM device.
Now ARM systems that are aarch64 with UEFI are getting to be pretty comparable to the x86 ecosystem in not requiring so much fiddling in the server space, and can credibly be supported with g
Re: (Score:2)
Not UEFI, it's fucking awful; IEEE 1275-1994 would be wildly superior.
Re: (Score:2)
It has historically been hard to just have a generic linux distribution that worked on 'arbitrary' ARM device.
Considering the wide range of ARM devices for a wide range of uses, it would be comparable to x86 devices if you include 286 to modern day x86-64 Core i9s. There was never one generic Linux distribution that works on that entire lineage.
Re: (Score:2)
It has historically been hard to just have a generic linux distribution that worked on 'arbitrary' ARM device.
Considering the wide range of ARM devices for a wide range of uses, it would be comparable to x86 devices if you include 286 to modern day x86-64 Core i9s. There was never one generic Linux distribution that works on that entire lineage.
I think that's a bit excessively hyperbolic. Linux doesn't run on the wide range of ARM devices, even with specific distributions, because a lot of them aren't powerful enough. For example, there's lots of ARM cores in the Arduino-level space, in the double-digit MHz speeds. You might compare one of those to your example 286, and they are many times more powerful than that chip so the comparison is reasonably apt.
Re: (Score:2)
It's not analogous.
Even today, there's no such thing as standardizing on Arm, even though they *are* getting better - you write code for a specific SoC.
MMU and interrupt handling are at least in the specs, now.
Re: (Score:3)
It has everything to do with ARM because anyone building a system with ARM is free to use whatever proprietary BS they want to boot it.
x86 means BIOS/UEFI. GP is exactly right, in a practical sense.
Re: (Score:2)
It has everything to do with ARM because anyone building a system with ARM is free to use whatever proprietary BS they want to boot it.
Bahahahaha. Have you ever heard of Intel's EFI? You know, Intel's proprietary EFI bootloader? You and he both seem to ignore that x86 can have proprietary BS too.
x86 means BIOS/UEFI. GP is exactly right, in a practical sense.
No it does not. See Intel EFI [wikipedia.org]: "Intel's implementation of EFI is the Intel Platform Innovation Framework, codenamed Tiano. Tiano runs on Intel's XScale, Itanium, x86-32 and x86-64 processors, and is proprietary software, although a portion of the code has been released under the BSD license . . ."
Re: (Score:2)
They have to pass an Arm compatibility suite for the architecture they licensed.
However- that doesn't include the boot process, so have fun, boys.
Re: (Score:2)
Uhhh, as a firmware engineer I can't comprehend WTF you're even talking about.
I assume you're talking about cell phones or something? Rather than processors? You know that Android is Linux, right?
Re: (Score:3)
I'm talking about how x86 is specifically tied to the IBM PC compatible platform, which is quite well known and standardized, while ARM is not tied to any platform in particular, as Android, MacOS etc. can be rebuilt by the manufacturers.
Re: (Score:2)
If you buy a random x86 system you can be almost guaranteed that there is a UEFI firmware with reasonably well supported ACPI. This is true even if you buy extremely low end implementations (Raspberry Pi-style and below) or extremely expensive ones (Juniper MX routing engine). There is a good chance that an unmodified Linux distribution will boot on them, and if it doesn't, the changes required are usually tiny. Drivers can be hit and miss, but SSD and screen and keyboard and so on will work, perhaps with
Most CPUs don't exist? (Score:2)
> ARM devives, where they can make every system boot and behave a different way, making efforts like linux impossible on em.
The majority of the CPUs in existence are ARM chips running Linux.
Re: (Score:3)
A proprietary binary blob created by the corporation that manufactured the phone. Running your own build on it is a chore if not nearly impossible.
Here's the full source. It's GPL, after all (Score:2)
Linux is GPL. Which means that manufacturers such as LG *have* to provide the full source and makefiles for the exact version they put on the phone, including any modifications they make.
For the example of LG, you can put in any model number here and download the full source, then build.
http://opensource.lge.com/osSc... [lge.com]
Very precisely the opposite of a "proprietary binary blob".
Re: (Score:2)
instead of the DRM hell that is the ARM devives, where they can make every system boot and behave a different way, making efforts like linux impossible on em.
What are you talking about? Raspberry Pi, Android are just two examples of ARM on Linux. Just because an implementation on ARM can have DRM does not mean it must have DRM.
Re: (Score:3)
You can run Linux on ARM of course; it is even what Android basically is. The difference is that you're vendor-locked, unlike with the PC platform.
You can't make a Linux ISO that works with all the Android devices, for example, because there's no standard platform tied to the CPU architecture.
Re: (Score:2)
You can run Linux on ARM of course; it is even what Android basically is. The difference is that you're vendor-locked, unlike with the PC platform.
Again, what the hell are you talking about? Please tell me how I am vendor locked on a Raspberry Pi. Or on my ARM based home router. On Android you can install Linux on Android [tuxphones.com] with Ubuntu being the most popular.
You can't make a Linux ISO that works with all the Android devices, for example, because there's no standard platform tied to the CPU architecture.
Errr what? I have no idea what you are saying. No version of a Linux ISO works on every x86 device. No version of a Linux ISO works on [insert platform here], including MIPS, Power, ARM, whatever. Your complaint betrays a fundamental misunderstanding of Linux.
Re: (Score:3)
Well, your Raspbian image isn't going to install very well on a Mediatek-based platform. Going a bit upscale to UEFI-equipped aarch64 servers, and now you are talking about an area where distributions do tend to be pretty viable across multiple vendors. I do think ARM is well on its way to an x86-like ecosystem where well-defined standards are the norm, enabling cross-vendor compatibility, but there's a lot of lingering bespoke, incompatible solutions. One brand new ARM device I have w
Re: (Score:2)
Well, your Raspbian image isn't going to install very well on a Mediatek based platform.
And I never said that it would. I am merely contesting his point that somehow my Raspberry Pi is vendor locked.
I do think ARM is well on its way to an x86-like ecosystem where well-defined standards are the norm, enabling cross-vendor compatibility
Considering the wide range of capabilities of stock ARM cores, that is a hard ask, as ARM chips run ultra-low-power computing devices, from watches to advanced smartphones. Some vendors still use 32-bit processors because it suits their needs.
ISO images from all major distributions are able to pretty much work on any x86 based system commonly sold in the last 15 years
ANY? I would bet you an x86 distro does not work on an x86-64 machine and vice versa. 15 years ago, Intel had just released their Core 2 microarchitecture which wa
Re: (Score:2)
I said 15 years ago to specifically catch the era where the vast majority of chips would be x86_64 capable, and the modern distros frequently are x86_64 focused for Intel. Yes, you can find pretty tortured examples to defy it (a brand new 32-bit pentium clone that is embedded and usually runs freedos), but the general purpose x86 platforms don't require the user to give a second thought as to whether the OS installation image they have will work or not.
Sure, ARM is seen in microcontrollers, but even ignori
Re: (Score:2)
An x86 distro will work fine on an x86_64 machine; they are all backward compatible and can run x86 code.
An x86_64 distro won't work on an x86.
Of course this is ignoring specific driver issues. Newer equipment may require a newer kernel for driver support.
Re: (Score:2)
Re: (Score:2)
He's talking about the fact that while you can make a boot image for a Samsung G-500 Supergood Phone Plus G5, you can't make a boot image that'll work on that, and all the other Samsung phones, and the Raspberry Pi.
Considering that the two devices use vastly different CPUs and hardware capabilities, why would anyone assume you could use the same boot image? Even if we talk about just the Raspberry Pi, early generations used 32-bit ARM processors.
Whereas you can make an ix86/amd64 image that'll boot on virtually all modern PCs, whether made by Dell or HP, whether they contain AMD or Intel chips, whether they're in a mini PC style form factor or a full tower, or a laptop, and so on, as long as that image includes the relevant drivers. This is achieved via standardized firmware (including a standardized booting mechanism) and standardized hardware allowing standardized methods for probing the available devices.
But you are missing that the x86's last major advance was x86-64 and that required maintaining 2 versions for years because consumers could have 32-bit or 64-bit processors. Today vendors can still use 32-bit ARM processors in their devices whereas AMD and Intel do not make 32 b
Re: (Score:2)
What are you talking about? Raspberry Pi, Android are just two examples of ARM on Linux
Actually, they're examples of Linux on ARM.
Raspi still has binary blobs and Android devices pretty much all have closed drivers, however, so they are still examples of platforms which are not fully open.
Re: (Score:2)
But there are significant differences still. Enabling A20, an operation dating back to the 80s, can be done by one of 4 different incompatible methods. And you don't know which one would work. So you have to try them all.
Then t
Re: (Score:2)
So, is it a dead horse or a milking cow?
Re:x86 is a dead horse (Score:4)
The death of x86 has been greatly exaggerated.
Basically people have got awfully excited that the M1 is competitive with x86 in one niche (mid range laptops, essentially). If you want something faster for general computing, x86 is still your best bet.
Re: (Score:2)
Basically people have got awfully excited that the M1 is competitive with x86 in one niche (mid range laptops, essentially). If you want something faster for general computing, x86 is still your best bet.
Many, many reviewers of the M1 Macs have said that the M1 is very competitive for general everyday computing. It is not competitive for niche uses like video productivity in some cases. In some cases, like h264/h265 encoding it actually beat older Intel Macs.
Re: (Score:2)
Well, a mid range laptop is ok for everyday computing, so I guess we agree?
Re: (Score:2)
Re: (Score:2)
Well, its use in low-end laptops is completely unproven. Apple don't sell low-end laptops, and the chip uses HBM, which is pretty pricey stuff but offers higher bandwidth and much lower power draw than DDR4.
And yes I forgot about low end desktops. Good point.
Re: (Score:2)
It's a Gucci chromebook.
Re: (Score:2)
Anything with 16GB is a low end laptop, sorry.
No. This is (a) snobbery and (b) unrealistic. You can still buy laptops with 4 and 8 G, and 64 is about the max you get in anything less than a mobile workstation (I hesitate to call those beasts laptops unless you own asbestos underpants). If you care about mobility, then 16G is the max you can get in a lightweight laptop, even with a good CPU.
A 16G laptop is fine for even mid sized dataset analysis, hence mid range.
Re: (Score:2)
The "ranges" are entirely subjective.
You can still buy laptops with 4 and 8 G, and 64 is about the max you get in anything less than a mobile workstation
I hesitate to call something with 4 and 8GB a laptop. That's an iPad with a keyboard.
As for mobility... 64GB (and 128GB) laptops are quite mobile. Sure, they're heavy. But tools generally are, no?
(I hesitate to call those beasts laptops unless you own asbestos underpants)
Quite right. I wouldn't put any of my beefier laptops on my lap for any kind of real use. They're too heavy.
Heat actually isn't that big of a problem. They generally have very adequate airflow, so the chassis itself doesn't get that hot. The person or
Re: (Score:2)
Of course it's snobbery.
The "ranges" are entirely subjective.
Snobbery is stupid. It sounds like you're defining midrange as capable of "real work" where you alone define what "real work" is and anything less memory-intensive isn't "real work". This is also completely blinding you to the idea that people have other use cases than you.
I hesitate to call something with 4 and 8GB a laptop. That's an iPad with a keyboard.
A low spec laptop is still a laptop. Can still run arbitrary code, run whatever OS you can inst
Re: (Score:2)
Snobbery is stupid. It sounds like you're defining midrange as capable of "real work" where you alone define what "real work" is and anything less memory-intensive isn't "real work". This is also completely blinding you to the idea that people have other use cases than you.
While it sounds to me like you're defining "midrange" as capable of your "real work" (what was that about analyzing "mid sized" data sets?) ;)
It's not blinding, it was me pointing out that what you define as mid range is entirely subject to you.
It looks like we agree here
A low spec laptop is still a laptop. Can still run arbitrary code, run whatever OS you can install etc. You can easily compile code on an 8G laptop, etc. You benefit from lots of RAM if you're working with large codebases, but not everyone is.
It is still a laptop. And so is a chromebook.
Portable. n. has wheels and not technically bolted to the floor. Also, see crowbar.
Unsure what you're trying to say. That a laptop that may weigh 2-3kg more than a lightweight laptop is only mobile out of technicality?
lolwtf no. Tools have a purpose, and bigger and heavier is not necessarily better.
No one said better. You inserted that word entirely of you
Re: (Score:2)
No one said better.
You are literally saying that below:
Sure, but would you be annoyed if your 2kg hickory handled club hammer was 5kg, and capable of much more?
This definitely qualifies as "not even wrong". The 5kg sledge isn't capable of more, it's capable of different. There is very little overlap between the tasks I would accomplish with a 5kg long handled sledge and a 2kg short handled club hammer. No one even makes such a thing; I've never seen a club hammer over 2 kilos.
So no: I would be very annoyed
Re: (Score:2)
You are literally saying that below:
Only if we're using the millennial definition of literally.
I literally said heavier was more capable. Is that not the case?
Better is putting words in my mouth.
More capable isn't remotely synonymous with better. I'll trust I don't need to provide examples to demonstrate that logic.
This definitely qualifies as "not even wrong". The 5kg sledge isn't capable of more, it's capable of different. There is very little overlap between the tasks I would accomplish with a 5kg long handled sledge and a 2kg short handled club hammer. No one even makes such a thing; I've never seen a club hammer over 2 kilos.
You missed the point of the hypothetical.
It wasn't a matter of what actually exists.
Ultimately, a hammer is a hammer, which makes it a terrible analogue for a computer- and that's on you. Perhaps it's partially my fault for in
Re: (Score:2)
I literally said heavier was more capable. Is that not the case?
I don't know what misunderstanding about tools would lead you to believe that heavier hammers are "more capable".
Tell you what, since you don't believe me try the following:
1. Buy a 5 kilo sledge
2. Attempt to put in a panel pin
A heavier sledge is not "more capable" because there are jobs that are essentially impossible due to its size and weight. Not to mention the steel head is completely unsuitable for many tasks and would in fact completely
Re: (Score:2)
I don't know what misunderstanding about tools would lead you to believe that heavier hammers are "more capable".
You conveniently skipped over:
You missed the point of the hypothetical.
It wasn't a matter of what actually exists.
Ultimately, a hammer is a hammer, which makes it a terrible analogue for a computer- and that's on you. Perhaps it's partially my fault for indulging your bad analogy.
I suspect it was no accident. You love beating on dead horses.
No, it is not the case.
Except it is, in the context with which we were speaking.
Right: me providing examples of how a 5kg sledge is not the most capable hammer in the world is not a straw man. No one even makes 5kg single hand hammers because such a device is completely useless, not "more capable" as you keep insisting.
Sure it is. It's an attempt at building up an argument you can win that isn't remotely relevant to the logic train we were actually arguing.
So desperate are you to continue trying to lift this poor man of straw, you won't even acknowledge that I conceded that your analogy was ridiculous, and I was silly for indulging it.
ORLY? Allow me to quote the original thing I was replying to:
Anything with 16GB is a low end laptop, sorry.
That's "anything" not "any mac".
OK, that's fair. Rather, I should have s
Re: (Score:2)
So yes, your $1500 16GB "ultrapremium" laptop is a Gucci Chromebook.
In addition to tools (specifically hammers) it appears you also have no grasp of fashion or chromebooks.
One wears Gucci because (charitably) you want to look nice and (less charitably) because you want others to know you wear Gucci. Reality for any one person will lie between those extremes. Lenovo isn't a fashion brand: you don't buy a Lenovo because you want to show off. I mean you might, but it'd be a pretty odd choice unless you wanted
Re: (Score:2)
Re: (Score:2)
New set of bugs (Score:2)
Re: (Score:2)
This is what undermines the "Intel is big, they can do whatever they want" argument. AMD is expected to reach 20% server market share by the end of this year. By the time this is in products, Intel is going to have a hard time forcing anybody to do anything. They'll have to convince people to support it, and if they try to make it proprietary, that's going to be harder to do.
Re: (Score:3)
Maybe Itanium IA-64 would have been better (Score:2)
Re: (Score:2)
You may want to give the ATtiny84 a try.
Re: (Score:2)
Itanium did have a backwards-compatibility subsystem, but in the first 733 MHz 64-bit chips it was equivalent to a 100 MHz Pentium. One of many things on the long list of failures and poor performance that doomed Itanium to be killed by the x86-64 chips of Intel and AMD.
Re: (Score:2)
I always wondered who bought Itanium-based systems. The only reason seems to be that you needed OpenVMS on modern hardware. I occasionally check eBay because Itanium seems interesting from a collector's standpoint, but the prices are insane. Wonder how many Itanium boxes are still out there running.
Re: (Score:2)
> the prices are insane. Wonder how many Itanium boxes are still out there running.
Legit, some people threw their Itanic boxes off the roofs of their office buildings - the supply must have rapidly diminished. There must be an old story on here about it.
They say it was epic.
Re: (Score:2)
A new CPU that eliminated all the backward compatibility may have been a better choice. Never actually used one, but who knows.
The Itanium VLIW architecture was dependent upon a magic compiler to produce good performance, and Intel was never able to produce a compiler which produced performance in line with their promises. There was a lot of hope that they would manage it since their x86 compiler was so very good at optimization, but it never happened and Itanic died.
Writable Code Segments More Egregious (Score:2)
Isn't Intel the root cause of the problems? (Score:2)
Why should we believe Intel's "solution" will be any better?
Besides, 10 years from now - I'm betting/hoping Intel is (at most) just a fab provider, and no one will be running Intel's architecture anymore.
What it all means... (Score:2)
Re: (Score:2)
I guess, you can expect people to buy a new phone to run a new version of the OS, but I doubt people would want to buy a new $10k server just to run a new version of the OS.
Which means that now the OS and software will have to support lots of different CPUs that are incompatible with one another.
At least with x86, I can run a new version of Linux on an old device or, if I need to, an old version of Linux on a new device.
Re: (Score:2)
Or I use Linux and other open source software because I don't need to pay for it. No contracts, sometimes buying used hardware, etc.
Being able to run the latest distro on a 10-year-old CPU is great. Not everything requires the performance of a brand new server, and a used one that is good enough for the task costs a few times less.
Re: (Score:3, Interesting)
Meh, ARM deprecates stuff and changes things when needed. They have enough legacy crap to deal with too, especially the high performance "desktop class" models like the M1.
Re: (Score:2)
ARM is only 10 years old. Repairing structural damage is much easier in infancy than after, say, puberty.
Re:Compat (Score:5, Informative)
ARM dates back to the mid 1980s.
Re: (Score:3)
Thank you for the correction. I saw a reference to the more recent architectures started in 2010 and mistook it for the start of the architecture altogether. Given the longer history, I'll have to take back the "pre-pubescent" crack.
Re: (Score:2)
ARM processors go back before 1987's Acorn Archimedes [wikipedia.org] system. The ARMv8 iteration of the architecture is only 10 years old, but it maintained broad compatibility with older versions of the ISA.
Nope - ARM is 1983 (Score:2)
While ARM first saw widespread use in 1987 in the overhyped Archimedes, it is actually a lot older. The first publicly available systems date back to 1985, the first demonstration system to 1983, and the core concepts are from the late 1970s.
And talk about legacy support... ARM at first put the status bits into the address register. WTF, how stupid was that?
Technically, ARM and Intel are only separated by five years at best. One has been 43 years in use, the other 38. Not much of a difference.
Re: (Score:2)
So the difference between 1987 and late 70's is "a lot older" but the difference between 43 years and 38 years is "Not much of a difference"? 8^)
Compat-price of change. (Score:3)
Which is more expensive? Backwards compatibility, or paying to rewrite everything? Remember the bill is coming out of your pocket so think carefully.
Re: (Score:3)
If your engineering department or ISV tells you that a CPU architecture change means you need to rewrite everything, I think you need to reevaluate what manure exactly they are feeding you.
That's not to say that migrations don't have costs. I've done a few 32-to-64 migrations, and a handful of CPU arch and one project that moved off gcc-based tooling to clang/l
Re:Compat (Score:5, Funny)
Never underestimate the value of backwards compatibility.
But as soon as FRED is implemented, it will be discovered that to make it work, a function named WILMA is also necessary to housekeep what FRED produces; then a messaging pipe system named PEBBLES will be implemented, and a priority system named BAMM-BAMM will take care of the messaging queue prioritization.
Re: (Score:2)
And collectively they will reintroduce all the stone age bugs we all know and love.
Re: Compat (Score:2)
Re: Compat (Score:2)
Re: (Score:2)
That's the queue overflow garbage collector.
Re:Compat (Score:4, Interesting)
Off-topic anecdote, but whenever I need a small set of variables or data to test a concept I always start off with Fred, Barney, Betty and Wilma. I'm sure that says something about my age, and it makes me wonder what the go to variables are for different age groups.
Re: Compat (Score:2)
Foo bar baz garpley
Re: (Score:2)
I've seen Rarity, Twilight_Sparkle, Fluttershy, Rainbow_Dash, Applejack and Pinkie_Pie, beat that!
Re:Compat (Score:4, Funny)
One day I'm asked to make a tiny adjustment to a "read-print" reporting program that had been produced for the finance team, written in COBOL-85. I dived directly down to the Procedure Division to get a handle on how the program had been laid out, found a simple bit of business logic and some reasonably sensible "PERFORM...VARYING" structures. Dug in to one of these, only to find, in some pretty weird-looking logic:-
ADD HEINZ_BEANS TO JB GIVING SMELLY_FARTS.
Didn't say anything, found the lines that needed patching; tweaked; compiled; away we go. Later on I found the lady who had most recently compiled that particular program and did a pretty poor and un-subtle job of asking her for her views on Heinz baked beans. Judging by the color she turned, I reckon I found our phantom editor.
People rate more modern languages like Java and PHP, but you could always have a lot of fun with COBOL if you thought about what you were doing...
Re: (Score:3)
Never underestimate the value of backwards compatibility.
But as soon as FRED is implemented, it will be discovered that to make it work, a function named WILMA is also necessary to housekeep what FRED produces; then a messaging pipe system named PEBBLES will be implemented, and a priority system named BAMM-BAMM will take care of the messaging queue prioritization.
Apparently there is a new ALU called YABBA-ADDER-TWO.
Re: Compat (Score:4, Interesting)
Re: (Score:3, Interesting)
It reminds me of a little multicore story about microcontrollers, that I'm sure might sound like sour grapes, but only because it is.
Years ago Parallax tossed a new microcontroller architecture out there, one I loved instantly yet very quickly it was obvious I was in the minority.
It was called the Propeller, and was a true 8-core microcontroller with a set of per-core and shared resources designed around this fact.
Notably, the single most complained about fact was the complete lack of hardware interrupt sup
Re: Compat (Score:4, Informative)
The embedded IoT case may be different. But for almost every other computational device that does multitasking or has OS / userspace separation, there is a need for the OS to be able to kill / pause a userspace process without active consent from that running process itself, unless you want to go back to the old days of cooperative multitasking in pre-NT or pre-OSX times, in which any userland program without administrative rights could hang the computer until a power reset (Ctrl+C and Ctrl+Alt+Del are interrupts, so you don't have them if your architecture doesn't have interrupts). Such a kill / pause capability of the OS over userspace processes is, by definition, an interrupt.
Your dream architecture may stop all hardware inputs from issuing interrupt signals and have an OS thread do constant polling of them instead. But the software interrupt system is here to stay. Another matter is that if a CPU core is dedicated to doing only hardware polling and nothing else, it may be more power efficient and / or cost efficient to reduce that core's functionality. Such a dedicated core doesn't need cryptographic capability, floating point capability, SIMD / vector capability, or even integer multiplication / division capability. And then we may name this dedicated core... interrupt handler. Meanwhile, all other general purpose cores can spend their useful time not on waiting for hardware events but on number crunching, whether that is folding proteins or mining bitcoins, until that dedicated core / interrupt handler finds a new hardware / OS event waiting to be handled.
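A rough sketch of that dedicated-poller idea (hypothetical names, with Python threads standing in for cores and a queue standing in for the hardware event lines - not a real interrupt controller):

```python
import queue
import threading

events = queue.Queue()   # stands in for the hardware event lines the poller watches
results = []
done = threading.Event()

def interrupt_handler():
    """The dedicated 'core': does nothing but wait for events and dispatch them."""
    while True:
        ev = events.get()        # parks this thread instead of busy-spinning
        if ev is None:           # sentinel: no more events
            done.set()
            return
        results.append(f"handled {ev}")

# Start the dedicated handler; the "general purpose cores" are free to crunch numbers.
threading.Thread(target=interrupt_handler).start()

# Simulated hardware events arriving over time
for ev in ("keyboard", "disk", "timer"):
    events.put(ev)
events.put(None)
done.wait()
print(results)
```

The general-purpose work never has to check for events itself; everything funnels through the one handler, which is the division of labor the comment describes.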
Re: (Score:3)
The "core" might not even be running.
Take the HLT instruction. For a single purpose embedded system, it is not unreasonable to just halt the CPU and wait for the next event to happen.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Why keep the CPU up and waste power when all can be done at very low power in hardware?
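HLT itself is a privileged, x86-specific instruction, but the same "sleep until woken" pattern can be sketched in plain Python with threading.Event: the waiting thread is parked by the scheduler rather than burning cycles in a spin loop (a simulation of the idea, not real halt-state code):

```python
import threading
import time

wake = threading.Event()

def device():
    """Simulated hardware: raises its 'interrupt' once some work completes."""
    time.sleep(0.1)
    wake.set()

threading.Thread(target=device).start()

start = time.monotonic()
wake.wait()   # like HLT: the thread consumes no CPU until the event fires
elapsed = time.monotonic() - start
print(f"woken after {elapsed:.2f}s without busy-waiting")
```

Replace `wake.wait()` with `while not wake.is_set(): pass` and you get the busy-polling alternative the comment argues against: same behavior, one core pegged at 100%.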
Re: Compat (Score:4, Informative)
That's what Apple have done with ARM... The 64-bit ARM spec considers backwards compatibility with 32-bit ARM an optional feature, and Apple don't implement it on their CPUs.
It's why the current model iPhones won't run older 32-bit apps, and why you can run a 64-bit Linux/Android/Windows VM on the M1 but can't run 32-bit apps inside it, despite those OSes having support for doing so. MacOS never existed for 32-bit ARM, so there is no such software.
Re: (Score:2)
Intel WILL be a dick about it in any way they can get away with.
Intel's chips that support the new behavior won't be available for years, and AMD's fix is probably much easier than Intel's, so it may well make sense to implement both.
Re: (Score:3)
Further, they'll be very weird about it and only have it in some segments of their portfolio but not others.
See also, ECC, AVX512, Itanium...