Qualcomm Now Owns Nuvia, Aims New CPU Design Resources Directly At Apple (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Qualcomm has wrapped up its $1.4 billion acquisition of silicon design firm Nuvia, a move that will lead to in-house Qualcomm CPU designs. The acquisition should allow Qualcomm to compete with Apple's silicon division and focus on pushing bigger, better ARM chips into the laptop market. The deal was announced in January 2021. Don't feel bad if you've never heard of Nuvia; the company was only founded in 2019 and has never made a product. Nuvia was focused on building server chips, but Qualcomm seems mainly interested in the engineering pedigree here, since the company was founded by three high-ranking engineers from Apple's silicon division. Nuvia's CEO, Gerard Williams, formerly Apple's chief CPU architect for nearly a decade, is now Qualcomm's SVP of engineering.

Apple is famously in the process of dumping x86 Intel CPUs in order to roll out in-house ARM architecture designs across the company's entire laptop and desktop lines. Qualcomm wants to be here to sell chips to all the PC vendors that want to follow suit. Qualcomm's press release immediately aimed its new design resource at the market Apple is upending, saying, "The first Qualcomm Snapdragon platforms to feature Qualcomm Technologies' new internally designed CPUs are expected to sample in the second half of 2022 and will be designed for high-performance ultraportable laptops." The call-out that this acquisition will lead to "internally designed CPUs" is a big deal, since currently, Qualcomm only ships lightly customized, off-the-shelf ARM CPUs.

  • ... Apple products will get one cent cheaper from this enhanced support :(
    • Apple is betting consumers will follow them, lemming-like, over the cliff into non-standard hyper-expensive Apple-tax land again.

      How well did this work out for them last time?

      Oh yeah, they came crawling to Intel after acknowledging that the economies of scale applied across the entire PC landscape meant their customized, go-it-alone shitboxes were less powerful, had fewer capabilities and were being outcompeted by the 'inferior' Intel-based PC industry.

      Only an Apple fanboy thinks the M1 is a good idea. I

      • "go-it-alone shitboxes" - hahahahaa, i love that term! My bff is enjoying the M1, after 2 weeks of hardcore hacking to get his dev env up. I guess he is a bit of a fanboy, at that :P
  • Qualcomm is taking aim at Apple's CPUs, but the problem is that Qualcomm doesn't release any information about their chips without an NDA. That isn't a problem for people making hardware, but it is a serious problem if you want to actually run Linux on the damn thing. Either they will need to release a sufficient amount of information or they are going to have to submit patches themselves.

    Let's all hope someone at Qualcomm does a hit of acid and suggests making a RISC-V implementation.

    • Qualcomm doing RISC-V wouldn't change anything. You'd have a relatively well documented core, still surrounded by a thick blanket of proprietary, low-disclosure auxiliary processors and interfaces.

      There needs to be a serious culture change before their CPUs are desktop-ready.

  • by monkeyxpress ( 4016725 ) on Wednesday March 17, 2021 @08:36PM (#61170470)

    There must be something else going on here. The Nuvia CEO (Gerard Williams III) is the same guy Apple is suing for stealing its employees to go start up this new company. By buying this business, Qualcomm will be getting tangled up in that lawsuit, since it appears that almost all the value in the company comes from the people who left Apple.

    Why would Qualcomm want to get involved in this when it has only recently stopped litigating against Apple? Maybe it felt it got a raw deal in the Apple lawsuit (each board might have realised it was only going to enrich the lawyers, so they called time), and is now going after Apple in a different way to make it hurt.

    Qualcomm is not exactly a slouch when it comes to chip design. I can't imagine Nuvia is sitting on some new discovery that is not encumbered by their fight with Apple, so it's a lot of money to pay for three employees.


    • Qualcomm hasn't really done any serious micro-architecture design, and the whole process of getting a core up from scratch typically takes 4-5 years. They can cut two years off that by buying Nuvia. The M1 has them scared witless. Another potential legal tussle with Apple be damned; they need a chip to answer it, and they need it now.

    • by AmiMoJo ( 196126 )

      They probably factored some legal costs into the deal. There is little chance of Apple winning, I think; it seems well established that people can leave and start their own rival company in California. The precedent goes all the way back to the '50s.

  • by Pizza ( 87623 ) on Wednesday March 17, 2021 @08:37PM (#61170474) Homepage Journal

    Qualcomm was all-in on their last acquired server-focused Centriq CPU group... until suddenly they weren't, shuttering the whole thing and laying a lot of them off.

    Why should we expect this to be any different?

    Even putting aside CPU ambitions, Qualcomm's track record with acquisitions tends to be more about taking out competition than about actually _using_ anything they bought.

    • by haunebu ( 16326 )

      Qualcomm doesn't acquire companies they compete with (Huawei, MediaTek, Broadcom). They typically look for companies that complement or round out their product offering. They attempted to buy NXP to get a foothold in the automotive space, but were unsuccessful. They purchased RF360 to add an RF-front end to their 4G and 5G modems. Now they've acquired Nuvia, so they can stop licensing ARM cores and create custom cores (like Apple does).

      It's not about eliminating the competition. It's to be a better competit

      • by _merlin ( 160982 )

        Now they've acquired Nuvia, so they can stop licensing ARM cores and create custom cores (like Apple does).

        They already can design their own custom cores. The Qualcomm Snapdragon SoCs use Krait [wikipedia.org] and Kryo [wikipedia.org] cores, which were in-house designs up to the Kryo 500 series. It's only with the Kryo 600 series announced late last year that they started basing them on licensed ARM Cortex designs. Maybe they want to get back to in-house designs again, but it isn't new territory for them.

  • by Ostracus ( 1354233 ) on Wednesday March 17, 2021 @08:55PM (#61170536) Journal

    Qualcomm wants to be here to sell chips to all the PC vendors that want to follow suit.

    All those dumping Intel are embracing AMD. Not switching to ARM.

    • All those dumping Intel are embracing AMD. Not switching to ARM.

      Guess Microsoft didn't get that memo for Surface...

      • by Luckyo ( 1726890 )

        You mean their n+1st attempt at making a locked-down, "Windows Store only" Windows?

        They're never getting that memo, because they want to have Apple's profit margins. But the public seems to keep slamming that memo into all of their orifices every time they release another proprietary device with crippled Windows.

        • Uh, actually no. What I meant was that Microsoft went with ARM rather than either Intel or AMD for the hardware on the Surface.

          • by Luckyo ( 1726890 )

            Yes, they did. And the reason why they did that is to lock hardware down. Which is why that version of Surface didn't sell.

  • by cas2000 ( 148703 ) on Wednesday March 17, 2021 @10:18PM (#61170796)

    The problem is not the CPUs. ARM CPUs have been fine for years, well over a decade. Reasonably fast, multi-core, low-power. All that is good, and getting better all the time.

    No, the problem is the shitty non-standardised architecture surrounding the CPUs, the "chipsets". Every single fucking ARM device has different, non-standard, deliberately weird hardware (and firmware like boot-loaders) around the CPU.

    And it's not just different manufacturers doing things differently. It's worse than that - different models from the same manufacturer can (and almost always do) have completely different chipsets, with almost no similarity or compatibility between them. And the differences aren't some systematic iterative design or evolutionary improvement; it's all based around whatever is cheap and readily available at this very moment in time.

    Sometimes this is done to force vendor lock-in on consumers, to "protect" "intellectual property", or for other deliberately anti-consumer reasons, but mostly it's done because the entire ARM industry is based around a product life-cycle of 6 months or less - churn out shit as fast as you can, and remember that you're not just competing with other companies' products, you're also competing with the team inside your own company designing next quarter's model.

    Until the ARM industry/eco-system solves this major problem, I have no interest in them except as a curiosity that one day might be useful. It would be a mistake for any end-user to treat them as anything more than a curiosity.

    PCs were and still are successful because they've always been based around standardised architecture and firmware, from the original BIOS and AT bus aka ISA bus ("Industry Standard Architecture") to the latest UEFI and PCIe, all used in extremely similar and above all else compatible ways by different manufacturers. The few manufacturers who did things differently either died out or switched to standards compatibility like all the rest (e.g. HP and Olivetti PCs in the early 80s) - even IBM failed to go against the compatibility standard when they tried to lock in their customers and regain control of the industry with their proprietary Micro Channel Architecture on the PS/2 in 1987. None of the other manufacturers wanted to license IBM's MCA or pay royalties to them, and outside of large corporates already locked into "nobody gets fired for buying IBM", no users wanted to buy them - even though MCA was superior to ISA. Instead the industry went on to improve the common architecture with EISA in 1988 and PCI in 1992 (and the short-lived VESA Local Bus, or VLB, mostly used for video cards).

    • by GrahamJ ( 241784 ) on Wednesday March 17, 2021 @10:50PM (#61170880)

      Agree. This doesn't apply to Apple, since they're integrated top to bottom, but the idea that PCs are just going to switch to ARM and everyone will go about their merry way is very naive.

      Theoretically, drivers should abstract away all the chipset differences, but Windows has a hard enough time keeping the ship together on fairly standardized machines - there's no way it will handle potentially hundreds of very different architectures.

      • Hell /. can't even handle escaping without my doing it for it :D

      • by Bert64 ( 520050 )

        The hardware on x86 systems is not really standardised beyond the instruction set and boot process...
        Some hardware may implement BIOS interfaces like VGA or ATA for basic boot-up, but beyond that you'll be using hardware-specific drivers for pretty much everything. You can expect MS to push ARM hardware vendors the same way if they want Windows support: implement a common firmware interface for basic bootstrapping, and then drivers for everything else.

        On Linux it works differently, because the kernel is open

        • by dryeo ( 100693 )

          The thing is that you can get quite a long way using hardware with generic drivers. You don't get all the bells and whistles, but you do get basic functionality.
          For example, running OS/2 on modern (decent; some of the cheapest stuff won't work) hardware: the BIOS or UEFI gets you started, and a generic AHCI or NVMe driver gives plenty good enough hard drive access, though repartitioning from the factory standard is usually required, especially currently for GPT, as OS/2 internally still has lots of 32-bit limitations, so that

          • by Bert64 ( 520050 )

            So many things don't work at all; sound and Ethernet still only work if you have drivers ported from Linux/BSD, and what does work is likely to be very slow relative to what the hardware is capable of, and only usable at all through brute force, because you're running an OS designed to run on hardware from 20+ years ago...

            The generic support for standards like VGA etc. is minimal and intended just for bootstrapping.
            You'd get a better experience installing a modern, fully supported OS on that hardware, then runnin

            • by dryeo ( 100693 )

              While it's true that a lot of stuff such as Bluetooth doesn't work at all, the only thing that is slow is the graphics, which are still much faster than old supported hardware or virtualized. Does it matter that the Ethernet drivers are ported? While the stack is running into its limits with gigabit Ethernet, it does work at about 90% of a newer stack. If you want native sound, you can plug in a USB audio card; even the newest should work, though for simple stereo sound support the ALSA port is fine.
              Anyway, the point i

    • I wouldn't be surprised if POWER came back into style. It doesn't have the front-end limitation of x86, and can be reasonably standardized.

      • I'd say that is unlikely. The last mass produced PowerPC consumer hardware was the Wii U, after which Nintendo went with ARMv8.

        Power workstations aren't cheap, if Raptor's Talos II is any guide; I can't see anyone producing OpenPOWER hardware in sufficient quantities to bring the price down to be competitive with x86 and ARM.

        • IBM, through the OpenPOWER Foundation, has fully opened the ISA and has promised to license any compliant implementation. They have released the Microwatt, A2I, and A2O HDL. LibreSOC is currently working on a POWER implementation featuring a scalable vector proposal, and I find it likely more interest will arise in the next few years.

    • by tlhIngan ( 30335 ) <slashdot.worf@net> on Thursday March 18, 2021 @04:41AM (#61171626)

      The thing is, the PC is one design. Every PC today has a design that dates back to IBM's original 5150 (not even XT). Sure some things have changed, but we're still doing things the exact same way.

      There were non-PC designs as well - they used the same 8086 processor but were designed differently. Some were completely incompatible, while others were "DOS compatible" in that they could run MS-DOS. But they were not "IBM PC compatible" because things like video were at the wrong locations.

      And yes, I get it - it means every PC on the market today still has the same shitty 1MB hole at the bottom of RAM, because at the time it was a good idea to have 640K of RAM and 384K for video. And we're still emulating the A20 gate (complete with 4 different ways of controlling it, only 1 of which would work). But it's there because the processor boots at F000:FFF0 (16 bytes below 1MB, aka 0x000FFFF0).
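
      (A quick worked example of the segment:offset arithmetic above - a small illustrative sketch, not part of the original comment:)

          #include <stdio.h>
          #include <stdint.h>

          /* Real-mode x86 addresses are segment:offset pairs; the physical
           * address is segment * 16 + offset. The reset vector F000:FFF0
           * therefore lands 16 bytes below the 1MB boundary. */
          int main(void)
          {
              uint32_t segment  = 0xF000;
              uint32_t offset   = 0xFFF0;
              uint32_t physical = (segment << 4) + offset;  /* 0x000FFFF0 */

              printf("F000:FFF0 -> 0x%08X (1MB - %u bytes)\n",
                     (unsigned)physical, (unsigned)(0x100000u - physical));
              return 0;
          }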

      ARM isn't a platform. It's an ISA, and the flexibility is what lets it do a lot of things. It's what lets it scale from itty bitty microcontrollers running in your mouse or keyboard to the big honkin' beasts powering modern laptop computers. The PC platform doesn't scale that well - you can't stick an x86 CPU inside a mouse.

      The flexibility in the memory map is what gives it the scale it has - the M series of processors put RAM and flash at certain locations to make Thumb more code dense so you don't need to specify full 32-bit addresses.

      ARM is that way to be flexible - you can try to define a platform, and many silicon vendors have, putting RAM and ROM and other things at fixed locations so they can reuse as much code as possible. But then you run into space issues: 32-bit ARMs don't have enough address space once you're juggling RAM, PCIe space, and peripherals around the memory map; there just isn't enough. (Qualcomm chips have a 16MB PCIe window, 13MB usable split across 4 BARs.)
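
      (To illustrate what fixed locations buy you, here is a minimal bare-metal C sketch - not vendor code, just an example under the assumption of a Cortex-M part. The SysTick timer sits at an address the Cortex-M architecture itself pins down, so these few lines work unchanged on any vendor's chip:)

          #include <stdint.h>

          /* Cortex-M SysTick registers live at architecturally fixed addresses,
           * so this snippet does not depend on which vendor made the chip. */
          #define SYSTICK_CTRL  (*(volatile uint32_t *)0xE000E010u)
          #define SYSTICK_LOAD  (*(volatile uint32_t *)0xE000E014u)
          #define SYSTICK_VAL   (*(volatile uint32_t *)0xE000E018u)

          /* Start SysTick counting down from `ticks` using the processor clock. */
          static void systick_start(uint32_t ticks)
          {
              SYSTICK_LOAD = ticks - 1u;  /* reload value */
              SYSTICK_VAL  = 0u;          /* clear the current count */
              SYSTICK_CTRL = 0x5u;        /* ENABLE | CLKSOURCE = processor clock */
          }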

      There are advantages and disadvantages to both designs. The PC as a platform has its advantages that it's easy to program for since every PC has stuff in the same location, but disadvantages that it's got a lot of cruft in it, from the 1MB hole to all sorts of weird and wonderful ways of working with RAM holes and remaps and other strange things where you either ignore it all, or you have to implement the same function 10 times because over the years there were dozens of different ways to query the same thing.

      ARM means it can scale because you're not stuck with legacy. It just means whoever writes the OS hardware layers has to do a little work. But Linux is great in this area where stuff has been abstracted out such that porting to a new ARM platform is relatively painless. It also means there's no current standard - Qualcomm extended UEFI to ARM platforms and has UEFI loaders, while Apple has their own and others use U-Boot.
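
      (As a concrete illustration of that abstraction - a hedged sketch, with a made-up "acme,demo-uart" compatible string and no real hardware behind it - this is roughly what a minimal Linux platform driver binding to a devicetree-described device looks like:)

          #include <linux/module.h>
          #include <linux/platform_device.h>
          #include <linux/of.h>

          /* Called when the kernel matches this driver to a devicetree node. */
          static int demo_probe(struct platform_device *pdev)
          {
                  dev_info(&pdev->dev, "demo device bound\n");
                  return 0;
          }

          /* Hypothetical compatible string; a real board's devicetree would
           * carry the vendor's own. */
          static const struct of_device_id demo_of_match[] = {
                  { .compatible = "acme,demo-uart" },
                  { }
          };
          MODULE_DEVICE_TABLE(of, demo_of_match);

          static struct platform_driver demo_driver = {
                  .probe  = demo_probe,
                  .driver = {
                          .name           = "demo-uart",
                          .of_match_table = demo_of_match,
                  },
          };
          module_platform_driver(demo_driver);

          MODULE_LICENSE("GPL");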

      And people did try standardizing - Microsoft did UEFI with Windows on ARM, and Apple might end up being the de facto standard for computing uses.

      • by gtall ( 79522 )

        ARM is much more than an ISA; they have designs for memory controllers, other kinds of I/O, buses, etc.

    • by AmiMoJo ( 196126 )

      You are only seeing the consumer end of ARM. On the industrial side 10+ year availability is possible, with 5 years being pretty common and a promise to provide an easy upgrade route at the end of it.

      The situation with chipsets isn't that different from x86 either. To boot your PC, there is hidden firmware in the CPU that does the initial boot-up before passing control to the UEFI firmware on the motherboard. The UEFI firmware has additional binary blobs for the CPU that contain microcode updates, some UEFI drivers, an

      • by Luckyo ( 1726890 )

        The reason here would be the insane amount of required work, plus ARM itself being cripplingly slow. The only reason ARM is as popular as it is in consumer devices is the plethora of mutually incompatible hardware accelerators bolted on top of the ARM architecture to accelerate specific tasks, which is what makes up all of the popular ARM-based SoCs. That makes those chips very efficient at those specific tasks, and pathetic at everything else. And it also makes all those ARM chips mutually incompatible, which is one of th

        • by laird ( 2705 )

          You need to read up on the M1. It's a very high-performance SoC that excels at general-purpose computing. Heck, it even runs x86 executables, under emulation, faster than the Intel chip it replaced. Companies aren't "bolting external hardware" onto ARM chips; they are embedding the ARM cores into chips which also have additional capabilities, such as GPU cores, neural-net engines, etc. - no external hardware or PCIe overhead. One of the very nice things about ARM is that the ISA and architecture

          • Independent benchmarkers would gladly disagree with you on the M1.
          • by Luckyo ( 1726890 )

            >It's a very high performance SoC that excels at general purpose computing.

            The first part is correct; the second part is incorrect. The M1's huge die size comes from the massive number of hardware accelerators bolted on top of the ARM cores to get close to comparable x64 performance in as many specialized fields as possible. The moment you go outside those hardware-accelerated areas, the M1 goes back to "yeah, this is generic ARM performance". But as long as you stay in the areas where dedicated hardware accelerators do

            • Nope, not even close. It's an 8-wide decode and issue design, whereas Rocket Lake is only 5-wide decode. Both have a unified scheduler (unlike the A76) and a quite large ROB, with a healthy amount of memory bandwidth to boot. And that's before you even consider accelerators. It is a groundbreaking level of CPU performance, both for ARM and for its TDP envelope.

              At 10W it hands-down wins single-threaded workloads. Its main handicap in benchmarks is the lack of SMT and only 4 big cores. But if your workload benefits f

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...