Hardware Technology

84% of PC Users Unwilling To Pay Extra For AI-enhanced Hardware, Survey Says (videocardz.com) 183

An anonymous reader shares a report: A recent poll on TechPowerUp revealed that an overwhelming majority of PC users are not interested in paying extra for hardware with AI capabilities. According to the survey, 84% of respondents would not spend more for AI features, while only 7% said they would and 9% were unsure. More than 26,000 readers have responded to the poll so far. This indicates that despite the PC market's shift toward integrating AI, most enthusiasts remain skeptical of its value, and it suggests that hardware companies should pay attention to the preferences of their core user base. Enthusiasts, who no doubt represent the majority of users on TechPowerUp, currently show little interest in AI features.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Makes sense (Score:5, Insightful)

    by backslashdot ( 95548 ) on Wednesday July 17, 2024 @12:00PM (#64632881)

    Highly advanced AI is not needed on 90% of PCs unless it's mining your behavior for someone else to use. Your PC already has enough AI power to be useful for personal things (voice recognition, media classification, etc.). Note I said 90%; there are about 10% of users who may need local AI capabilities because they are doing video production or something like that. Though even that's iffy (why not push that out to the cloud?).

    • Though even that’s iffy (why not push that out to the cloud?).

      The vendors of video production s/ware will want^W force you to use their cloud since they make more money on a subscription model.

    • Note I said 90%, there are about 10% of users who may need local AI capabilities because they are doing video production or something like that.

      Do you buy hardware for the past or for the future? Just because a few production tools are a big use case for AI today doesn't mean it will stay that way. There are already many consumer-level apps using AI acceleration features. Things as simple as video calls benefit from it, for example for background mic-noise reduction.

      This will be like 3d acceleration is today, where even the 2D desktop uses portions of that hardware. You won't even be aware what will and won't use hardware that becomes ubiquitous.

      • Futureproofing is a bad argument. By the time local AI has a killer app, these now-new 10-40 TOPS AI units will be woefully underpowered. All you need today is a dGPU: the now low-end RTX 3060 apparently has about 100 TOPS, compared to NPUs, which max out around 40
    • Isn't voice recognition done in the cloud? Audio sent to a server, and the server sends text back? If that could be done locally with an NPU, it could be a selling point.
      • I don’t think so. At least on my M2 iPad Air, it works fine even with WiFi turned off.
        We have been doing voice recognition on desktop computers for more than 25 years now, and the M2 is way faster than the Pentium I had back then. Voice recognition also works a lot better; the Apple Neural Engine almost certainly helps, and on other devices Google Tensor, Nvidia CUDA, etc. would help as well.

    • This is a GREEN SHIFT (TM): shifting who pays for computing electricity from the 1970s mainframe, to 1990s servers, to the 2000s cloud, and now to the desktop.

      Tech cannot call themselves 'carbon neutral' since they shifted considerable processing electricity cost to the end user computer.

  • by Anonymous Coward on Wednesday July 17, 2024 @12:00PM (#64632887)

    Simple question.

    Everything has a GPU. Beyond that, what are we talking about? Just more GPU, or is there actually something meaningful in mind here?

    • by DrMrLordX ( 559371 ) on Wednesday July 17, 2024 @12:09PM (#64632919)

      Usually they're talking about an NPU integrated into the CPU/SoC. Yes, dGPUs typically have more TOPS than an NPU, but there may be some circumstances where you want an NPU on-package for lower latency. Plus it's a buzzword thing that certain parties like Qualcomm and Microsoft are pushing on OEMs.

      • by Anonymous Coward on Wednesday July 17, 2024 @12:21PM (#64632973)

        Thanks. Useful answer.

        Now, I go to Wikipedia and read about NPUs: this leads to "AI accelerator". Here, I learn that these devices span a wide spectrum of circuit designs, and that "it is an emerging technology without a dominant design." That latter part, no "dominant design," is a huge red flag. The odds are whatever you buy today is a throwaway: application specific circuits that won't be supported in the near future as the designers muddle through the evolution of new devices.

        This stuff belongs on a plugin device. That is how PCs have always solved this kind of problem: when the "new thing" is too young to solder to the motherboard or integrated into the CPU, we put it on a card or a USB device or some other attachment.

        The push on OEMs to integrate "something" looks like extremely premature optimization: doomed to failure, in other words. Whatever half-baked, throw-away NPU stuff they manage to foist onto people today will be so much dead silicon.

        • by Rei ( 128717 )

          That might be nice if you can fit the whole model onto the accelerator, but if not, you need as much bandwidth between it, the CPU, and memory as possible.

          • Because GPUs don't need to move massive amounts of data across the PCI bus already? That's a solved problem already.

            Seriously, if a PC user is saturating PCI-E, then they bought inadequate hardware for the task they're performing and should be using server hardware that sports more PCI-E lanes for more throughput.
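To put rough numbers on the bandwidth question, here is a back-of-envelope sketch (the link and memory speeds below are nominal peak figures assumed for illustration, not measurements):

```python
# Time to move a model's weights over various links, vs. reading them
# from on-card VRAM. Shows why fitting the model in VRAM matters more
# than PCIe generation once it does fit.

def transfer_time_s(model_gb, bandwidth_gb_s):
    """Seconds to move model_gb gigabytes at a given GB/s."""
    return model_gb / bandwidth_gb_s

MODEL_GB = 14  # e.g. a 7B-parameter model at FP16 (2 bytes/param)

for name, bw in [("PCIe 3.0 x16, ~16 GB/s", 16),
                 ("PCIe 4.0 x16, ~32 GB/s", 32),
                 ("PCIe 5.0 x16, ~64 GB/s", 64),
                 ("GDDR6X VRAM, ~1000 GB/s", 1000)]:
    print(f"{name}: {transfer_time_s(MODEL_GB, bw):.3f} s per full pass over the weights")
```

Even the fastest PCIe link is an order of magnitude slower than on-card memory, which is why "fits in VRAM" is the question that matters.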

        • Just an FYI, but most of these NPUs are aimed at INT4/8/16 compute performance. So long as a common programming interface supports them in the major AI frameworks, you'll be fine. They'll go obsolete over time the same way any CPU does (it's slow compared to future gens), but it's not like an ASIC that supports only a single algorithm and risks being totally useless in a few years.

      • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday July 17, 2024 @01:30PM (#64633241) Journal
        I'm unclear on what Microsoft's play is; but they are very specific about it meaning an 'NPU' for the purposes of their 'AI' features(even the ones people got working on NPU-less hardware in pre-release) and 'Copilot Plus' branding/marketing exercise. I'm honestly not quite sure why: so far all the relevant models have been laptop focused heavily integrated designs; so it's CPU, GPU, and NPU living on the same die, contending for the same thermal and power budget, and using the same main memory bandwidth and capacity; but you must have an 'NPU' capable of hitting a specific benchmark to count.

        GPU compute? Doesn't count, regardless of how it scores on the tasks the NPU is dedicated to. CPU instructions aimed at vector operations or the types of floats that the 'AI' types prefer? Doesn't qualify, because reasons. At least publicly, there's not even a requirement that the 'NPU' be better at its job than either CPU or GPU compute(I'd assume, in practice, it's at least not worse, since there's no incentive for the chip vendor to do something less efficient than copy-pasting whatever existing component is closest to being right, presumably either FPU or part of the GPU); if the 'AI' task ends up on the CPU you aren't 'AI-enhanced'; but if firing up the NPU forces the CPU cores to throttle to remain within the TDP target or starves the GPU of memory bandwidth or the like that's not against the rules.

        I'm not sure whether there's actually some good-faith technical reason, and there's some current or near-future workload that was seen as impossible or impractically inefficient without specific new hardware(though, even in that case, it's not clear why that would mean 'NPU' rather than 'has to be able to do X without using more than whatever percent of total CPU or GPU compute resources, however you feel like doing it"), or whether it's mostly about Microsoft wanting to sidestep Nvidia's high ground on GPU compute and Intel's tendency to introduce new CPU instructions by mandating a separately exposed peripheral that answers to their requirements that Qualcomm(and potentially other ARM vendors in the future), AMD, and Intel could all implement; or if there's an even more shameless desire to push some PC refreshes.

        I'm also not sure how well they'll be able to stick to it. I assume that Nvidia is...deeply unimpressed...by the fact that anything without an NPU is not "AI-enhanced" even if it's got a 4090 worth of GPU compute; and all Intel's marketing materials that talk about 'AI' performance(even on client systems; ignoring the stuff aimed at workstation and datacenter) acknowledge that they are including an NPU that ticks Microsoft's box; but then go on to talk up "Platform TOPS" which are the ones provided by everything that isn't the NPU and totally make the system's numbers bigger. That doesn't sound like deep commitment to the concept. The...somewhat rocky...launch of "Copilot Plus" PCs(good work on 'Recall', guys!) may not help; nor will the ongoing ambiguity about whether "AI-enhanced" is supposed to be some sort of binary thing; where the user is supposed to just look for the marketing sticker and buy whatever has it; or whether it's supposed to be an actual performance number where some systems are better than others. If it's supposed to be a binary thing Microsoft has basically committed to not doing anything 'AI-Enhanced'(at least on the client) that won't work on whatever NPU is weakest of the launch generation at least until those systems are EOL or they come up with an even sillier marketing sticker; but silicon vendors and OEMs aren't going to like "they're all basically interchangeable; just buy whatever" since they spend a lot of time on trying to one-up one another on performance.
    • by Junta ( 36770 )

      While there are some technical things behind it, it's largely a branding exercise in practice. A barebones 'NPU' that qualifies as "AI enhanced" is unlikely to deliver noticeably different behavior from what's already on the systems.

      To the extent people have been messing with AI, they've been doing so without "AI enhanced", so they understandably will scratch their heads to find out that somehow their current AI excursions have been done without "AI enhancement". Either they've been doing the relatively

      • Cynically, it wouldn't be entirely surprising if one point of the exercise is literally to get impressive-looking Task Manager screenshots for reviews of "Copilot Plus" PCs.

        If you open up the "Performance" tab it shows CPU and GPU utilization and memory capacity usage; but not package power or memory bandwidth usage; so if you fire up some relatively lightweight 'AI' thing, doing background removal on a video call or something, you'll be able to show it eating 25% of the CPU on the old-and-busted laptop
    • by smoot123 ( 1027084 ) on Wednesday July 17, 2024 @12:20PM (#64632963)

      Everything has a GPU. Beyond that, what are we talking about? Just more GPU, or is there actually something meaningful in mind here?

      See this article [forbes.com] at Forbes for a quick tech-light view. Basically you need sufficient CPU and storage. The real gear is a neural processing unit which can do 40 TOPS (where I think an op is a fused multiply-add, probably with 32-bit floats). You also need sufficient video RAM to feed the GPU/NPU. You need something like 2-4 bytes per parameter so a 5 billion parameter model needs 10-20 GB to run locally.

      That's the key part: being able to run a model on the PC rather than in the cloud. I think the promise is you'll get much lower latency responses. We've been flip-flopping between "fat clients are great!" and "central server farms are great!" for 40 years. I think this is just another oscillation.

      It used to be you needed a super-de-duper PC to edit video, now any phone can do it. In a few years I expect you'll find every PC is capable of running models tuned to run on laptops and it will be no big whoop.
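The bytes-per-parameter arithmetic above can be sketched in a few lines (the precisions are illustrative; weights only, ignoring KV cache and activation overhead):

```python
# Back-of-envelope memory footprint for running a model locally,
# following the bytes-per-parameter rule of thumb.

def model_memory_gb(params_billion, bytes_per_param):
    """Gigabytes of (V)RAM needed just to hold the weights."""
    # 1e9 params-per-billion and 1e9 bytes-per-GB cancel out.
    return params_billion * bytes_per_param

for label, bpp in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    print(f"5B params @ {label}: ~{model_memory_gb(5, bpp):.0f} GB")
```

At 4 bytes (FP32) a 5-billion-parameter model needs ~20 GB and at 2 bytes (FP16) ~10 GB, which is exactly the 10-20 GB range quoted.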

      • You need something like 2-4 bytes per parameter so a 5 billion parameter model needs 10-20 GB to run locally.

        OK, but GPUs with 16GB+ are not exactly scarce now and 12GB or so is extremely common. So basically people with even slightly serious PCs (mine is around $1100 all in and I have a 4060 16GB) already have the hardware to do this. Meanwhile the people who have cheaper ones were already trying to save money, and won't want to spend more on a component they don't need. They also don't have the RAM, budget systems still come with 8GB.

      • by Rei ( 128717 )

        Um, huh? Running FP32 models is rarely ever necessary, and even FP16 is overkill in most situations. I run Mixtral quantized to just 3-4 *bits* per param.

        That said, the better the models get, the harder it gets to quantize them well unless it was done with quantization-aware training. Anyway, we're probably going to jump to ternary eventually... one trit per param.

        But yeah... VRAM is king in AI. And if you can't fit it all onto one GPU/NPU, then its bandwidth with whatever it's sharing with that's the l
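The bits-per-parameter point works out like this (a sketch; the ~46.7B total parameter count is the commonly cited figure for Mixtral 8x7B, and the sizes are weights-only):

```python
import math

def quantized_size_gb(params_billion, bits_per_param):
    """Weights-only size in GB for a model stored at a given bit width."""
    # 1e9 params-per-billion and 1e9 bytes-per-GB cancel; divide bits by 8 for bytes.
    return params_billion * bits_per_param / 8

# Mixtral 8x7B keeps all experts resident, so ~46.7B params must fit in memory.
for label, bits in [("FP16", 16), ("4-bit", 4), ("3-bit", 3),
                    ("ternary, log2(3) bits", math.log2(3))]:
    print(f"~46.7B params @ {label}: {quantized_size_gb(46.7, bits):.1f} GB")
```

The drop from ~93 GB at FP16 to ~23 GB at 4 bits is what makes running such a model on consumer hardware thinkable at all.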

      • See this article at Forbes for a quick tech-light view. Basically you need sufficient CPU and storage. The real gear is a neural processing unit which can do 40 TOPS (where I think an op is a fused multiply-add, probably with 32-bit floats). You also need sufficient video RAM to feed the GPU/NPU. You need something like 2-4 bytes per parameter so a 5 billion parameter model needs 10-20 GB to run locally.

        With LLMs it is something like 5 bits per parameter or less. Anything more than that and you are wasting resources for imperceptible gain. A 5B model should require about 3 GB or less.

        While I think NPUs might be really useful for other applications, when it comes to LLMs some CPUs can already saturate available memory bandwidth. Sure, NPUs or specialized instructions (e.g. AMX) can do it way cheaper and at way lower power, but it isn't going to really win much in the way of performance because you will still be limited
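The bandwidth-limited point can be made concrete: for single-stream generation, every output token requires roughly one full read of the weights, so memory bandwidth, not compute, caps tokens per second. A sketch (the bandwidth figures are rough nominal numbers, assumed for illustration):

```python
# Upper bound on single-stream LLM decode speed: each generated token
# streams (roughly) the whole weight file through the compute units,
# so tokens/sec <= memory bandwidth / model size.

def max_tokens_per_s(model_gb, mem_bandwidth_gb_s):
    return mem_bandwidth_gb_s / model_gb

MODEL_GB = 3  # the ~3 GB quantized 5B model from the comment above

for name, bw in [("dual-channel DDR5, ~80 GB/s", 80),
                 ("Apple M2, ~100 GB/s", 100),
                 ("midrange dGPU GDDR6, ~400 GB/s", 400)]:
    print(f"{name}: <= {max_tokens_per_s(MODEL_GB, bw):.0f} tokens/s")
```

Faster compute (NPU vs. AVX vs. GPU shaders) only changes how close you get to that ceiling, not the ceiling itself.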

    • I take it to mean AI accelerators [wikipedia.org] or NPUs that must be baked into the silicon, plus the APIs that must be created/maintained to call AI functions. Chip design is always a balance of what is included in terms of need vs. cost. For example, a chip for a streaming box may have AV decoding functions in the hardware but not encoding functions. A chip used for a home router may not need any decoding circuitry but may have specialized network routing functions. As with all things, it would cost more
  • by drnb ( 2434720 ) on Wednesday July 17, 2024 @12:03PM (#64632899)

    A recent poll on TechPowerUp revealed that an overwhelming majority of PC users are not interested in paying extra for hardware with AI capabilities

    Much like the majority declined to pay extra for a FPU back in the day. Then one day the FPU was just permanently packaged with the CPU. Same thing with ML/AI acceleration, as we currently see with Apple Silicon CPUs. It'll just become a standard component of the SoC.

    • Except in this case nobody wants this garbage on their PCs because all it adds is annoying evil trash features that users hate, not better graphics.

      • Mark my words one day there will be a systemd process that runs on the NPU.

        • by Rei ( 128717 )

          You know, it's funny... so, LLMs switched to tokens because there are correlations between character sequences, and if you output one token at a time that represents multiple characters, you get that much more net throughput.

          But now we have speculative prediction (a lightweight model quickly predicts a speculative sequence of many tokens, then the main model simultaneously validates them, finds where it went astray, and continues prediction from there). And the more correlated the outputs are, the more s

          • by Rei ( 128717 )

            (To be clear, there's nothing about the byte-vs-token distinction that prevents one from doing such a thing as-is. It just feels a lot more natural for compute if you're working directly in bytes and can skip tokenization / detokenization stages, and avoids the need to optimize a set of tokens to a specific task)

        • And then the NPU will be required if you want any system logging!

        • Right, but we already have that stuff in our GPUs. And in my case, in my display; I use a 43" 4k TV as a monitor. (The backlight could be better but it's otherwise quite enjoyable.) I could do some upscaling on the GPU and some more on the display if I wanted. Why would I need my CPU to get involved? It might make sense for people buying a CPU with integrated graphics, as it would leave the GPU hardware on the processor free to just render, and then some other hardware could do the upscaling. But in that ca

          • by drnb ( 2434720 )

            It might make sense for people buying a CPU with integrated graphics

            Which is a pretty common scenario, to the lament of game developers worldwide.

      • NVIDIA classifies their DLSS 3 as "AI" and most people who can't afford expensive cards want that feature to increase frame-rates. It's fine if you're not playing competitive fast-paced games.
        https://www.nvidia.com/en-us/g... [nvidia.com]

        AI is practically a buzzword for things that are simply computer algorithms. So when asking people if they want "AI", you need to be very specific about what you mean. This is why the polling was useless because people likely had no clue what "AI" means or what features it brings.

    • by Torodung ( 31985 )

      My thoughts exactly. First thing I thought is "It'll be on-die soon enough."

      Discrete? Why bother? That should sell as well as discrete TPMs.

      • by drnb ( 2434720 )
        On the Apple side it already is on the SoC, Macs, iPhone/iPad and even Watches. I've seen folks working with small ML models for voice apps at my university. Keeps all the processing local, on device, private.
    • by xanthos ( 73578 )
      Anybody remember the AMD APU mashup? How successful was that?
    • Ah, by FPU you mean an NPU, right? A numeric processing unit. Ahem.

    • by Z80a ( 971949 )

      FPU had Quake to push it.
      I'm not entirely sure they can make another Quake-like event, especially one that requires an NPU.

      • by drnb ( 2434720 )

        FPU had Quake to push it. I'm not entirely sure they can make another Quake-like event, especially one that requires an NPU.

        Apple Watches have an NPU. At my local university multiple teams are working on voice based apps that process everything on device using small ML models. Keeps everything nice and private.

        I think we are seriously underestimating the utility of an ML model accelerator, much like we underestimated the utility of MMX/SSE/AVX. The latter found utility well beyond image processing and computer graphics.

    • Except an FPU is very useful generally, even for those not doing complex mathematics. Of course, these days floating point is often ingrained in the CPU and not a co-processor. An NPU has only specific and niche uses. Sort of like touch screens on a laptop or monitor - useful for a few people, pointless for most.

      I remember an early SunOS desktop where it had some small curved corners on windows (I think it was NeWS). The workstations without floating point support you could actually see the corners being

      • by drnb ( 2434720 ) on Wednesday July 17, 2024 @01:54PM (#64633291)

        Except an FPU is very useful generally, even for those not doing complex mathematics.

        Now, because everyone has an FPU. But back in the day people largely thought: don't need it, software emulation is good enough for me. AutoCAD users, some Excel users, and other power users were about the only people who thought an FPU worthwhile.

        Of course, these days floating point is often ingrained in the CPU and not a co-processor.

        Well, ever since the Pentium, the 586. For 486 we had the DX with an FPU on the package, the SX without.

        An NPU has only specific and niche uses.

        Apple is proving otherwise. All Apple Silicon based devices, even watches have an NPU. The watches are running small ML models that allow some local speech analysis, keeping everything local and private. On iPhones the NPU is being used in the photo pipeline.

        The Microsoft/Qualcomm ARM-based CPUs will also be including an NPU. So ARM based Windows PCs will probably come with an NPU.

        On the x86-64 side I think the PC standard will include an NPU as well in the near future. If you prefer, rather than comparing the NPU to the FPU, how about comparing it to MMX? Few needed it. However, once it became ubiquitous, developers felt comfortable utilizing it, and its use spread beyond the originally intended image processing and computer graphics. Something similar will happen with NPUs. They will go far beyond voice and photo processing.

        In short, I think we are underestimating the utility of NPUs, as we did with MMX, SSE, etc.

          • Also, the art of optimizing floating point in software is lost, and almost no one bothers with fixed point anymore. You'll never see that Doom style of software where they made an underpowered CPU do amazing things by knowing how the math works. Today you just rely on super-fast computers that can make even the dumbest code look fast.

            Floating point is ubiquitous. Everything uses it: non-mathematical stuff uses it, code without any graphics uses it, programmers don't know how to not use it. When progr

          • by drnb ( 2434720 )

            Also, the art of optimizing floating point in software is lost

            Well, it moved from the FPU stack to SSE3/AVX2. :-)

  • Maybe this direction of thinking is geared toward Apple and the idea that extra hardware is needed for AI. In the recent past, consumers have not paid directly for AI; they expect it as improvement in services, much of it cloud-based. If anything, the question would be whether consumers would be willing to pay extra for AI software, for example AI-enhanced photo editing.

  • by awwshit ( 6214476 ) on Wednesday July 17, 2024 @12:10PM (#64632925)

    Why would I pay extra for hardware to support features that do not work well?

    The AI hype-machine is in full swing but the products have limited value. AI needs to be able to check its own work such that I can trust the output without having to double-check the AI. I don't have a personal use case for LLMs or for generating images from text and so I don't need to accelerate these things.

    • by Rei ( 128717 )

      Let me give you a random example, out of millions: ctrl-f on steroids.

      First you have basic search. Find an exact match of text. Maybe case insensitive vs. case sensitive, but that's it.

      Okay, too crude, not enough? Well, we have regex search. Now you can add some flexibility. But that's still going to fail a bunch for any scenario you didn't precisely imagine and spell out. If you searched for "World War" but the text includes "WWII", and you didn't think to special case that, it's not going to find it
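The "ctrl-f on steroids" idea is embedding search: compare vectors that encode meaning instead of matching characters. A toy sketch (the vectors here are hand-assigned purely to illustrate the mechanism; a real system would get them from a learned embedding model, which is the part an NPU/GPU accelerates):

```python
import math

# Toy "semantic search": phrases are mapped to vectors such that related
# meanings land near each other; the vectors below are made up by hand.
EMBED = {
    "world war":       (0.9, 0.1, 0.0),
    "wwii":            (0.85, 0.15, 0.05),   # near "world war" in vector space
    "global conflict": (0.8, 0.2, 0.1),
    "chocolate cake":  (0.0, 0.1, 0.95),     # unrelated, far away
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_find(query, corpus):
    """Rank corpus phrases by similarity to the query, best match first."""
    q = EMBED[query]
    return sorted(corpus, key=lambda p: cosine(q, EMBED[p]), reverse=True)

hits = semantic_find("world war", ["chocolate cake", "wwii", "global conflict"])
print(hits)   # "wwii" ranks first despite sharing no characters with the query
```

That is exactly the WWII case: no character overlap with "World War", but the vectors are close, so the match is found anyway.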

      • by Rei ( 128717 )

        Many AI things are already showing up in locally-run tools (though in some cases needing a cloud connection). Let's give an example: Photoshop's generative fill.

        What's your most common graphical editing task? Mine is probably removing something from an image. How do you normally do that? Maybe some old-school Markov-chain-based heal. Probably a lot of clone tool work. Maybe some hand painting. You know what defines this process? "Slow and not very high quality". But you can just use a generative era

        • Actually, here's one even more basic than that: segmentation. Aka, the select tool. Hard to get more fundamental than that in graphics editing, right? And also: *slow*.

          But with NN segmentation tools, you can just type what you want, and voila, it's selected. Even complex descriptions. You can even auto-feather, with varying thicknesses based on context.

          What's your next headache normally? Probably that the edges are tinged by whatever context the selected object was previously near. Guess what? GenAI can

          • Which parts of what you are saying require specialized hardware? How do I know the software I'm using can even take advantage of my more expensive computer?

            • by Rei ( 128717 )

              Everything I'm describing is AI. It runs fast on an NPU or high-end GPU, and very slowly on a CPU.

    • Why would I pay extra for hardware to support features that do not work well?

      Wrong question. The actual question is "Why would Microsoft not want to extract maximum profits from gullible consumers?"

  • by Lavandera ( 7308312 ) on Wednesday July 17, 2024 @12:15PM (#64632941)

    I see no gain in "AI" features... ChatGPT is useless for more advanced questions...

  • I'm not against having the hardware at all, I'm not mad at it, I just am among those who won't pay extra for it.

    I have a PC because I want a single box that does many things. One of those things is gaming, so I have a discrete GPU. It is modern, so it was designed to be good at "AI" tasks. Therefore I don't need my CPU to have features for that, especially since I have an Nvidia GPU and CUDA is the standard interface for implementing this type of processing. It doesn't matter if you have an Intel or AMD CPU, neither of them is going to provide a CUDA interface. Eventually CUDA will become irrelevant as tools support the other vendors' APIs, but for now, it's extremely relevant.

    Maybe it will eventually make sense for my CPU to have hardware specifically for this purpose, but right now it does not. I would rather spend the money and the die area on cache. That will actually help accelerate my workloads.

    • But, but...Those annoying popups that said your perfectly good working hardware was incompatible with Windows 11?

      I'd be very surprised if an NPU co-processor from Intel/Qualcomm/AMD wasn't a mandatory requirement of Windows 12. i.e. Hardware assisted Copilot. Heck, they've even got a special button on the keyboard already.

      • I solved that problem by switching to fulltiming Linux when Windows 8 was no longer viable. I didn't even mess with 10, although my laptop came with it. I was getting crashes of the AMD graphics driver out of the box, and I solved that problem with Linux. The OSS driver has been flawless. I switched my desktop a little later. If it won't run on Linux, or at least in a Windows VM, I don't need it.

        What's interesting about that (as there is little to nothing about me switching from dual booting to not dual boo

    • by ceoyoyo ( 59147 )

      Your CPU almost certainly has some sort of single instruction, multiple data processing unit in it, either NEON or SSE, AVX, whatever. "AI" units are just the next version of SIMD. Marketing them as "AI accelerators" might be a mistake. Apple didn't really, they called it a neural processing unit, used it to touch up your face on the webcam (and do dictation, OCR, image upscaling, photo editing, etc.), and people complain that their five year old laptop can't do those things, not that it doesn't have AI.

      • Of course my processor has a vector processor in it, it's not an antique. NPUs are different either in that they are actually different in architecture and/or in that they have a lot more functional units and some way to dispatch operations to all of those units. But the really relevant thing here is the cost of adding that hardware, which for serious PCs is redundant and for budget PCs is expensive.

    • by gweihir ( 88907 )

      Maybe it will eventually make sense for my CPU to have hardware specifically for this purpose, but right now it does not. I would rather spend the money and the die area on cache. That will actually help accelerate my workloads.

      Same here. That is why I have a 7800X3D. Much more useful. Show me a real application I use daily for a significant amount of time that massively benefits from "AI hardware" and I may be convinced. Or not.

    • GPU != good at AI. There's a reason there's a massive, and I mean MASSIVE, performance difference between a GTX 1080 Ti and an RTX 2060 when it comes to AI-related tasks such as using Topaz Labs tools for image processing, despite the former being significantly faster for gaming.

      Maybe it will eventually make sense for my CPU to have hardware specifically for this purpose, but right now it does not.

      Even something as simple as a Teams call can already benefit from the hardware. Just like you benefit from having a hardware video decoder even though you don't do professional video editing. You don't know how the hardware gets us

  • How hard is this to beat? Really? I mean, it must be really hard, right?

    But I just don't know why it's so hard to set up critical networks so that they just don't run software downloaded from the internet.

  • If the AI hardware doesn't accelerate Pytorch or Tensorflow, it's dead on arrival.

    • Your expensive ai hardware will be used to show you more relevant ads. You will love them so much that you will stop watching content.

      • by Falos ( 2905315 )

        Can't tell if sarcastic. A big feature is supposed to be interpretation, so look forward to asking it to help find your insurance records, then getting "helpful" offerings and recommendations in the sort of chipper tone you'd expect from nu-Clippy.

  • No thanks, I can take care of myself. I don't need ai spying on me and showing me content in between ads.

  • That will be used to deny your computer an upgrade in the future.
  • Seriously, that is just stuff to make hardware more expensive but not more useful.

    • by Junta ( 36770 )

      Well, not necessarily more expensive, as "AI enabled" will usually just end up meaning "any processor made in 2024, until marketing gets bored of talking it up and doesn't bother saying it anymore (like floating point units)".

  • by Bill, Shooter of Bul ( 629286 ) on Wednesday July 17, 2024 @01:11PM (#64633171) Journal
    If it's to avoid going to the cloud and sharing my data there: fine, OK.

    If it's hardware, or data sent to the cloud: I'll spring for the hardware.
    If it's hardware and AI service for free: I'll spring for the hardware.
    If it's hardware and pay for AI service: no thanks.
    If I have a choice of having AI service and hardware, or no AI service: I'll take no hardware and no AI service.

    It's a marginal benefit. If I have a choice and it saves me money or privacy, I'll do it. But if I can opt out, I'm opting out.
  • I honestly don't feel comfortable with it. If it's in a rinky-dink typical consumer device, fine, but I don't feel comfortable with it in the computer that I do all of my work and my projects on. At the very least I demand to know exactly what the AI does and exactly what it's capable of affecting and reaching, and to be able to 100% shut it down, up front and easily, with no 'stubs' or any other part of it continuing to run in the background when it's supposed to be "OFF".
  • The only reason to have specialized AI hardware is if you're going to train large language models, which is basically what most current AI is based on. That is, if you're developing the AI yourself to bring those "features" to someone else.

    Otherwise, what the heck are those AI features?

    Maybe the question should be: Are you willing to train LLM for us? In which case, you are not only giving your data for free but you have also paid them to get it from you.

  • Seriously, how many apps actually use these built-in CPU AI chips? Don't most LLMs use the GPU anyway? Why pay extra for hardware that we will probably never use?
  • AI will actually be useful, but today's AI is being deployed long before it's ready.
    I would pay extra to avoid all of the half-baked AI crap that is headed our way.

  • by Voyager529 ( 1363959 ) <voyager529.yahoo@com> on Wednesday July 17, 2024 @03:25PM (#64633507)

    I remember when video editing on a PC was still a nascent technology. Ever used Adobe Premiere on Windows 98, on a Pentium III? The first time I edited video, I did it on a computer with fewer system resources than a Raspberry Pi. And it was agonizing. On a good day, you got real-time playback of your downscaled proxy clips. 3D effects and color correction were overnight renders, always.

    And then, we got hardware accelerator cards (like the Matrox RT2500):

    Real-time playback of everything? Real-time 3D effects and transitions? Real-time MPEG-2 renders?! It was the sort of hardware addition that made pretty much every video editor (who wasn't still stuck using Avid as a proxy for a film workflow) say "shut up and take my money!!". It's easy to take it for granted now, since cell phones have NLEs if you really want to edit video that way, and even low end GeForce cards can render 1080p x265 at 4x or 5x realtime with CUDA accelerated effects...but in 2001, these things were absolute magic.

    The reason they were seen that way is because the accelerator came after the task was established. Video editing on a P3 was doable, but limited and slow. Accelerator cards like the RT2500 (and its contemporaries from Canopus and a few others) were worth their price tag because they added so much additional performance and functionality, that they sold themselves.

    AI is still, like blockchain, a solution in search of a problem for most. There are certainly use cases; the ability to extract vocals from album edits of songs is incredibly helpful. ChatGPT and the other chatbots get some deserved criticism for being wrong lots of the time, but they are half decent for some entry-level tasks. I'm sure there's AI at banks already, checking for fraud. Perhaps the best example is some of the photo tagging and matching that's seen in Google Photos...but none of these are the sort of things that are more useful on the consumer market than just using the website, which runs on anything. More to the point, those web-based tasks aren't going to be able to usefully leverage a local NPU anyway. Even Microsoft could barely make a use case for Copilot+, and both they and Google are running up against privacy concerns that either hamstring the usefulness of the service, or make it so generic that it's no better than what can be done without an NPU.

    If AI is to sell hardware for companies other than nVidia, and the goal is to sell it to companies other than Google, OpenAI, and Mastercard, then the first part really needs to be "establishing a problem that the customer cares about solving". AI still hasn't managed to establish that, and without it, it'll be just like blockchain before it: A solution in search of a problem. To AI's credit, it *does* have at least *some* use cases that are beneficial to end users...but beneficial enough to go out and buy dedicated NPUs? I'm unconvinced, and it looks like the survey indicates that my skepticism is far from unique.

  • Whenever you tell tech that nobody likes their idea, they usually just double down and try to force people to accept it....because how can they be wrong?
