84% of PC Users Unwilling To Pay Extra For AI-enhanced Hardware, Survey Says (videocardz.com)
An anonymous reader shares a report: A recent poll on TechPowerUp revealed that an overwhelming majority of PC users are not interested in paying extra for hardware with AI capabilities. According to the survey, 84% of respondents would not spend more for AI features, while only 7% said they would and 9% were unsure. More than 26,000 readers had responded at the time of the report. This indicates that despite the PC market's shift toward integrating AI, most enthusiasts remain skeptical of its value, and it suggests that hardware companies should pay attention to the preferences of their core user base. Enthusiasts, who no doubt represent the majority of users on TechPowerUp, currently show little interest in AI features.
Makes sense (Score:5, Insightful)
Highly advanced AI is not needed on 90% of PCs unless it’s mining your behavior for someone else to use. Your PC already has enough AI power to be useful for personal things (voice recognition, media classification, etc.). Note I said 90%; there are about 10% of users who may need local AI capabilities because they are doing video production or something like that. Though even that’s iffy (why not push that out to the cloud?).
Re: (Score:2)
Though even that’s iffy (why not push that out to the cloud?).
The vendors of video production s/ware will want^W force you to use their cloud since they make more money on a subscription model.
Re: (Score:2)
Note I said 90%; there are about 10% of users who may need local AI capabilities because they are doing video production or something like that.
Do you buy hardware for the past, or for the future? Just because a few production tools are a big use case for AI today doesn't mean it will stay that way. There are already many consumer-level apps using AI acceleration features. Things as simple as video calls benefit from it, for background mic noise reduction just as an example.
This will be like 3D acceleration is today, where even the 2D desktop uses portions of that hardware. You won't even be aware of what will and won't use hardware once it becomes ubiquitous.
Re: (Score:2)
I don’t think so. At least on my M2 iPad Air, it works fine even with WiFi turned off.
We have been doing voice recognition on desktop computers for >25 years now, and the M2 is way faster than the Pentium I had back then. Voice recognition also works a lot better; the Apple Neural Engine almost certainly helps, and on other devices Google Tensor, Nvidia CUDA, etc. would also help.
FUD and GREEN SHIFT (Score:2)
This is a GREEN SHIFT (TM): shifting who pays for computing electricity from the 1970s mainframe, to 1990s servers, to the 2000s cloud, and now to the desktop.
Tech companies cannot call themselves 'carbon neutral' when they have shifted considerable processing electricity costs onto the end user's computer.
Re: (Score:3)
My thinking on this is that most likely, even if you have the AI capability on your local computer, all the software products you use for it, not to mention the operating system itself, will force you to put everything in the cloud anyway. There it will be indexed and made searchable by any company that has any sort of business relationship with the OS company or with any of your other software vendors. The data will be used to train LLMs which, having been trained on your private information, including tax return
What is "AI-Enhanced hardware"? (Score:3, Insightful)
Simple question.
Everything has a GPU. Beyond that, what are we talking about? Just more GPU, or is there actually something meaningful in mind here?
Re:What is "AI-Enhanced hardware"? (Score:5, Insightful)
Usually they're talking about an NPU integrated into the CPU/SoC. Yes, dGPUs typically have more TOPS than an NPU, but there may be some circumstances where you want an NPU on-package for lower latency. Plus it's a buzzword thing that certain parties like Qualcomm and Microsoft are pushing on OEMs.
Re:What is "AI-Enhanced hardware"? (Score:5, Insightful)
Thanks. Useful answer.
Now, I go to Wikipedia and read about NPUs; this leads to "AI accelerator." There I learn that these devices span a wide spectrum of circuit designs, and that "it is an emerging technology without a dominant design." That latter part, no "dominant design," is a huge red flag. The odds are that whatever you buy today is a throwaway: application-specific circuits that won't be supported in the near future as the designers muddle through the evolution of new devices.
This stuff belongs on a plug-in device. That is how PCs have always solved this kind of problem: when the "new thing" is too young to be soldered to the motherboard or integrated into the CPU, we put it on a card, a USB device, or some other attachment.
The push on OEMs to integrate "something" looks like extremely premature optimization: doomed to failure, in other words. Whatever half-baked, throw-away NPU stuff they manage to foist onto people today will be so much dead silicon.
Re: (Score:2)
That might be nice if you can fit the whole model onto the accelerator, but if not, you need as much bandwidth between it and the CPU and memory as possible.
Re: (Score:2)
Because GPUs don't already move massive amounts of data across the PCIe bus? That's a solved problem.
Seriously, if a PC user is saturating PCIe, then they bought inadequate hardware for the task they're performing and should be using server hardware that sports more PCIe lanes for more throughput.
Re: (Score:2)
PCIe doesn't necessarily help you overcome memory constraints. It's a lot easier to lean on system RAM in those circumstances, though slow main memory brings its own problems.
Re: What is "AI-Enhanced hardware"? (Score:2)
You continue to underestimate the bandwidth limitations. NVLink is way faster than PCIe 5/6, and is still the bottleneck.
Re: (Score:3)
Just an FYI, but most of these NPUs are aimed at INT4/8/16 compute performance. As long as a common programming interface supports them in the major AI frameworks, you'll be fine. They'll go obsolete over time the same way any CPU does (it's slow compared to future generations), but it's not like an ASIC that supports only a single algorithm and risks being totally useless in a few years.
Re:What is "AI-Enhanced hardware"? (Score:4, Interesting)
GPU compute? Doesn't count, regardless of how it scores on the tasks the NPU is dedicated to. CPU instructions aimed at vector operations or the types of floats that the 'AI' types prefer? Doesn't qualify, because reasons. At least publicly, there's not even a requirement that the 'NPU' be better at its job than either CPU or GPU compute (I'd assume, in practice, it's at least not worse, since there's no incentive for the chip vendor to do something less efficient than copy-pasting whatever existing component is closest to being right, presumably either the FPU or part of the GPU). If the 'AI' task ends up on the CPU you aren't 'AI-enhanced'; but if firing up the NPU forces the CPU cores to throttle to remain within the TDP target, or starves the GPU of memory bandwidth, or the like, that's not against the rules.
I'm not sure whether there's actually some good-faith technical reason, and some current or near-future workload was seen as impossible or impractically inefficient without specific new hardware (though, even in that case, it's not clear why that would mean 'NPU' rather than "has to be able to do X without using more than whatever percent of total CPU or GPU compute resources, however you feel like doing it"); or whether it's mostly about Microsoft wanting to sidestep Nvidia's high ground on GPU compute, and Intel's tendency to introduce new CPU instructions, by mandating a separately exposed peripheral that answers to Microsoft's requirements and that Qualcomm (and potentially other ARM vendors in the future), AMD, and Intel could all implement; or if there's an even more shameless desire to push some PC refreshes.
I'm also not sure how well they'll be able to stick to it. I assume that Nvidia is...deeply unimpressed...by the fact that anything without an NPU is not "AI-enhanced" even if it's got a 4090's worth of GPU compute; and all of Intel's marketing materials that talk about 'AI' performance (even on client systems, ignoring the stuff aimed at workstations and datacenters) acknowledge that they include an NPU that ticks Microsoft's box, but then go on to talk up "Platform TOPS," which are the ones provided by everything that isn't the NPU and which totally make the system's numbers bigger. That doesn't sound like deep commitment to the concept. The...somewhat rocky...launch of "Copilot Plus" PCs (good work on 'Recall', guys!) may not help; nor will the ongoing ambiguity about whether "AI-enhanced" is supposed to be some sort of binary thing, where the user is supposed to just look for the marketing sticker and buy whatever has it, or whether it's supposed to be an actual performance number where some systems are better than others. If it's supposed to be a binary thing, Microsoft has basically committed to not doing anything 'AI-Enhanced' (at least on the client) that won't work on whatever NPU is weakest of the launch generation, at least until those systems are EOL or until they come up with an even sillier marketing sticker; but silicon vendors and OEMs aren't going to like "they're all basically interchangeable; just buy whatever," since they spend a lot of time trying to one-up one another on performance.
Re: (Score:2)
While there are some technical aspects to it, it's largely a branding exercise in practice. A bare-bones 'NPU' that qualifies as "AI enhanced" is unlikely to deliver noticeably different behavior than what's already on existing systems.
To the extent people have been messing with AI, they've been doing so without "AI enhanced" hardware, so they will understandably scratch their heads to find out that somehow their current AI excursions have been done without "AI enhancement". Either they've been doing the relatively
Re: (Score:2)
If you open up the "Performance" tab it shows CPU and GPU utilization and memory capacity usage; but not package power or memory bandwidth usage; so if you fire up some relatively lightweight 'AI' thing, doing background removal on a video call or something, you'll be able to show it eating 25% of the CPU on the old-and-busted laptop
Re:What is "AI-Enhanced hardware"? (Score:4, Interesting)
Everything has a GPU. Beyond that, what are we talking about? Just more GPU, or is there actually something meaningful in mind here?
See this article [forbes.com] at Forbes for a quick tech-light view. Basically you need sufficient CPU and storage. The real gear is a neural processing unit which can do 40 TOPS (where I think an op is a fused multiply-add, probably with 32-bit floats). You also need sufficient video RAM to feed the GPU/NPU. You need something like 2-4 bytes per parameter so a 5 billion parameter model needs 10-20 GB to run locally.
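For a rough sense of those numbers, here is a minimal back-of-the-envelope sketch in Python (the function name is just an illustrative placeholder, and the bytes-per-parameter figures follow the assumption above; real memory use also includes activations, KV cache, and runtime overhead):

```python
# Rough estimate of the memory needed just to hold a model's weights:
# parameters x bytes per parameter. Illustrative only.

def model_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate weight-storage footprint in gigabytes."""
    return num_params * bytes_per_param / 1e9

if __name__ == "__main__":
    params = 5e9  # a 5-billion-parameter model, as in the example above
    for label, bpp in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("4-bit", 0.5)]:
        print(f"{label:>9}: ~{model_memory_gb(params, bpp):.1f} GB for weights")
```

At 2-4 bytes per parameter that lands on the 10-20 GB figure above; the quantized formats discussed further down the thread shrink it considerably.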
That's the key part: being able to run a model on the PC rather than in the cloud. I think the promise is that you'll get much lower-latency responses. We have been changing our minds between "fat clients are great!" and "central server farms are great!" for the last 40 years. I think this is just another oscillation.
It used to be you needed a super-de-duper PC to edit video, now any phone can do it. In a few years I expect you'll find every PC is capable of running models tuned to run on laptops and it will be no big whoop.
Re: (Score:2)
You need something like 2-4 bytes per parameter so a 5 billion parameter model needs 10-20 GB to run locally.
OK, but GPUs with 16GB+ are not exactly scarce now and 12GB or so is extremely common. So basically people with even slightly serious PCs (mine is around $1100 all in and I have a 4060 16GB) already have the hardware to do this. Meanwhile the people who have cheaper ones were already trying to save money, and won't want to spend more on a component they don't need. They also don't have the RAM, budget systems still come with 8GB.
Re: (Score:2)
Um, huh? Running FP32 models is rarely ever necessary, and even FP16 is overkill in most situations. I run Mixtral quantized to just 3-4 *bits* per param.
That said, the better the models get, the harder it gets to quantize them well unless it was done with quantization-aware training. Anyway, we're probably going to jump to ternary eventually... one trit per param.
But yeah... VRAM is king in AI. And if you can't fit it all onto one GPU/NPU, then its bandwidth with whatever it's sharing with that's the l
Re: (Score:2)
See this article at Forbes for a quick tech-light view. Basically you need sufficient CPU and storage. The real gear is a neural processing unit which can do 40 TOPS (where I think an op is a fused multiply-add, probably with 32-bit floats). You also need sufficient video RAM to feed the GPU/NPU. You need something like 2-4 bytes per parameter so a 5 billion parameter model needs 10-20 GB to run locally.
With LLMs it is something like 5 bits per parameter or less. Anything more than that and you are wasting resources for imperceptible gain. A 5B model should require about 3GB or less.
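As a quick sanity check on that figure (a rough calculation that ignores activation memory and runtime overhead):

$$5 \times 10^{9}\ \text{parameters} \times \frac{5\ \text{bits}}{8\ \text{bits/byte}} = 3.125 \times 10^{9}\ \text{bytes} \approx 3.1\ \text{GB}$$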
While I think NPUs might be really useful for other applications, when it comes to LLMs some CPUs can already saturate the available bandwidth. Sure, NPUs or specialized instructions (e.g. AMX) can do it far cheaper and at far lower power, but it isn't going to win much in the way of performance because you will still be limited
Re: (Score:2)
Similar to the FPU back in the day (Score:5, Insightful)
A recent poll on TechPowerUp revealed that an overwhelming majority of PC users are not interested in paying extra for hardware with AI capabilities
Much like the majority declined to pay extra for an FPU back in the day. Then one day the FPU was just permanently packaged with the CPU. Same thing with ML/AI acceleration, as we currently see with Apple Silicon CPUs. It'll just become a standard component of the SoC.
Re: Similar to the FPU back in the day (Score:2)
Except in this case nobody wants this garbage on their PCs because all it adds is annoying evil trash features that users hate, not better graphics.
Re: (Score:2)
Mark my words one day there will be a systemd process that runs on the NPU.
Re: (Score:2)
You know, it's funny... so, LLMs switched to tokens because there are correlations between character sequences, and if you output one token at a time, each representing multiple characters, you get that much more net throughput.
But now we have speculative prediction (a lightweight model quickly predicts a speculative sequence of many tokens, then the main model simultaneously validates them, finds where it went astray, and continues prediction from there). And the more correlated the outputs are, the more s
Re: (Score:2)
(To be clear, there's nothing about the byte-vs-token distinction that prevents one from doing such a thing as-is. It just feels a lot more natural for compute if you're working directly in bytes and can skip tokenization / detokenization stages, and avoids the need to optimize a set of tokens to a specific task)
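For anyone curious, the speculative-decoding loop described a couple of comments up looks roughly like this minimal Python sketch (the `draft_next` and `target_next` callables are hypothetical stand-ins for the small and large models; a real implementation validates the whole draft in one batched forward pass and samples rather than decoding greedily):

```python
# Toy speculative decoding: a cheap "draft" model proposes k tokens, the
# expensive "target" model checks them, and generation resumes from the
# first disagreement. In a real batched implementation, more agreement
# between the two models means more tokens accepted per target pass.

from typing import Callable, List

def speculative_decode(prompt: List[int],
                       draft_next: Callable[[List[int]], int],
                       target_next: Callable[[List[int]], int],
                       k: int = 4,
                       max_new_tokens: int = 32) -> List[int]:
    out = list(prompt)
    while len(out) - len(prompt) < max_new_tokens:
        # 1. Draft model cheaply proposes k candidate tokens.
        ctx, draft = list(out), []
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2. Target model verifies the draft (done per-token here for clarity;
        #    real systems check all k positions in a single batched pass).
        for i, t in enumerate(draft):
            expected = target_next(out + draft[:i])
            if expected != t:
                out.append(expected)  # keep the target's token, discard the rest
                break
            out.append(t)
    return out[:len(prompt) + max_new_tokens]

# Example with trivial stand-in "models" that just count upward:
print(speculative_decode([1, 2, 3],
                         draft_next=lambda ctx: ctx[-1] + 1,
                         target_next=lambda ctx: ctx[-1] + 1,
                         max_new_tokens=5))  # -> [1, 2, 3, 4, 5, 6, 7, 8]
```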
Re: (Score:2)
And then the NPU will be required if you want any system logging!
Re: (Score:2)
It's better graphics too.
https://www.nvidia.com/en-us/g... [nvidia.com]
https://www.nvidia.com/en-us/s... [nvidia.com]
https://blogs.nvidia.com/blog/... [nvidia.com]
Re: (Score:2)
Right, but we already have that stuff in our GPUs. And in my case, in my display; I use a 43" 4k TV as a monitor. (The backlight could be better but it's otherwise quite enjoyable.) I could do some upscaling on the GPU and some more on the display if I wanted. Why would I need my CPU to get involved? It might make sense for people buying a CPU with integrated graphics, as it would leave the GPU hardware on the processor free to just render, and then some other hardware could do the upscaling. But in that ca
Re: (Score:2)
It might make sense for people buying a CPU with integrated graphics
Which is a pretty common scenario, to the lament of game developers worldwide.
DLSS 3 (Score:2)
NVIDIA classifies their DLSS 3 as "AI" and most people who can't afford expensive cards want that feature to increase frame-rates. It's fine if you're not playing competitive fast-paced games.
https://www.nvidia.com/en-us/g... [nvidia.com]
AI is practically a buzzword for things that are simply computer algorithms. So when asking people if they want "AI", you need to be very specific about what you mean. This is why the polling was useless: people likely had no clue what "AI" means or what features it brings.
Re: (Score:2)
DLSS 3 frame generation and upscaling technology for games? Especially if you can't afford high-end hardware to run demanding games at good framerates. People seem to love AI-enhanced movies, so why not AI-enhanced game frames?
https://www.nvidia.com/en-us/g... [nvidia.com]
Re: (Score:2)
My thoughts exactly. First thing I thought is "It'll be on-die soon enough."
Discrete? Why bother? That should sell as well as discrete TPMs.
Re:Similar to the FPU back in the day (Score:4, Interesting)
Highly successful. To the point where every mobile x64 CPU currently does it, even when shipped on laptops with extra GPUs.
Re: (Score:2)
Ah, by FPU you mean an NPU, right? A numeric processing unit. Ahem.
Re: (Score:2)
FPU had Quake to push it.
I'm not entirely sure if they can make another Quake-like event, especially one that requires an NPU.
Re: (Score:3)
FPU had Quake to push it. I'm not entirely sure if they can make another Quake-like event, especially one that requires an NPU.
Apple Watches have an NPU. At my local university multiple teams are working on voice based apps that process everything on device using small ML models. Keeps everything nice and private.
I think we are seriously underestimating the utility of an ML model accelerator, much like we underestimated the utility of MMX/SSE/AVX, the latter having utility beyond image processing and computer graphics.
Re: (Score:2)
Except an FPU is very useful generally, even for those not doing complex mathematics. Of course, these days floating point is often ingrained in the CPU and not a co-processor. An NPU has only specific and niche uses. Sort of like touch screens on a laptop or monitor - useful for a few people, pointless for most.
I remember an early SunOS desktop that had some small curved corners on windows (I think it was NeWS). On the workstations without floating point support you could actually see the corners being
If you prefer, like MMX back in the day (Score:4, Interesting)
Except an FPU is very useful generally, even for those not doing complex mathematics.
Now, because everyone has an FPU. But back in the day people largely thought: don't need it, software emulation is good enough for me. AutoCAD users, some Excel users, and other power users were about the only people who thought an FPU worthwhile.
Of course, these days floating point is often ingrained in the CPU and not a co-processor.
Well, ever since the Pentium, the 586. For the 486 we had the DX with an FPU on the package and the SX without.
An NPU has only specific and niche uses.
Apple is proving otherwise. All Apple Silicon based devices, even watches, have an NPU. The watches run small ML models that allow some local speech analysis, keeping everything local and private. On iPhones the NPU is used in the photo pipeline.
The Microsoft/Qualcomm ARM-based CPUs will also include an NPU, so ARM-based Windows PCs will probably come with one.
On the x86-64 side I think the PC standard will include an NPU as well in the near future. If you prefer, rather than comparing the NPU to the FPU, how about comparing it to MMX? Few needed it. However, once it became ubiquitous, developers felt comfortable utilizing it, and its use spread beyond the originally intended image processing and computer graphics. Something similar will happen with NPUs. They will go far beyond voice and photo processing.
In short, I think we are underestimating the utility of NPUs, as we did with MMX, SSE, etc.
Re: (Score:2)
Also, the art of optimizing floating point in software is lost, and almost no one bothers with fixed point anymore. You'll never see that Doom style of software again, where they made an underpowered CPU do amazing things by knowing how the math works. Today you just rely on super-fast computers that can make even the dumbest code look fast.
Floating point is ubiquitous. Everything uses it, non-mathematical stuff uses it, code without any graphics uses it; programmers don't know how to not use it. When progr
Re: (Score:2)
Also, the art of optimizing floating point in software is lost
Well, it moved from the FPU stack to SSE3/AVX2. :-)
Is AI a hardware thing? (Score:2)
Maybe this direction of thinking is geared toward Apple and the idea that extra hardware is needed for AI. In the recent past, consumers have not paid directly for AI; they expect it as improvements in services, much of it cloud-based. If anything, the question should be whether consumers would be willing to pay extra for AI software, for example AI-enhanced photo editing.
extra cost, inaccurate results (Score:5, Interesting)
Why would I pay extra for hardware to support features that do not work well?
The AI hype-machine is in full swing but the products have limited value. AI needs to be able to check its own work such that I can trust the output without having to double-check the AI. I don't have a personal use case for LLMs or for generating images from text and so I don't need to accelerate these things.
Re: (Score:2)
Let me give you a random example, out of millions: ctrl-f on steroids.
First you have basic search. Find an exact match of text. Maybe case insensitive vs. case sensitive, but that's it.
Okay, too crude, not enough? Well, we have regex search. Now you can add some flexibility. But that's still going to fail a bunch for any scenario you didn't precisely imagine and spell out. If you searched for "World War" but the text includes "WWII", and you didn't think to special case that, it's not going to find it
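A minimal sketch of what that "ctrl-f on steroids" could look like, assuming some embedding model is available (`embed()` below is a hypothetical placeholder; in practice it would be a small neural sentence-embedding model, which is exactly the kind of workload an NPU or GPU accelerates):

```python
# Toy semantic search: rank passages by cosine similarity to the query's
# embedding, so a query like "World War" could match a passage mentioning
# "WWII" without any exact or regex match. embed() is a placeholder; with
# this stub the scores are meaningless -- the point is the pipeline shape.

import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: a real version would run a small sentence-embedding network."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=256)
    return v / np.linalg.norm(v)

def semantic_search(query: str, passages: list[str], top_k: int = 3):
    q = embed(query)
    scored = [(float(np.dot(q, embed(p))), p) for p in passages]
    return sorted(scored, reverse=True)[:top_k]

if __name__ == "__main__":
    docs = ["WWII ended in 1945.",
            "The cat sat on the mat.",
            "Regex is a pattern-matching language."]
    for score, doc in semantic_search("World War", docs):
        print(f"{score:+.3f}  {doc}")
```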
Re: (Score:2)
Many AI things are already showing up in locally-run tools (though in some cases needing a cloud connection). Let's give an example: Photoshop's generative fill.
What's your most common graphical editing task? Mine is probably removing something from an image. How do you normally do that? Maybe some old-school Markov-chain-based heal. Probably a lot of clone tool work. Maybe some hand painting. You know what defines this process? "Slow and not very high quality". But you can just use a generative era
Re: extra cost, inaccurate results (Score:2)
Actually, here's one even more basic than that: segmentation. Aka, the select tool. Hard to get more fundamental than that in graphics editing, right? And also: *slow*.
But with NN segmentation tools, you can just type what you want, and voila, it's selected. Even complex descriptions. You can even auto-feather, with varying thicknesses based on context.
What's your next headache normally? Probably that the edges are tinged by whatever context the selected object was previously near. Guess what? GenAI can
Re: (Score:2)
Which parts of what you are saying require specialized hardware? How do I know the software I'm using can even take advantage of my more expensive computer?
Re: (Score:2)
Everything I'm describing is AI. It runs fast on an NPU or a high-end GPU, and very slowly on a CPU.
Re: (Score:2)
Why would I pay extra for hardware to support features that do not work well?
Wrong question. The actual question is "Why would Microsoft not want to extract maximum profits from gullible consumers?"
Re: extra cost, inaccurate results (Score:2)
"AI image generation". Stupid AI autocorrect.
Re: (Score:2)
Says a person who doesn't realize the difference between Markov chains and Transformers. *eyeroll*
Re: (Score:2)
Because Markov Chains was a boring movie without a single explosion or merchandise tie-in!
I prefer longer battery life... (Score:3)
I see no gain in "AI" features... ChatGPT is useless for more advanced questions...
Re: (Score:2)
If you enjoy and benefit from seeing more context and interesting side-items, ChatGPT is useless for simple questions as well.
Re: (Score:2)
https://www.nvidia.com/en-us/g... [nvidia.com]
People seem to love DLSS if they can't afford $1500 cards to render good looking games at high refresh rates.
I don't need one, I have a GPU. (Score:3)
I'm not against having the hardware at all, I'm not mad at it, I just am among those who won't pay extra for it.
I have a PC because I want a single box that does many things. One of those things is gaming, so I have a discrete GPU. It is modern, so it was designed to be good at "AI" tasks. Therefore I don't need my CPU to have features for that, especially since I have an Nvidia GPU and CUDA is the standard interface for implementing this type of processing. It doesn't matter if you have an Intel or AMD CPU, neither of them is going to provide a CUDA interface. Eventually CUDA will become irrelevant as tools support the other vendors' APIs, but for now, it's extremely relevant.
Maybe it will eventually make sense for my CPU to have hardware specifically for this purpose, but right now it does not. I would rather spend the money and the die area on cache. That will actually help accelerate my workloads.
Re: (Score:3)
But, but...Those annoying popups that said your perfectly good working hardware was incompatible with Windows 11?
I'd be very surprised if an NPU co-processor from Intel/Qualcomm/AMD wasn't a mandatory requirement of Windows 12. i.e. Hardware assisted Copilot. Heck, they've even got a special button on the keyboard already.
Re: (Score:2)
I solved that problem by switching to Linux full-time when Windows 8 was no longer viable. I didn't even mess with 10, although my laptop came with it. I was getting crashes of the AMD graphics driver out of the box, and I solved that problem with Linux; the OSS driver has been flawless. I switched my desktop a little later. If it won't run on Linux, or at least in a Windows VM, I don't need it.
What's interesting about that (as there is little to nothing about me switching from dual booting to not dual boo
Re: (Score:2)
Your CPU almost certainly has some sort of single-instruction, multiple-data processing unit in it: NEON or SSE, AVX, whatever. "AI" units are just the next version of SIMD. Marketing them as "AI accelerators" might be a mistake. Apple didn't really; they called it a neural processing unit, used it to touch up your face on the webcam (and do dictation, OCR, image upscaling, photo editing, etc.), and people complain that their five-year-old laptop can't do those things, not that it doesn't have AI.
Re: (Score:2)
Of course my processor has a vector processor in it, it's not an antique. NPUs are different either in that they are actually different in architecture and/or in that they have a lot more functional units and some way to dispatch operations to all of those units. But the really relevant thing here is the cost of adding that hardware, which for serious PCs is redundant and for budget PCs is expensive.
Re: (Score:2)
Maybe it will eventually make sense for my CPU to have hardware specifically for this purpose, but right now it does not. I would rather spend the money and the die area on cache. That will actually help accelerate my workloads.
Same here. That is why I have a 7800X3D. Much more useful. Show me a real application I use daily for a significant amount of time that massively benefits from "AI hardware" and I may be convinced. Or not.
Re: (Score:2)
GPU != good at AI. There's a reason why there's a massive, and I mean MASSIVE, performance difference between a GTX 1080 Ti and an RTX 2060 when it comes to AI-related tasks such as using Topaz Labs tools for image processing, despite the former being significantly faster for gaming.
Maybe it will eventually make sense for my CPU to have hardware specifically for this purpose, but right now it does not.
Even something as simple as a Teams call can already benefit from the hardware. Just like you benefit from having a hardware video decoder even though you don't do professional video editing. You don't know how the hardware gets us
Why does it even run downloaded applications? (Score:2)
How hard is this to beat? Really? I mean, it must be really hard, right?
But I just don't know why it's so hard to set up critical networks so that they just don't run software downloaded from the internet.
Re: (Score:2)
Sorry, I'm lost.
AI Hardware (Score:2)
If the AI hardware doesn't accelerate PyTorch or TensorFlow, it's dead on arrival.
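As a rough illustration of what "accelerating PyTorch" means in practice, here is a minimal device-selection sketch (assuming PyTorch is installed; the point is that if a vendor's NPU isn't exposed through a backend the framework knows about, existing AI software will simply ignore it and fall back to the CPU):

```python
# Minimal sketch: use whatever accelerator PyTorch can see, else fall back to CPU.
# Hardware that no framework backend exposes never gets used by this code path.

import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():          # NVIDIA GPUs (or ROCm builds on AMD)
        return torch.device("cuda")
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return torch.device("mps")         # Apple Silicon GPU via Metal
    return torch.device("cpu")

if __name__ == "__main__":
    device = pick_device()
    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # the matmul runs on whichever backend was selected
    print(f"Ran a 1024x1024 matmul on: {device}")
```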
Re: (Score:2)
Your expensive AI hardware will be used to show you more relevant ads. You will love them so much that you will stop watching content.
Re: (Score:2)
Can't tell if sarcastic. A big feature is supposed to be interpretation, so look forward to asking it to help find your insurance records, then getting "helpful" offerings and recommendations in the sort of chipper tone you'd expect from nu-Clippy.
Re: (Score:2)
That creates a chicken-and-egg problem. People won't want to update the libraries to support that hardware unless they have the hardware, but they won't want the hardware if the libraries don't support it.
The only way out of that is official support from the vendor, or open-source developers who implement it themselves. And those devs will need the hardware for that: hardware that doesn't support those libraries yet.
Then the problem becomes that AI software is mostly memory-bandwidth bound rather than compu
No thanks (Score:2)
No thanks, I can take care of myself. I don't need AI spying on me and showing me content in between ads.
It's the new TPM (Score:2)
And 16% do not understand the question... (Score:2)
Seriously, that is just stuff to make hardware more expensive but not more useful.
Re: (Score:2)
Well, not necessarily more expensive, as "AI enabled" will usually just end up meaning "any processor made in 2024", at least until marketing gets bored of talking it up and doesn't bother saying it anymore (like floating point units).
What to use it for? (Score:3)
If it's hardware, or my data sent to the cloud: I'll spring for the hardware.
If it's hardware plus a free AI service: I'll spring for the hardware.
If it's hardware plus a paid AI service: no thanks.
If the choice is an AI service with the hardware, or no AI service at all, I'll take no hardware and no AI service.
It's a marginal benefit. If I have a choice and it saves me money or privacy, I'll take it. But if I can opt out, I'm opting out.
I don't feel comfortable with it yet (Score:2)
What Are The AI Features? (Score:2)
The only reason to have specialized AI hardware is if you're going to train large language models, which is basically what most of the current AI is based on. That is, if you're developing the AI yourself to bring those "features" to someone else.
Otherwise, what the heck are those AI features?
Maybe the question should be: are you willing to train LLMs for us? In which case, you are not only giving your data away for free, you have also paid them to take it from you.
...that we'll never use (Score:2)
Maybe someday... (Score:2)
AI will actually be useful, but today's AI is being deployed long before it's ready
I would pay extra to avoid all of the half-baked AI crap that is headed our way
Still a solution in search of a problem (Score:5, Informative)
I remember when video editing on a PC was still a nascent technology. Ever used Adobe Premiere on Windows 98, on a Pentium III? The first time I edited video, I did it on a computer with fewer system resources than a Raspberry Pi. And it was agonizing. On a good day, you got real-time playback of your downscaled proxy clips. 3D effects and color correction were overnight renders, always.
And then, we got these:
Real-time playback of everything? Real-time 3D effects and transitions? Real-time MPEG-2 renders?! It was the sort of hardware addition that made pretty much every video editor (who wasn't still stuck using Avid as a proxy for a film workflow) say "shut up and take my money!!". It's easy to take it for granted now, since cell phones have NLEs if you really want to edit video that way, and even low end GeForce cards can render 1080p x265 at 4x or 5x realtime with CUDA accelerated effects...but in 2001, these things were absolute magic.
The reason they were seen that way is because the accelerator came after the task was established. Video editing on a P3 was doable, but limited and slow. Accelerator cards like the RT2500 (and its contemporaries from Canopus and a few others) were worth their price tag because they added so much additional performance and functionality, that they sold themselves.
AI is still, like blockchain, a solution in search of a problem for most. There are certainly use cases; the ability to extract vocals from album edits of songs is incredibly helpful. ChatGPT and the other chatbots get some deserved criticism for being wrong lots of the time, but they are half decent for some entry-level tasks. I'm sure there's AI at banks already, checking for fraud. Perhaps the best example is some of the photo tagging and matching that's seen in Google Photos...but none of these are the sort of things that are more useful on the consumer market than just using the website, which runs on anything. More to the point, those web-based tasks aren't going to be able to usefully leverage a local NPU anyway. Even Microsoft could barely make a use case for Copilot+, and both they and Google are running up against privacy concerns that either hamstring the usefulness of the service, or make it so generic that it's no better than what can be done without an NPU.
If AI is to sell hardware for companies other than Nvidia, and the goal is to sell it to companies other than Google, OpenAI, and Mastercard, then the first step really needs to be establishing a problem that the customer cares about solving. AI still hasn't managed that, and without it, it'll be just like blockchain before it: a solution in search of a problem. To AI's credit, it *does* have at least *some* use cases that are beneficial to end users...but beneficial enough to go out and buy dedicated NPUs? I'm unconvinced, and it looks like the survey indicates that my skepticism is far from unique.
Missed The Link (Score:2)
Matrox RT2500 [tomshardware.com]
They'll just double down (Score:2)
Whenever you tell tech companies that nobody likes their idea, they usually just double down and try to force people to accept it... because how can they be wrong?
Re: 99% of people didn't want cars either (Score:3, Informative)
Dumb comment.
Re: (Score:3)
That's generally true, but I don't think it applies to companies racing to push obviously nonsensical features on hardware just because of hype and novelty. Which is why your analogy is a reductio ad absurdum, hence dumb.
Re: (Score:2)
Well, if they can stop the shit from hallucinating, then maybe there will be a use case. Until then, they're releasing this half-baked because they know they need the training data from in-the-wild usage to fully bake it. QA and data: they need both. And some of us are going to volunteer for them?
Just say no. Or get paid for using their incomplete product to help them out. Definitely don't pay them for it.
That's why this is getting shoved down our throats like Reels on Instagram. It's all about their business
99% of adults, don’t want a toy. (Score:2)
What with these automobile contraptions!?!?!? My horse doesn't need gas, can make new ones every few years, and is a good friend too. Why would I want all the headaches with a car?!
Selling the 20MPH automobile upgrade to the 10MPH buggy man back in the day, was probably a lot like going from a handjob to a blowjob. Probably only took one good experience behind the wheel.
In contrast, AI is offering up handjobs to blowjob fans. Laughably childish doesn’t even begin to describe the premature sales pitch.
Re: (Score:3)
Clearly you've never had a good handjob.
And no, I'm not offering.
Re: (Score:2)
The report is stating that consumers do not want to pay more for AI hardware right now. No part of the report questions the feasibility of AI or whether consumers may be willing to pay more in the future.
We watched the average premium smartphone price increase considerably over the years as they put features in phones no one asked for. We watched them make thinner and thinner hardware, to the point of stupidity, with a soldering addiction eradicating the concept of upgrading hardware later, or with a more reasonably priced third party.
Then we started watching the collusive behavior set in with similar design features spreading like cancer across manufacturers.
Perhaps we need a report on why consumers assume they
Re: (Score:2)
But in many parts of the world, mass transit involves getting between cities as well, or from city to nature and back, with plenty of trips to suburban fringes. I had friends who took a bicycle from a 50 person village surrounded by woodland, to a train station, and from there to the university, work, or the major cities. Visiting Japan, a long way from the Tokyo metropolis and in a lower population density area, children were taking the train to school. Whereas in 95% of America, if you don't have acces
Re:No worries (Score:5, Interesting)
On the plus side, someone may even find a good use for the hardware. Eventually.
Last time we did that, it resulted in gamers paying insane prices for 3D cards and a world dealing with many a $hitcoin operation sucking on power grids like a cancer.
I wouldn’t be too optimistic.
Re: (Score:2)
One can use 3D cards for neutron diffusion calculations, for example, as they speed up work very nicely.
Re: (Score:2)
Everyone already does :)