


Five Nvidia CUDA-Enabled Apps Tested
crazipper writes "Much fuss has been made about Nvidia's CUDA technology and its general-purpose computing potential. Now, in 2009, a steady stream of launches from third-party software developers sees CUDA gaining traction in the mainstream. Tom's Hardware takes five of the most interesting desktop apps with CUDA support and compares the speed-up yielded by a pair of mainstream GPUs versus a CPU alone. Not surprisingly, depending on the workload you throw at your GPU, you'll see results ranging from average to downright impressive."
Nice, but... (Score:2, Funny)
post.push("First!");
All fine and dandy, but...does it run Linux?
Re:Nice, but... (Score:5, Informative)
Re: (Score:2, Informative)
Queue mip-mapped, 8xAA, subpixel rendered, fogged, PhysX enhanced flyby of a 'Whoosh' passing over your head.
The question was not whether CUDA runs _on_ Linux, but whether the GPU itself can run Linux.
I can imagine that, if we had ever been given all the specs, a multi-function DSP card like IBM's Mwave could. It would probably even be able to read aloud console messages (besides being a graphics card and modem, it's also a sound card).
Re: (Score:2)
Queue mip-mapped, 8xAA, subpixel rendered, fogged, PhysX enhanced flyby of a 'Whoosh' passing over your head.
What, this thing runs on AA batteries? Sweet.
And as a side note, unless you were talking about a long line of whooshes, the word you were looking for is "cue".
Re: (Score:2)
Re:Nice, but... (Score:5, Informative)
I know you are trolling, but actually CUDA applications work better on Linux than on Windows. If you run a CUDA kernel on Windows that lasts longer than 5-6 seconds, your system will hang. The same will happen on Linux, but there you can just disable the X server, or have one card providing your graphical display and another as your parallel co-processor.
Re: (Score:1)
Are you certain this is the case?
I'm curious because ATI/AMD appear to have solved that problem, in that I can run the Folding@Home GPU client and my displays still run. I'm running Windows 7 with Aero, so it's hitting the GPU not the CPU for my displays.
Re:Nice, but... (Score:4, Informative)
Folding@Home runs its computations in short bursts. gustgr is talking about a single computation kernel that takes more than 5-6 seconds.
Re: (Score:2)
Thanks for the clarification.
Re:Nice, but... (Score:4, Informative)
He's not talking about how long the app itself runs, but how long each subroutine that runs on the GPU takes before returning something back to the app on the CPU side. If that subroutine takes too long to complete, Windows gets unhappy. I don't remember if it was a watchdog-timer thing or a bus-locking thing or something else. I don't even know if it's been fixed or not.
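If it is the watchdog, the standard workaround is to keep each individual kernel launch short. Roughly what that looks like (a minimal sketch; the kernel and chunk size here are made up, just to show many short launches instead of one long one):

    #include <cuda_runtime.h>

    // Hypothetical stand-in for the real work: process one chunk per launch.
    __global__ void step_kernel(float *data, int offset, int count)
    {
        int i = offset + blockIdx.x * blockDim.x + threadIdx.x;
        if (i < offset + count)
            data[i] = data[i] * data[i] + 1.0f;
    }

    // Instead of one kernel call that grinds away for 30 seconds (and trips
    // the display driver's watchdog), issue many short launches that each
    // return well under the ~5 second limit.
    void run_in_chunks(float *d_data, int n)
    {
        const int chunk = 1 << 20;                 // elements per launch
        const int threads = 256;
        for (int off = 0; off < n; off += chunk) {
            int count = (n - off < chunk) ? (n - off) : chunk;
            int blocks = (count + threads - 1) / threads;
            step_kernel<<<blocks, threads>>>(d_data, off, count);
            cudaThreadSynchronize();               // wait for this launch before issuing the next
        }
    }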
Re: (Score:2)
Thanks for the clarification, as well.
Re: (Score:3, Informative)
Presumably it's some kind of issue with CUDA because running code on ATI GPUs does not seem to have this problem. Also, multiple GPUs are supported by apps like Elcomsoft's Wireless Password Recovery on Windows.
It should be fixable anyway, since modern GPUs are massively parallel and desktop stuff needs only a fraction of the available processing, even if it's just a case of setting a few stream processors aside.
Re: (Score:3, Informative)
I know you are trolling...
No, he's joking. Stop crying troll when there's not even a hint of troll, for God's sake.
...but actually CUDA applications work better on Linux than on Windows.
Read carefully. He said "does it run Linux?", not "does it run on Linux?". Overused slashdot meme it might be, but the joke still went miles above your head.
Re: (Score:2)
Re:Nice, but... (Score:4, Funny)
Well, everywhere else in the world, Linux runs the CUDA Toolkit [nvidia.com], so I can imagine that in Soviet Russia, a Beowulf cluster of Nvidia cards run Linux.
Re:Nice, but... (Score:5, Insightful)
Does it matter? Linux is not anywhere close to the target market,
Linux support for CUDA matters hugely; Linux boxes are head and shoulders above any other market for CUDA-based software. That's because Linux is the OS for supercomputing nowadays, and CUDA's biggest niche is exactly the kind of number crunching typically associated with supercomputer workloads.
In fact, these GPUs are yet another example of how there is nothing new under the sun. A GPU is very much like the vector processor of Cray-style supercomputing (when Cray was still alive that is) aka SIMD (single instruction, multiple data). [wikipedia.org]
Re: (Score:1)
Uhh...Cray [cray.com] is still very much alive. And doing vectors. And threads. And multicore. All long before Intel/AMD.
Re:Nice, but... (Score:5, Informative)
Uhh...Cray is still very much alive. And doing vectors. And threads. And multicore. All long before Intel/AMD.
Seymour Cray was killed by a speeding redneck in a trans-am in 1996.
The company currently known as Cray was formerly known as Tera, which bought the assets of Cray Research from SGI; SGI had acquired Cray Research after Seymour left to form Cray Computer, which is also defunct.
Seymour was never significantly involved in multi-core or multi-threaded processors or NUMA. In fact, he specifically avoided designs even hinting of that sort of complexity because he felt that simplicity in design made it easier to fully utilize the maximum performance of the hardware.
Re: (Score:2, Funny)
Seymour Cray was killed by a speeding redneck in a trans-am in 1996.
Well, at least it wasn't a speeding redneck in a 'cuda. ;)
Re: (Score:2)
You're making the mistake of equating a company's products with one person. It doesn't work that way.
No, YOU are making that mistake. It was quite clear from my original wording that I was talking about Seymour Cray.
I wrote: A GPU is very much like the vector processor of Cray-style supercomputing (when Cray was still alive that is)
Re: (Score:2)
No, you AC idiot, that has nothing to do with what I said. What I said was that the courts granted corporations human rights. They have both limited liability and the right to speech, versus being a regulated industry. If the government can demand to walk in and look at your books anytime they want, then they are less likely to pull Enron-style shit. For instance, coal mines are a "heavily regulated industry," meaning that the government can walk in anytime they want and see what is going on. Corporations were originally
MIMD (Score:2)
Re:Nice, but... (Score:5, Interesting)
In fact, these GPUs are yet another example of how there is nothing new under the sun. A GPU is very much like the vector processor of Cray-style supercomputing (when Cray was still alive that is) aka SIMD (single instruction, multiple data). [wikipedia.org]
Actually, not quite. The execution architecture in Nvidia's G80-series GPUs and onwards is actually SIMT, single instruction multiple threads. The not-so-subtle difference: in a SIMD vector architecture the application explicitly manages instruction-level divergence, which generally narrows the effective SIMD width of divergent paths down to 1, whereas in a SIMT architecture, when threads within a warp diverge, all of the threads on a given branch can be issued an instruction simultaneously while the threads on the other branch sit inactive for that cycle. This is transparent to the application. In Nvidia's latest architecture the warp size is still statically set at 32 threads, so you'll see performance penalties whenever threads within a warp diverge, proportional to the number of unique paths taken. Interestingly, the next iteration of the hardware is rumored to feature a thread scheduler capable of variable warp sizes, probably still with some lower bound, but this would bring the GPU much closer to the ideal "array of independently executing processing cores" that we have in modern CPUs, only with far more cores.
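A toy example of where the penalty comes from (hypothetical kernels, just for illustration): when the branch condition varies inside a 32-thread warp, the warp has to execute both paths; when whole warps agree on the branch, nothing gets serialized.

    #include <cuda_runtime.h>

    // Divergent: odd and even threads in the *same* warp take different
    // branches, so each warp runs both paths, masking off half its threads
    // each time.
    __global__ void divergent(float *out)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (threadIdx.x % 2 == 0)
            out[i] = sinf((float)i);
        else
            out[i] = cosf((float)i);
    }

    // Uniform per warp: the condition is constant across each group of 32
    // threads, so no warp ever has to execute both paths.
    __global__ void uniform_per_warp(float *out)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if ((threadIdx.x / 32) % 2 == 0)
            out[i] = sinf((float)i);
        else
            out[i] = cosf((float)i);
    }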
Re:Nice, but... (Score:4, Insightful)
Gamers, certainly, most likely have Windows systems. Workstation applications are likely a good chunk of Windows, with a slice of Mac, and some Linux.
Bulk crunching, though, which is where CUDA might make NVIDIA some real money, is overwhelmingly Linux based. Linux is, by a substantial margin, the obvious choice for big commodity clusters.
The war begins. (Score:2, Interesting)
With NVIDIA slowly pushing its way into the CPU market (CUDA is the first step; in a few years I wouldn't be surprised if Nvidia started developing processors) and Intel trying to cut into NVidia's GPU market share with Larrabee http://en.wikipedia.org/wiki/Larrabee_(GPU) [wikipedia.org], we'll see who can develop outside of their box faster. This is good news for AMD, since Intel will be more focused on Nvidia instead of being neck and neck with them in the processor market. Hey, maybe AMD will regain its power in the se
Re: (Score:2, Interesting)
It's going to be interesting to see how Larrabee and AMD's Fusion battle it out. With Larrabee, Intel is taking a tightly integrated approach. One can easily imagine that LRBni will be integrated into mainstream CPUs in the not-so-distant future, at which point Intel will argue that no one needs a GPU.
AMD, on the other hand, is taking the approach of (relatively) loosely coupled specialized processors: one, the CPU, for general-purpose/integer/branchy code, and the GPU for graphics (and HPC?).
Currently my
Re: (Score:2)
I'd honestly like to see the two work together to produce some sort of sickeningly powerful rendering setup.
A processor that's good at preprocessing a scene for maximum performance on the GPU hardware, with built-in support for multiple display adapters, plus an on-board chip that handles outputting the resulting images via the digital link du jour.
This sort of setup would mean that rather than having to update your GPUs every two years (you could just buy another one to run in parallel) - the graphics
Re: (Score:2)
Tied to a card (Score:5, Insightful)
What I don't understand is why people hype a technology that is tied to a specific manufacturer of card. If Nvidia died tomorrow, we'd have a fair amount of code that's no longer relevant, unless there were some way to design cards that are CUDA-capable but not Nvidia's.
Also worth noting that I'd completely forgotten CUDA even ran on Windows, as I've only heard it mentioned in the context of Linux recently.
Re:Tied to a card (Score:5, Insightful)
OpenCL will hopefully help to set a solid ground for GPU and CPU parallel computing, and since it is not technically very different from CUDA, porting existing applications to OpenCL will not be a challenge. Nowadays with current massively parallel technology the hardest part is making the algorithms parallel, not programming any specific device.
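To give a feel for how close the two are, here's a trivial saxpy kernel in CUDA with its rough OpenCL C counterpart shown as a comment (just an illustration; the host-side setup code is where the two APIs differ more):

    // CUDA version:
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    // Rough OpenCL C counterpart -- the body is nearly identical; only the
    // qualifiers and the thread-index query change:
    //
    //   __kernel void saxpy(int n, float a,
    //                       __global const float *x, __global float *y)
    //   {
    //       int i = get_global_id(0);
    //       if (i < n)
    //           y[i] = a * x[i] + y[i];
    //   }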
Re: (Score:2)
This of course assumes that OpenCL is able to gain a foothold, gets support from the hardware vendors, and gets some software that really shows off the improvements other developers can get by using it.
Without those it won't have enough traction/mindshare.
Re: (Score:1)
Re: (Score:2, Informative)
Re: (Score:1)
OpenCL is not open source; OpenCL is a specification for a CUDA-equivalent language and API. Drivers are still necessary, and will likely be produced by the makers of the graphics hardware (ATI, Nvidia, Intel). Open source drivers and compilers are certainly possible, but I wouldn't expect them to be equivalent to the closed-source stuff for some time yet.
Re:Tied to a card (Score:4, Informative)
Re: (Score:2)
Re:Tied to a card (Score:4, Informative)
But OpenCL is a specification, not an implementation. The only three implementations I'm currently aware of are Apple's (coming with Snow Leopard), the one AMD demoed back in March, and Nvidia's beta implementation. So far none of those are open source. If you're aware of an open source implementation, please let me know; I'm actually very interested in it, but have yet to locate one.
OpenCL is an Open Standard Compute Language (Score:5, Informative)
OpenCL is an Open Standard compute language which comprises a C-based kernel language plus the platform and runtime APIs for dispatching work to CPUs and GPUs.
If you're writing an OpenCL-aware device driver for a GPU, you'll probably need to wait a bit for some open source examples. It's reasonably likely that there will be some included in Darwin [apple.com] (once updated for Snow Leopard).
Look to the LLVM [llvm.org] project (sponsored heavily by Apple and others) for an open source compiler which will (if it doesn't already) know about OpenCL.
It sounds like you might be looking for a higher-level API which allows you to more easily use OpenCL, or possibly for language bindings to Java or Python perhaps? I suspect you'll see those coming along once Apple ships Snow Leopard and people have a chance to kick the tires, and then integrate LLVM into their tool chains, extend various higher-level APIs, bridge to Java and whatnot.
The earliest high level API to take easy and broad advantage of OpenCL will probably be from Apple, of course. They'll likely provide some nicely automatic ways to take advantage of OpenCL without programming the OpenCL C API directly. As a Cocoa programmer, you'll be using various high level objects, maybe an indexer for example, which have been taught new OpenCL tricks. You'll just recompile your program and it will tap the GPU as appropriate and if available. The Cocoa implementation is closed source, but people will see what's possible and emulate it in various open source libraries, on other platforms, for Java and other languages.
Here's a good place to start: OpenCL - Parallel Computing on the GPU and CPU [ucdavis.edu]. Follow up with a google search.
Re:Tied to a card (Score:4, Informative)
I hear this a lot in CUDA/GPGPU-related threads on slashdot, primarily from people who simply have zero experience with GPU programming. The bottom line is that in the present and for the foreseeable future, if you are going to try to accelerate a program by offloading some of the computation to a GPU, you are going to be tying yourself to one vendor (or writing different versions for multiple vendors) anyways. You simply cannot get anything approaching worthwhile performance from a GPU kernel without having a good understanding of the hardware you are writing for. nVidia has a paper [nvidia.com] that illustrates this excellently, in which they start off with a seemingly good "generic" parallel reduction code and go through a series of 7 or 8 optimizations -- most of them based on knowledge of the hardware -- and improve its performance by more than a factor of 30 versus the generic implementation.
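For a flavor of what those optimizations look like, here is one intermediate stage of that kind of reduction (a sketch only, not the paper's final version, which goes on to unroll the last warp, process multiple elements per thread, and so on):

    #include <cuda_runtime.h>

    // Shared-memory tree reduction with sequential addressing: each block
    // sums its slice and writes one partial result. Sequential addressing
    // keeps the active threads contiguous (no divergent warps in the loop)
    // and avoids shared-memory bank conflicts.
    __global__ void reduce_sum(const float *in, float *out, int n)
    {
        extern __shared__ float sdata[];
        unsigned int tid = threadIdx.x;
        unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;

        sdata[tid] = (i < n) ? in[i] : 0.0f;
        __syncthreads();

        for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
            if (tid < s)
                sdata[tid] += sdata[tid + s];
            __syncthreads();
        }
        if (tid == 0)
            out[blockIdx.x] = sdata[0];   // reduce the per-block sums again, or finish on the CPU
    }

    // Launch example: reduce_sum<<<blocks, 256, 256 * sizeof(float)>>>(d_in, d_out, n);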
Another thing to keep in mind is that CUDA is very simple to learn as an API -- if you're familiar with C you can pick up CUDA in an afternoon easily. The difficulty, as I said in the previous paragraph, is optimization; and optimizations that work well for a particular GPU in CUDA will (or at least should) work well for the same GPU in OpenCL.
OpenCL - UnTied to a card (Score:2)
[ _Booming_ _Monster_ _Truck_ _Voice_]
Want to tap the hidden potential of your GPU? Then you want OpenCL!
Re: (Score:3, Informative)
And as someone who has worked in GLSL (which is a similar level of abstraction as OpenCL) I can say you'll still see major differences even between cards from the same vendor.
I remember several minor tweaks in our code that gave 20% performance boosts on one card and 20% loss on another, and that was without ever actually getting into the assembler. Video games already often have largely different rendering paths for different cards when it comes to specific shader effects.
Re: (Score:2, Interesting)
In general, it's not tied to a card. CUDA itself might be NVIDIA-dependent, but general-purpose GPU programming is not, and other manufacturers will have similar interfaces to GP-GPU programming, eventually.
As for my own experience with it... everyone at work is going crazy over them. One of our major simulations implements a high-fidelity IR scene modeler. It used to take 2 seconds per frame CPU-only. They rewrote it for the GPU and got it down to 12 ms.
Anything that is highly parallelizable with low
Re: (Score:1)
That's where abstraction and specialization come into play. After defining your algorithm for independent use, specialize and optimize it to exploit current or future hardware. This gives you a fallback for calculation, and extremely enhanced performance for the life and support of said hardware. And, as others have pointed out, it's a stepping stone to an OpenCL implementation, eventually giving you multiple vendors to rely on.
If NVIDIA goes out of business or drops support in two years, how much more wor
Re: (Score:3, Interesting)
How is this different than AMD-v, which Intel licenses for their virtualization (or maybe I'm confusing it with a64, which Intel licenses)?
Either way, if AMD "died tomorrow", the same thing would happen as would happen if Nvidia did: some other company, likely a previous competitor, would buy up the technology, and things would continue with barely a hiccup.
A product or technology does not need to be open source or 'standards based' to gain wild adoption. Sometimes, a technology speaks for itself. After all
For folders (Score:4, Informative)
SETI? (Score:4, Informative)
Waste your GPU cycles on something more interesting than SETI...
http://www.gpugrid.net/
http://distributed.net/download/prerelease.php (ok, maybe that's less interesting...)
And why limit this discussion to CUDA? ATI/AMD's STREAM is usable as well...
http://folding.stanford.edu/English/FAQ-ATI
Re: (Score:1)
Science is a parasite (Score:1)
The same way the DoD paid for the Cray supercomputers, gamers are paying for the GPUs. Science dropped by and said thanks.
Re: (Score:2)
Science prefers you use the term "symbiote".
Parasite has a negative connotation.
Re: (Score:2, Informative)
Re: (Score:1, Funny)
Back? You've never been in a Physics department, have you? Fortran was never gone.
h.264 encoding (Score:5, Informative)
h.264 encoding didn't improve with more shaders for some of the results (like PowerDirector 7), because of the law of diminishing returns.
I remember reading about x264 when quad-cores were becoming common. It mentioned that if quality is of the utmost importance, you should still encode on a single core. It splits squares of pixels between the cores; where those squares connect there can be very minor artifacts. It smooths these artifacts out with a small amount of extra data and post processing; the end result is a file hardly 1-2% bigger than if encoded on a single core, but encoded roughly 4x faster.
Now, if we're talking about 32 cores, or 64, or 128, would the size difference be bigger than 1-2%? Probably. After a certain point, it would almost certainly not be worth it.
This is supported by Badaboom's results, where the higher-resolution videos (with more encoded squares) seem to make use of more shaders when encoding, while most of the lower-resolution vids do not (indicating that some shaders may be lying idle).
What I'm curious about, is could the 9800GTX encode two videos at once, while the 9600GT could only manage one? ;)
I'm also curious why the 320x240 video encoded so quickly - but that could be from superior memory bandwidth, shader clock speed, or some other factor important in h.264 encoding.
Take it with a grain of salt; I'm not an encoder engineer; just regurgitating what I once read, hopefully accurately. ;)
Re: (Score:3, Informative)
Say you wanted one core to start encoding at 0% and the other at 50% of the way into the movie. The core starting at 50% has to start compression without any of the learned patterns in the 0-50% range. In the example you gave one core encodes half the screen and the other core encodes the other half. If they are running in parra
Re: (Score:1)
I know almost nothing about data compression beyond the readme for pkzip. Are there really enough learned patterns in a video stream that would make a >1% difference in filesize if compressed in independent chunks? As far as I can reason it out, independent chunks would act like you'd just inserted an extra keyframe at the splitpoints.
Re: (Score:3, Informative)
Data compression is an inherently serial operation. Parts of it can be done in parallel, but in general the way you compress the next bit is based on the patterns observed earlier.
Say you wanted one core to start encoding at 0% and the other at 50% of the way into the movie. The core starting at 50% has to start compression without any of the learned patterns in the 0-50% range. In the example you gave, one core encodes half the screen and the other core encodes the other half. If they are running in parallel, the second core can't use the learnt patterns of the first unless it wants to wait for the first core to finish its current frame (thereby making it non-parallel).
So you have a tradeoff. You can run everything serially, or you can accept that you'll miss a few observed patterns here and there and run more in parallel.
For usability (seeking through a video), no codec works off patterns learned across the whole file. The memory requirements would be astronomical (you'd have to store the entire file in RAM; good luck doing that with a Blu-ray).
IIRC, the furthest back any codec looks is something like 24 frames.
Re: (Score:2, Informative)
For video encoding there is a ton of work that can be done in parallel. You can compute all of the DCTs for all of the macroblocks in parallel. You can run your motion search for every block in parallel.
Re: (Score:3, Informative)
This is one of the most inane thought patterns I have witnessed this week.
The reason is simple: Fine, so you've split a process into chunks and distributed them across two or more cores. But it's not exactly like those cores are working in a vacuum; they all use the same RAM.
As another reply has stated, codecs don't work quite how you describe -- they don't use the entire media as a reference, but at most a couple of dozen frames. But even if such mythological technology were really in use: There's
Re:h.264 encoding (Score:5, Informative)
If I'm encoding a signal in realtime from TV, I have to start encoding at 0% onwards. The only way to parallelize it is to split the individual frames up into boxes (as Badaboom does).
Well, it works awesome if your problem is parallel (Score:5, Interesting)
The Tesla C1060 is a video card with no video output (strictly for processing) that has something like 240 processor cores and 4 GB of GDDR3 RAM. Just doing math on large arrays (1k x 1k) I get a performance boost of about a factor of forty over a dual-core 3.0 GHz Xeon.
The CUDA extension set has FFT functionality built in as well, so it's excellent for signal processing. The SDK and programming paradigm are super easy to learn. I only know C (and not C++) and I can't even make a proper GUI, but I can make my array functions run massively in parallel.
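For example, the FFT library that ships with the toolkit boils down to a few calls (a minimal in-place 1-D transform; d_signal is assumed to already hold N complex samples in device memory):

    #include <cufft.h>

    void fft_in_place(cufftComplex *d_signal, int N)
    {
        cufftHandle plan;
        cufftPlan1d(&plan, N, CUFFT_C2C, 1);                    // one transform of length N
        cufftExecC2C(plan, d_signal, d_signal, CUFFT_FORWARD);  // forward FFT, in place
        cufftDestroy(plan);
    }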
The trick is to minimize memory movement between the CPU and the GPU, because that kills performance. Only the newest cards support "simultaneous copy and execute," where one thread can be copying new data onto the card, another can be processing, and a third can be moving results off the card.
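The copy/execute overlap looks roughly like this (a sketch only: the kernel is hypothetical, the host buffer must be page-locked, and the copy of results back off the card is left out for brevity):

    #include <cuda_runtime.h>

    __global__ void process_chunk(float *d, int n);   // hypothetical kernel, defined elsewhere

    // Double-buffered pipeline: while one stream computes on chunk k, the
    // other stream is copying chunk k+1 onto the card.
    void pipeline(const float *h_in, float *d_buf[2], int chunks, int chunk_elems)
    {
        cudaStream_t s[2];
        cudaStreamCreate(&s[0]);
        cudaStreamCreate(&s[1]);

        for (int k = 0; k < chunks; ++k) {
            int b = k & 1;
            cudaMemcpyAsync(d_buf[b], h_in + (size_t)k * chunk_elems,
                            chunk_elems * sizeof(float),
                            cudaMemcpyHostToDevice, s[b]);
            process_chunk<<<chunk_elems / 256, 256, 0, s[b]>>>(d_buf[b], chunk_elems);
        }
        cudaStreamSynchronize(s[0]);
        cudaStreamSynchronize(s[1]);
        cudaStreamDestroy(s[0]);
        cudaStreamDestroy(s[1]);
    }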
One way that the video people can maybe speed up their processing (disclaimer: I don't know anything about this) is to do a quick sweep for keyframes, and then send the video streams between keyframes to individual processor cores. So instead of each core getting a piece of the frame, maybe each core gets a piece of the movie.
The days of the math coprocessor card have returned!
Re: (Score:2, Interesting)
~8 hr on a Core 2 Duo
~1.5 hr on Core i7
seconds on Tesla
Re: (Score:3, Informative)
Well I didn't say my code was *well* written. Apparently there's a lot of trickery with copying global memory to cached memory to speed up operations. Cached memory takes (IIRC) one clock cycle to read or write, and global GPU memory takes six hundred cycles. And there's all this whatnot and nonsense about aligning your threads with memory locations that I don't even bother with.
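The "cached memory" being described is CUDA's on-chip shared memory, and the thread/memory alignment bit is coalescing. The classic illustration of both is a tiled matrix transpose (a sketch, assuming 16x16 thread blocks):

    #include <cuda_runtime.h>

    #define TILE 16

    // Each block stages a 16x16 tile in fast shared memory so that both the
    // global-memory reads and the global-memory writes are coalesced
    // (adjacent threads touch adjacent addresses). A naive transpose gets
    // only one of the two coalesced and pays the ~400-600 cycle global
    // latency far more often.
    __global__ void transpose(const float *in, float *out, int width, int height)
    {
        __shared__ float tile[TILE][TILE + 1];      // +1 avoids shared-memory bank conflicts

        int x = blockIdx.x * TILE + threadIdx.x;
        int y = blockIdx.y * TILE + threadIdx.y;
        if (x < width && y < height)
            tile[threadIdx.y][threadIdx.x] = in[y * width + x];
        __syncthreads();

        x = blockIdx.y * TILE + threadIdx.x;        // swapped block coordinates
        y = blockIdx.x * TILE + threadIdx.y;
        if (x < height && y < width)
            out[y * height + x] = tile[threadIdx.x][threadIdx.y];
    }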
Re: (Score:3, Informative)
Re: (Score:2, Interesting)
I assume that's what the parent meant.
As an addendum, the newest CUDA 2.2 (with chips of the newest generation, i.e. GT200) actually has support for reading directly from (page-locked) host memory inside GPU kernels... something I believe ATI cards have allowed for a while.
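That zero-copy path looks something like this (a sketch; the kernel is hypothetical and the device has to report canMapHostMemory):

    #include <cuda_runtime.h>

    __global__ void touch(float *p, int n);           // hypothetical kernel reading the buffer

    // The kernel dereferences a pointer that actually lives in page-locked
    // host RAM; reads travel over PCIe on demand instead of via an explicit
    // cudaMemcpy beforehand.
    void use_mapped_host_memory(int n)
    {
        cudaSetDeviceFlags(cudaDeviceMapHost);        // must happen before the context is created

        float *h_buf, *d_alias;
        cudaHostAlloc((void **)&h_buf, n * sizeof(float), cudaHostAllocMapped);
        cudaHostGetDevicePointer((void **)&d_alias, h_buf, 0);

        touch<<<(n + 255) / 256, 256>>>(d_alias, n);
        cudaThreadSynchronize();
        cudaFreeHost(h_buf);
    }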
Re: (Score:2)
Is that for single or double precision work? Which Xeon exactly? Which compiler? How was the code written for the compiler? Which compiler flags?
Although I don't dispute your claims, writing to get max performance out of the newer Xeons is *hard* and you need to be very careful. The 128-bit-wide SSE registers on the 54xx can be extremely handy for code written the right way.
I currently have a client that needs to run a lot of this [thegpm.org] and so far, I have the single cpu version running 10x faster than the parallel
OpenCL? (Score:2)
I thought Nvidia was indicating they were going to move to supporting OpenCL, or are they simply planning to support multiple technologies?
Re: (Score:3)
Both, I'd guess. If someone releases some killer software for OpenCL they'd be mad not to - Apple are pushing it for OS X.
On the other hand, if they do a deal with someone to write CUDA stuff, it's lock-in that you must buy an nvidia card.
Either way they win...
Re: (Score:2)
They also have control over adding features to CUDA relatively rapidly as hardware gains new capabilities, which they can't easily do with OpenCL.
Re: (Score:1)
CUDA and OpenCL are not exclusive, they're at different layers in the driver stack. If you look at the NVIDIA slides, you'll see that C, OpenGL, DX11 Compute, and Fortran are all just frontend languages that compile to/run on top of CUDA.
Re: (Score:2)
I remember reading the OpenCL announcement (I like to pretend that I know what I'm talking about in programming matters) and Nvidia did indeed say that they would be supporting it.
What About Multiple GPU Cards in 1 Host? (Score:3, Insightful)
Those benchmarks show that even older ($120-140) nVidia GPU cards can really speed up some processing tasks, especially transcoding video. But what I think is even more exciting than just the acceleration from offloading CPU to GPU is using multiple GPU cards in a single host PC. Stuff a $1000 PC with $1120 in GPUs (like 8 $140 nVidia cards), and that's 1024 parallel cores, anywhere from 16x to 56x the performance at only just over double the price. PCI-e should move data in parallel fast enough to keep the cards fed. I bet that 8 $1000 cards stuffed into a $1000 PC would be something like 200x to 4000x for only 9x the price.
So what I want to see is benchmarks for whole render farms. I want to see HD video transcoded into H.264 and other formats simultaneously on the fly, in realtime, with true fast-forward, in multiple independent streams from the same master source. This stuff is possible now on a reasonable budget.
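Spreading one job across every card in the box is straightforward in outline (a sketch with a hypothetical kernel; it's written serially here for brevity, whereas real code of that era would use one host thread per device so the cards actually run concurrently):

    #include <cuda_runtime.h>

    __global__ void crunch(float *d, int n);          // hypothetical kernel, defined elsewhere

    void run_on_all_gpus(float *h_data, int n)
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        int per_gpu = n / count;                      // assume n divides evenly, for brevity

        for (int dev = 0; dev < count; ++dev) {
            cudaSetDevice(dev);
            float *d_chunk;
            cudaMalloc((void **)&d_chunk, per_gpu * sizeof(float));
            cudaMemcpy(d_chunk, h_data + dev * per_gpu,
                       per_gpu * sizeof(float), cudaMemcpyHostToDevice);
            crunch<<<(per_gpu + 255) / 256, 256>>>(d_chunk, per_gpu);
            cudaMemcpy(h_data + dev * per_gpu, d_chunk,
                       per_gpu * sizeof(float), cudaMemcpyDeviceToHost);
            cudaFree(d_chunk);
        }
    }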
Re: (Score:2)
Cool. Sign me up.
Just one problem: Where can I find a $1000 PC with 8 available PCI Express x16 slots? The best machine I have at the moment only has three, and 8 won't even fit into a normal ATX case.
Re:What About Multiple GPU Cards in 1 Host? (Score:4, Interesting)
Those benchmarks show that even older ($120-140) nVidia GPU cards can really speed up some processing tasks, especially transcoding video. But what I think is even more exciting than just the acceleration from offloading CPU to GPU is using multiple GPU cards in a single host PC. Stuff a $1000 PC with $1120 in GPUs (like 8 $140 nVidia cards), and that's 1024 parallel cores, anywhere from 16x to 56x the performance at only just over double the price.
Your passwords are no longer safe.
It used to require days for a cluster of PCs to brute force an 8+ character password.
Now with a big enough PSU, you can stuff a tower with graphics cards to get it done in hours.
About the only common hash I can't find a CUDA enabled brute forcer for is NTLM2
Re: (Score:2)
My password is probably safe. It might take hours to crack a single password, but what are the odds that it will be my password, of all the billions of them in use now, of all the dozens of passwords I use, each different?
Re: (Score:2, Interesting)
Re:Tom's Hardware (Score:4, Insightful)
To be honest, it's all about advertising.
C'mon, 15 pages? You wonder why few of us ever RTFA...
Make Slashdot-linked articles direct to a single-page version, with maybe a handful of ads, and we may stick around and look at the rest of your site. Otherwise, it's potentially 1 million readers who may not bother clicking the URL, or who just skip to the conclusion and miss the point of the article - perhaps hurting sales of the advertised Nvidia cards, the very technology at the crux of the article.
Re: (Score:1, Informative)
Re: (Score:1)
Re: (Score:1)
Re: (Score:1)
Re: (Score:1)
Re: (Score:1)
If I won one of the PC's, I wouldn't use it. It would be placed on a glass shelf in my room and if someone goes near it, I release the hounds :)
Re:Tom's Hardware (Score:4, Insightful)
Definitely YES, if it's an article worth viewing. I mightn't think I'm interested in a topic, only to find I am. :) Clicking a link after every screenful only disrupts one's concentration while the next page loads, when most of us would just use a scroll wheel. And as far as revenue goes, you can fill an entire sidebar with ads, if lost advertising is a concern...
And to whoever moderated his post a troll, get a life. He's trying to improve the experience for us readers and we should encourage dialog...
Re: (Score:3, Insightful)
I'll pass this feedback along to the design guys, but do you *really* want to scroll through 4,000 words and 50-some charts, rather than looking at just the pages you're interested in reading?
Yes, I do. I can scroll just fine, thank you, and I can also use the browser's built-in word search to find specific words anywhere in the current page, but I can't do that and stay sane at the same time if I have to click 15 times and search 15 times for each word I might want to find.
Surely the length would be a bigger problem if there wasn't an index, right?
Put the index in a sidebar or at the top of the single page. HTML has had document internal anchor points since pretty much day 1.
Re: (Score:2, Interesting)
Re: (Score:3, Insightful)
What's with only allowing registered users access to the print version? I pretty much gave up on being able to read the article after seeing that.
Re: (Score:2)
I'll pass this feedback along to the design guys, but do you *really* want to scroll through 4,000 words and 50-some charts, rather than looking at just the pages you're interested in reading?
Absolutely. I'd much rather load a page once and then read it in one go than get stuck in a cycle of load, read, load, read, load, read - all those loads interrupt workflow and make it much more likely for me to go do something else.
Re: (Score:2)
Are you kidding? Do you *really* want to wait 3-5 seconds for each page to load, after spending 1-3 seconds interacting with the tiny little UI element in only a few spots on the page, or do you just want to hit Page Down again? Seriously, go to a nearby internet cafe and ask to watch someone pull up your site over the net without ad filters. Worse, read your article a page at a time, using the index to jump around. It's not the load speed that kills, even over cellular (help me!) but the latency between e
Re: (Score:2)
Re: (Score:2)
Yeah, I know. While I could easily open each page in a new tab I'd rather go to a site that caters to me and load an article in each tab.
I'm asking if he seriously thinks reading it like that is any good though. My guess is he reads their articles from the office mainly, and occasionally at that, and has no idea of the typical user experience.
Re: (Score:2)
I do. I totally prefer that and have even read full-length books on both my computer and my cell phone this way.
However, there's a school of 'design' for lazy readers that treat everything I like as 'the ugly wall of text'.
Anywhere I find a long text the wall-of-text comment appears, no matter how well the paragraphs are formatted.
So in this case it seems you really can't please everybody.
Re: (Score:1)
There were ads?
Re: (Score:2)
Re: (Score:3, Informative)
If you absolutely need this type of wandering off to have more pages and more clicks to survive on the web, then I'm concerned your site may not last very long. I personally love the site, but these 15-page wanderings off the subject drive me fucking nuts.
Re: (Score:2)
I'd welcome the opportunity to prove otherwise. I've been managing editor for the last year, and much has changed. Best, Chris
Tom's Hardware has been the best consistent site that I've gone to for the past four video cards I've bought (spanning many years). I'm happy with their benchmarks, more or less. I can deal with the 15 pages per article, but I am not impressed with that aspect.
Re:Tom's Hardware (Score:4, Insightful)
Here's why you've proven yourself to be a money-hatted site.
Advertising bandwidth versus actual article content bandwidth. Your advertising uses up about 2500% more bandwidth than the actual article content.
You care more about advertising than you do about content. That's why you split everything up into so many pages, when it could have been done in less than two, single-spaced, in 20-point font.
Re: (Score:2)
Re: (Score:2)
Actually, you've just revealed yourself to be that fanboy. I never mentioned "cards", and can only assume you are referring to graphics cards.
Their entire site is filled to the brim with ads.
20 "page" reviews filled with copy-and-pasted marketing bullshit from the press kits.
They test cherry-picked hardware samples no mere mortal will ever get to touch.
You assume I give a fucking shit about AMD/Intel or nVidia/ATi or whatever other drama there is. I give a shit about honest reviews with real products (not
Re:Tom's Hardware (Score:5, Funny)
Totally not a biased, money-hatted site. Totally. Trust us.
Hi! You must be new to the internet as well as to Slashdot; let me give you some tips.
1. Always use the word "lunix" in place of "linux" in slashdot's discussion forums.
2. You can steal mod points by copying someone else's insightful comment and pasting it as a reply to an earlier one.
3. Mac users are a bunch of fucking queers.
4. When there's something you need to do that can't be done with Windows but can be done with Lunix, keep in mind that you can do an even better job with Mac OS X. Some argue that BSD can do it better but no one makes software for BSD since no one gives a flying fuck.
5. Adequacy.org was one of the best sites on the internet. Want to know if your son's a computer hacker? Click here! http://www.adequacy.org/stories/2001.12.2.42056.2147.html [adequacy.org]
Good luck, friend!
Ya... (Score:2)
That explains so much about me. Classic. Great link. ;-)
Re: (Score:2)
No wonder I'm always getting modded as Redundant -1.