NVIDIA Shaking Up the Parallel Programming World 154
An anonymous reader writes "NVIDIA's CUDA system, originally developed for their graphics cores, is finding migratory uses into other massively parallel computing applications. As a result, it might not be a CPU designer that ultimately winds up solving the massively parallel programming challenges, but rather a video card vendor. From the article: 'The concept of writing individual programs which run on multiple cores is called multi-threading. That basically means that more than one part of the program is running at the same time, but on different cores. While this might seem like a trivial thing, there are all kinds of issues which arise. Suppose you are writing a gaming engine and there must be coordination between the location of the characters in the 3D world, coupled to their movements, coupled to the audio. All of that has to be synchronized. What if the developer gives the character movement tasks its own thread, but it can only be rendered at 400 fps. And the developer gives the 3D world drawer its own thread, but it can only be rendered at 60 fps. There's a lot of waiting by the audio and character threads until everything catches up. That's called synchronization.'"
Where's the story? (Score:4, Informative)
Re:Dumbing down (Score:2, Informative)
CUDA ("Compute Unified Device Architecture") is a GPGPU technology that allows a programmer to use the C programming language to code algorithms for execution on the graphics processing unit (GPU).
Uh, what a crap (Score:4, Informative)
But not if posted by The Ignorant.
What if the developer gives the character movement tasks its own thread, but it can only be rendered at 400 fps. And the developer gives the 3D world drawer its own thread, but it can only be rendered at 60 fps. There's a lot of waiting by the audio and character threads until everything catches up. That's called synchronization.
If a student of mine wrote this, an immediate fail would be the consequence. How can 400 fps be 'only' 400 fps? And why is threading bad if the character movement is ready after 1/400 of a second? There is not 'a lot of waiting'; instead, there are a lot of cycles free to calculate something else. And 'waiting' is not 'synchronisation'.
[The audio rate of 7000 fps gave the author away, and I stopped reading. Audio does not come in fps.]
While we all agree that synchronisation is a hard problem in parallel programming, perhaps especially in the gaming world, we should not let uninformed blurb like this stand on Slashdot.
CUDA is limiting, not liberating (Score:5, Informative)
Re:I don't understand the point of this article. (Score:3, Informative)
An inch is a long way on a CPU. A Core 2 die is around 11mm along an edge, so at 12GHz a signal travelling at the speed of light could just make it from one edge to the other and back within a single clock cycle. The chip uses a 14-stage pipeline, so in each clock cycle a signal only needs to travel around 1/14th of the way across the die, roughly 1mm. If every signal has to move 1mm per cycle at the speed of light, then your maximum clock speed is 300GHz.
Of course, as you say, electrical signals travel a fair bit slower in silicon than photons do in a vacuum, and signals often have to take quite indirect routes, since wires cannot cross within a single metal layer of the chip, so the practical limit is a good deal lower.
Re:Where's the story? (Score:3, Informative)
You usually have a game-physics engine running, which integrates the characters' equations of motion (character movement) or, more generally, updates the world model (the position and state of all objects). Even without input, the world moves on. A fixed time step is usually chosen because it is simpler than a varying one.
-What's so special about the audio thread? Shouldn't it just handle events from other threads without communicating back?
Audio is the part most sensitive to timing issues: contrary to video (or the simulation), you cannot drop arbitrary pieces of sound without the user immediately noticing.
-How do semaphores affect SMP cache efficiency? Is the CPU notified to keep the data in shared cache?
Not especially; they are simply a special case of the general problem of how to access shared data.
Several threads may compete for the same data, and if they access data that lies in the same cache line, the result is a lot of coherency traffic between the cores (thrashing the cache).
In CUDA, a thread manager is aware of the memory layout and decides which parts of memory are processed by which shaders/ALUs/cores. This also makes more efficient use of the caches possible.
-What is a "3D world drawer"? Is it where god keeps us in his living room?
Drawer as in "someone who draws", i.e. a 3D world painter. It draws/paints the state of the world as updated by the simulation thread.
This can happen asynchronously, as you will not notice if a frame is dropped occasionally.
The EETimes article is much better (Score:4, Informative)