NVIDIA Shaking Up the Parallel Programming World 154
An anonymous reader writes "NVIDIA's CUDA system, originally developed for their graphics cores, is finding migratory uses into other massively parallel computing applications. As a result, it might not be a CPU designer that ultimately winds up solving the massively parallel programming challenges, but rather a video card vendor. From the article: 'The concept of writing individual programs which run on multiple cores is called multi-threading. That basically means that more than one part of the program is running at the same time, but on different cores. While this might seem like a trivial thing, there are all kinds of issues which arise. Suppose you are writing a gaming engine and there must be coordination between the location of the characters in the 3D world, coupled to their movements, coupled to the audio. All of that has to be synchronized. What if the developer gives the character movement tasks its own thread, but it can only be rendered at 400 fps. And the developer gives the 3D world drawer its own thread, but it can only be rendered at 60 fps. There's a lot of waiting by the audio and character threads until everything catches up. That's called synchronization.'"
need some brains (Score:2, Funny)
Re: (Score:2, Funny)
Re: (Score:1)
Re: (Score:1, Offtopic)
I got better.
Dumbing down (Score:5, Funny)
Wow, I bet nobody on slashdot knew that!
Re: (Score:1)
If you rtfa you'll notice that it's about "Nvidia's CUDA system, originally developed for their graphics cores, is finding migratory uses into other massively parallel computing applications."
Re: (Score:2, Informative)
CUDA ("Compute Unified Device Architecture"), is a GPGPU technology that allows a programmer to use the C programming language to code algorithms for execution on the graphics processing unit (GPU).
Re: (Score:2)
HOW does CUDA make it easier? I'm very confident it's not because Nvidia hardware contains lots of stream processors.
Oh well, guess I need to RTFA, an
Re: (Score:2)
So to put it another way, the big thr
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
If we partition the bank into two pools, with 64 processors accessing each pool, then we have just cut the arbitration co
Re: (Score:3, Funny)
Re: (Score:2)
Re: (Score:2)
Why would anyone but a fairly advanced programmer be interested in the new fads in parallel programming? Besides, the summary is misleading, giving the impression that multithreading is exclusive to multicore processors, which is false; it can give huge benefits in a s
Re: (Score:2)
IAACS, multi-threading and parallel processing are two different but related concepts. The hard part is coming up with a parallel algorithm for certain classes of problems; implementing low-level synchronization is trivial by comparison. OTOH I've seen a lot of programmers stab themselves in the eye with forks.
Re: (Score:2)
Where's the story? (Score:4, Informative)
Re: (Score:3, Insightful)
Re: (Score:1)
Re: (Score:1, Offtopic)
Re: (Score:1, Offtopic)
Re: (Score:1)
Re: (Score:3, Funny)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
-Why would character movement need to run at a certain rate? It sounds like the thread should spend most of its time blocked waiting for user input.
-What's so special about the audio thread? Shouldn't it just handle events from other threads without communicating back? It can block when it doesn't
Re: (Score:3, Informative)
You usually have a game-physics engine running, which integrates the movements of the characters and, more generally, updates the world model (the position and state of all objects). Even without input, the world moves on. A fixed rate is usually chosen because it is simpler than a varying time step.
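[A rough sketch of that fixed-time-step loop; updateWorld and render are placeholder names for illustration, not any real engine's API.]

    #include <chrono>

    // Stand-ins for a real engine's simulation and drawing steps.
    static void updateWorld(double dt) { /* integrate character movement, physics */ }
    static void render()               { /* draw the current world state */ }

    int main()
    {
        const double DT = 1.0 / 400.0;               // fixed simulation step (2.5 ms)
        double accumulator = 0.0;
        auto previous = std::chrono::steady_clock::now();

        for (int frame = 0; frame < 600; frame++) {  // stand-in for "while the game runs"
            auto current = std::chrono::steady_clock::now();
            accumulator += std::chrono::duration<double>(current - previous).count();
            previous = current;

            while (accumulator >= DT) {              // catch the simulation up in fixed steps
                updateWorld(DT);
                accumulator -= DT;
            }
            render();                                // runs at whatever rate the frame budget allows
        }
        return 0;
    }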
-What's so special about the audio
Re: (Score:2)
Also, the article would've done better just talking about the thread manager you mention. That makes more sense than the stuff about semaphores affecting performance positively (unless I misunderstood the sentence about the cache no longer being stale).
And, uh, that drawer comment was a joke...
Re: (Score:2)
Not especially; they are simply a special case of the problem: how to access data
Several threads may compete for the same data, but if they are accessing the same data in one cache-line, it will lead to lots of communication (thrashing the cache).
I think you have this wrong. Sharing data in one cache line between processors is not always bad. In fact in multicores this c
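[For reference, the textbook illustration of the cache-line point being argued here is false sharing between two per-thread counters. The sketch below pads them apart, assuming 64-byte cache lines, which is common but not universal; as the parent says, whether the shared or padded layout is faster depends on the access pattern.]

    // Two counters updated by two different threads. Packed together they will
    // usually land in the same cache line, so every write by one core invalidates
    // the other core's cached copy ("false sharing").
    struct Packed { long a; long b; };

    // Padding the first counter out to an assumed 64-byte line keeps the two
    // threads' writes in separate lines and avoids the ping-ponging.
    struct Padded {
        long a;
        char pad[64 - sizeof(long)];
        long b;
    };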
Re: (Score:2)
It does nothing to solve the synchronization issues that are the plague of multi-threaded programming, and it makes it all worse by having a very non-uniform memory access model (that hasn't even been abstracted).
The problem with multi-threaded models is that they are fundamentally harder than a single-threaded model. CUDA does nothing to address this, and it makes it even harder by forcing the programmer to worry about what kind of memory they are using and forcing them to move data in an
Thats.. (Score:5, Funny)
Aritificial Intelligence. (Score:1)
In essence, the faster your CPU (fixed on consoles), the more time you can devote to making your game objects smarter after you're done with the audio and visuals.
Re: (Score:2)
But we already know it's hard to split up all kinds of work evenly.
Anyway, what does CUDA do to help with that?
CUDA helps by... (Score:2)
Um, no, that can't be right...
Re: (Score:1)
CUDA = NVIDIA desperate to compete with Intel? (Score:5, Insightful)
First of all, there are very few general purpose applications that special purpose NVIDIA hardware running CUDA can do significantly better than a real general purpose CPU, and Intel intends to cut even that small gap down within a few product cycles. Second, nobody wants to tie themselves to CUDA when it's built entirely for proprietary hardware. Third, CUDA still has a *lot* of limitations. It's not as easy to develop a physics engine for a GPU using CUDA as it is for a general purpose CPU.
Now, I haven't used CUDA lately, so I could be way off base here. However, multi-threading isn't the real challenge to efficient use of resources in a parallel computing environment. It's designing your algorithms to be able to run in parallel in the first place. Most multi-threaded software out there still has threads that have to run on a single CPU, and the entire package bottlenecks on the single CPU running that thread even if other threads are free to run on other processors. This sort of bottleneck can only be avoided at the algorithm level. This isn't something CUDA is going to fix.
Now, I can certainly see why NVIDIA is playing up CUDA for all they're worth. Video game graphics rendering could be on the cusp of a technological singularity. Namely, ray tracing. Ray tracing is becoming feasible to do in real time. It's a stretch at present, but time will change that. Ray tracing is a significant step forward in terms of visual quality, but it also makes coding a lot of other things relatively easy. Valve's recent "Portal" required some rather convoluted hacks to render the portals with acceptable performance, but in a ray tracing engine those same portals only take a couple lines of code to implement and have no impact on performance. Another advantage of ray tracing is that it's dead simple to parallelize. While current approaches to video game graphics are going to get more and more difficult to work with as parallel processing rises, ray tracing will remain simple.
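[To make the "dead simple to parallelize" point concrete, here is a sketch of how a ray tracer maps onto CUDA: one thread per pixel, each tracing its ray independently. Ray, Camera, Scene and the helper functions are stand-ins for illustration, not any real renderer's API.]

    // Placeholder scene/camera types and helpers, stubbed out.
    struct Ray    { float3 origin, dir; };
    struct Camera { float3 pos; /* orientation, field of view, ... */ };
    struct Scene  { /* geometry, materials, lights, ... */ };

    __device__ Ray    makeRay(const Camera &cam, int x, int y) { return Ray(); }
    __device__ float3 traceRay(const Ray &r, const Scene *s)   { return make_float3(0, 0, 0); }

    // One GPU thread per pixel: every ray is independent, which is what makes
    // ray tracing embarrassingly parallel.
    __global__ void renderKernel(float3 *framebuffer, int width, int height,
                                 Camera cam, const Scene *scene)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        Ray ray = makeRay(cam, x, y);                    // this pixel's primary ray
        framebuffer[y * width + x] = traceRay(ray, scene);
    }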
The real question is whether NVIDIA is poised to do ray-tracing better than Intel in the next few product cycles. Intel is hip to all of the above, and they can smell blood in the water. If they can beef up the floating point performance of their processors then dedicated graphics cards may soon become completely unnecessary. NVIDIA is under the axe and they know it, which might explain all the recent anti-Intel smack-talk. Still, it remains to be seen who can actually walk the walk.
Re: (Score:2)
First of all, there are very few general purpose applications that special purpose NVIDIA hardware running CUDA can do significantly better than a real general purpose CPU, and Intel intends to cut even that small gap down within a few product cycles.
That's not strictly true. Off the top of my head: Sorting, FFTs (or any other dense Linear Algebra) and Crypto (both public key and symmetric) cover quite a lot of range. The only real issue for these applications is the large batch sizes necessary to overcome the latency. Some of this is inherent in warming up that many pipes, but most of it is shit drivers and slow buses.
The real question is what benefits will CUDA offer when the vector array moves closer to the processor? Most of the papers with the abo
Re: (Score:2)
The advantage would be (assuming this is the wonderful solution it claims to be) that you run your task in the CUDA environment; if your client only has a pile of 1U racks, he can at least run it, and if he replaces a few of them with some Tesla [nvidia.com] racks, things will speed up a lot.
I did some programming at college; I do not claim to know anything about the workings of Tesla or CUDA, but it sure sounds rosy if
Re: (Score:2)
"I don't see CUDA becoming big in gaming circles anytime soon."
"Third, CUDA still has a *lot* of limitations. It's not as easy to develop a physics engine for a GPU using CUDA as it is for a general purpose CPU."
Guess we'll see.
http://en.wikipedia.org/wiki/PhysX [wikipedia.org]
Re: (Score:2)
Re: (Score:2)
IOW, ray tracing is not a silver bullet. It handles dynamic scenes poorly and needs an exponential increase in the number of rays as quality is improved. Pixar
Re: (Score:2)
You call Pixar scenes with tens of thousands of objects simple? Pixar has shown off some of the most complex offline rendering I've seen, and you consider that too simpl
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Oversimplifying is bad (Score:1, Insightful)
Re: (Score:2, Insightful)
just hype and commercialism (Score:1)
Er. (Score:1)
In a real application, the audio/video must be calculated for many objects, and it is a fixed 30 or 60 fps for video and a fixed number of samples per second for audio, perhaps CD-quality 44,100 samples per second but likely less.
This synchronization is not unsolved. Every slice of game time is divided between however many $SampleRate frames of audio are needed, divided among the game objects producing audio, and how
I don't understand the point of this article. (Score:1)
Re: (Score:1)
At least that is the idea I had while reading it; I wasn't thinking about running other CPU-intensive PC apps at the same time as a game.
Re: (Score:1)
But you can't have 12GHz: at that speed light goes about ONE INCH per clock cycle in a vacuum, anything else is slower, and signals in silicon are a lot slower.
So much slower that a modern single-core processor has a lot of "execution units" to keep up with the instructions arriving at the 3GHz rate: these instructions are handed off to the units in parallel, and the results drop out of the units "a few" clock cycles later. This is good, except when the result of unit A is needed before unit B can start.
Re: (Score:3, Informative)
But you can't have 12GHz: at that speed light goes about ONE INCH per clock cycle in a vacuum, anything else is slower, and signals in silicon are a lot slower.
An inch is a long way on a CPU. A Core 2 die is around 11mm along the edge, so at 12GHz a signal could go all of the way from one edge to the other and back. It uses a 14-stage pipeline, so every clock cycle a signal needs to travel around 1/14th of the way across the die, giving around 1mm. If every signal needs to move 1mm per cycle and travels at the speed of light, then your maximum clock speed is 300GHz.
Of course, as you say, electric signals travel a fair bit slower in silicon than photons do
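[For anyone who wants to check the parent's numbers, the arithmetic spelled out, using the vacuum speed of light only; real on-chip signals are slower.]

    #include <cstdio>

    int main()
    {
        const double c = 3.0e8;                                               // m/s
        printf("distance per cycle at 12 GHz: %.1f mm\n", c / 12e9 * 1000);   // ~25 mm, about an inch
        printf("max clock for 1 mm per cycle: %.0f GHz\n", c / 1e-3 / 1e9);   // ~300 GHz
        return 0;
    }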
Re: (Score:2)
NVidia is doing that? an insult to INMOS... (Score:5, Interesting)
Many moons ago, when most slashdotters were nippers, a British company named INMOS provided an extensible hardware and software platform [wikipedia.org] that solved the problem of parallelism, in many ways similar to CUDA.
Ironically, some of the first demos I saw using transputers were raytracing demos [classiccmp.org].
The problem of parallelism and the solutions available are quite old (more than 20 years), but it's only now that limits are being reached that we see the true need for them. But the true pioneer is not NVIDIA; there were others long before them.
Re: (Score:3, Interesting)
Happy Days at UKC.
couldn't resist a quick Inmos story... (Score:5, Interesting)
New programming tools needed (Score:4, Insightful)
But why should I?
What is needed are new, high-level programming languages that figure out how to take a set of instructions and best interface with the available processing hardware on their own. This is where the computer smarts need to be focused today, IMO.
All computer programming languages, and even just plain applications, are abstractions from the computer hardware. What is needed are more robust abstractions to make programming for multiple processors (or cores) easier and more intuitive.
Re: (Score:1, Insightful)
Re: (Score:1)
Re: (Score:3, Interesting)
There are a couple of approaches that work well. If you use a functional language, then you can use monads to indicate side effects and the compiler can implicitly parallelise the parts that are free from side effects. If you use a language like Erlang or Pict based on a CSP or a
More investment needed in e.g Erlang (Score:4, Interesting)
The original Inmos Transputer was designed to solve such problems and relied on fast inter-processor links, and the AMD Hypertransport bus is a modern derivative.
So I disagree with you. The processing hardware is not so much the problem. If GPUs are small, cheap and address lots of memory, so long as they have the necessary instruction sets they will do the job. The issue to focus on is still interprocessor (and hence interprocess) links. This is how hardware affects parallelism.
I have on and off worked with multiprocessor systems since the early 80s, and always it has been fastest and most effective to rely on data channels rather than horrible kludges like shared memory with mutex locks. The code can be made clean and can be tested in a wide range of environments. I am probably too near retirement now to work seriously with Erlang, but it looks like a sound platform.
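[As a toy illustration of the channels-over-shared-memory style the parent describes: not Erlang or occam, just a plain pipe between two processes, with no shared mutable state and no mutex. The printf stands in for real work.]

    #include <cstdio>
    #include <unistd.h>
    #include <sys/wait.h>

    int main()
    {
        int fd[2];
        if (pipe(fd) != 0) return 1;                 // fd[0] = read end, fd[1] = write end

        if (fork() == 0) {                           // child process: the consumer
            close(fd[1]);
            int value;
            while (read(fd[0], &value, sizeof value) == (ssize_t)sizeof value)
                printf("consumed %d\n", value);      // stand-in for real work
            return 0;
        }

        close(fd[0]);                                // parent process: the producer
        for (int i = 0; i < 10; i++)
            write(fd[1], &i, sizeof i);              // send each result down the channel
        close(fd[1]);                                // EOF tells the consumer to stop
        wait(nullptr);
        return 0;
    }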
Re: (Score:2, Interesting)
You might be interested in some w
Yes, I read your paper (Score:3, Interesting)
Re:New programming tools needed (Score:5, Interesting)
Consider this parallel programming pseudo-example:
find | tar | compress | remote-execute 'remote-copy | uncompress | untar'
This is a 7 process FULLY parallel pipeline (meaning non-blocking at any stage - every 512 bytes of data passed from one stage to the next gets processed immediately). This can work with 2 physical machines that have 4 processing units each, for a total of 8 parallel threads of execution.
Granted, it's hard to construct a UNIX pipe that doesn't block. The following variation blocks on the xargs, and has less overhead than separate tar/compress stages, but is single-threaded:
find name-pattern | xargs grep -l contents-pattern | tar-gzip | remote-execute 'remote-copy | untar-unzip'
Here the messages being passed are serialized/linearized data. But that's the power of UNIX.
In CORBA/COM/GNORBA/Java-RMI/c-RPC/SOAP/HTTP-REST/ODBC, your messages are 'remoteable' function calls, which serialize complex parameters; much more advanced than a single serial pipe/file-handle. They also allow synchronous returns. These methodologies inherently have 'waiting' worker threads, so it goes without saying that you're programming in an MT environment.
This class of Remote Procedure Calls is mostly for centralization of code or central synchronization. You can't block on a CPU mutex that's on another physically separate machine, but if you RPC to a central machine with a single-variable mutex then you can. DB locks are probably more common these days, but it's the exact same concept - remote calls to a central locking service.
Another benefit of this class of IPC (Inter-Process Communication) is that a stage or segment of the problem is handled on one machine, but a pool of workers exists on each machine. So while one machine is blocking, waiting for a peer to complete a unit of work, there are other workers completing their stage. At any given time on every given CPU there is a mixture of pending and processing threads. So while a single task isn't completed any faster, a collection of tasks takes full advantage of every CPU and physical machine in the pool.
The above RPC-type models involve explicit division of labor. Another class is true opaque messages: JMS, and even UNIX's 'ipcs' message queues. The idea is that you have the same workers as before, but instead of having specific unique RPC URIs (addresses), you have a common messaging pool with a suite of message types and message-queue names. You then have pools of workers that can live ANYWHERE which listen to their queues and handle an array of pre-defined message types (defined by the application designer). So now you can have dozens or hundreds of CPUs, threads, and machines all symmetrically passing asynchronous messages back and forth.
To my knowledge, this is the most scalable model. You can take most procedural problems and break them up into stages, then define a message type as the explicit name of each stage, then divide up the types amongst different queues (which allows partitioning/grouping of computational resources), then receive-message/process-message/forward-or-reply-message. So long as the amount of work far exceeds the overhead of message passing, you can very nicely scale with the amount of hardware you can throw at the problem.
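[A minimal in-process sketch of that worker-pool-plus-message-queue pattern. The Msg type, the queue, and the worker count are made up for illustration; a real deployment would put the queue on a broker such as JMS or a SysV message queue rather than in shared memory.]

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    struct Msg { int type; int payload; };           // "message type" plus data

    std::queue<Msg> q;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Each worker repeatedly pulls the next message off the shared queue and
    // handles it; any worker can handle any message type.
    void worker(int id)
    {
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return !q.empty() || done; });
            if (q.empty()) return;                   // shutting down, nothing left
            Msg msg = q.front(); q.pop();
            lk.unlock();
            std::printf("worker %d handled type %d payload %d\n", id, msg.type, msg.payload);
        }
    }

    int main()
    {
        std::vector<std::thread> pool;
        for (int i = 0; i < 4; i++) pool.emplace_back(worker, i);

        for (int i = 0; i < 20; i++) {               // producer: enqueue typed messages
            { std::lock_guard<std::mutex> lk(m); q.push({i % 3, i}); }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_all();                             // wake everyone for shutdown
        for (auto &t : pool) t.join();
        return 0;
    }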
Re: (Score:2)
Unix pipes are a very primitive example of a dataflow language [wikipedia.org].
Re: (Score:3, Insightful)
Re: (Score:2)
find | tar | compress | remote-execute 'remote-copy | uncompress | untar'
find -- you're sweeping the file system and comparing against rules. Maybe IO-driven CPU, at best.
tar -- You're appending a couple headers. No work.
compress -- OK, here there's a CPU bound.
remote-execute remote-copy -- Throwing stuff onto the network and pulling it off.
uncompress -- OK, more CPU bound.
untar -- Now you're adding files to the file system, but onl
Re: (Score:2)
When I came up through my CS degree, object-oriented programming was new. Programming was largely a series of sequentially ordered instructions. I haven't programmed in many years now, but if I wanted to write a parallel program I would not have a clue.
But why should I?
What is needed are new, high-level programming languages that figure out how to take a set of instructions and best interface with the available processing hardware on their own. This is where the computer smarts need to be focused today, IMO.
Crikey, when was your CS degree? Mine was a long time ago, yet I still learned parallel programming concepts (using the occam [wikipedia.org] language).
Re: (Score:2)
Re: (Score:2)
Uh, what a crap (Score:4, Informative)
But not if posted by The Ignorant.
What if the developer gives the character movement tasks its own thread, but it can only be rendered at 400 fps. And the developer gives the 3D world drawer its own thread, but it can only be rendered at 60 fps. There's a lot of waiting by the audio and character threads until everything catches up. That's called synchronization.
If a student of mine wrote this, a fail would be the immediate consequence. How can 400 fps be 'only'? And why is threading bad, if the character movement is ready after 1/400 of a second? There is not 'a lot of waiting'; instead, there are a lot of cycles left over to calculate something else. And 'waiting' is not 'synchronisation'.
[The audio rate of 7000 fps gave the author away, and I stopped reading. Audio does not come in fps.]
While we all agree on the problem of synchronisation in parallel programming, and maybe especially in the gaming world, we should not allow uninformed blurb on Slashdot.
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
How can 400 fps be 'only'?
You are responding to the following (hypothetical) statement:
but it can be rendered at only 400 fps
Which is different from the one written:
but it can only be rendered at 400 fps
See the difference?
yawn (Score:2)
Maybe nVidia will popularize parallel programming, maybe not. But I don't see any "shake up" or breakthroughs there.
Nvidia should just put out their own OS (Score:1)
Is this why there's no OpenGL 3.0? (Score:1)
It occurs to me that NVidia may not want OpenGL to succeed. Maybe they're holding up OpenGL development to give CUDA a place in the sun. Does anyone else get the same impression?
Re: (Score:2)
Look at the early OpenGL extension registry specifications - vendors couldn't even agree on what vector arithmetic instructions to implement.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Insightful)
However, having mentioned Microsoft... If *someone* does not want OpenGL to succeed, it is them... If and when OpenGL 3.0 ever appears, I bet there will be some talk of some "unknown party" threatening patent litigation...
Destroying OpenGL is of paramount importance to Microsoft,
Errata (Score:2)
But you knew that already.
CUDA is limiting, not liberating (Score:5, Informative)
Re: (Score:2)
Re: (Score:1, Interesting)
The EETimes article is much better (Score:4, Informative)
Blog spam. Link to actual article. Nvidia loss? (Score:3, Interesting)
Nvidia is showing signs of being poorly managed. CUDA [cuda.com] is a registered trademark of another hi-tech company.
The underlying issue is apparently that Nvidia will lose most of its mid-level business when AMD/ATI and Intel/Larrabee begin shipping integrated graphics. Until now, Intel integrated graphics has been so limited as to be useless in many mid-level applications. Nvidia hopes to replace some of that loss with sales to people who want to use their GPUs to do parallel processing.
Re: (Score:2)
Who cares? Medical equipment != parallel computation.
No one is going to "solve" the problem (Score:2)
Multi-threaded programming is a fundamentally hard problem, as is the more general issue of maximally-efficient scheduling of any dynamic resource. No one idea, tool or company is going to "solve" it. What will happen is that lots of individual ideas, approaches, tools and companies will individually address little parts of the problem, making it incrementally easier to produce efficient multi-threaded code. Some of these approaches will work together, others will be in opposition, there will be engineer
Reminds me of the OLD stories I used to hear... (Score:4, Interesting)
As I recall:
The processor, as it was sending the data to the bus, would have to tell the memory to get ready to read data through these cables. The "cables hack" was necessary because the cable path was shorter than the data bus path, and the memory would get the signal just a few ms before the data arrived at the bus.
These were fun stories to hear but now seeing what development challenges we face in parallel programming multi-core processors gives me a whole new appreciation for those old timers. These are old problems that have been dealt with before, just not on this scale. I guess it is true what they say, history always repeats itself.
Re: (Score:2)
Parallelism hype... (Score:2)