
Ask Slashdot: GPU of Choice For OpenCL On Linux? 110

Bram Stolk writes: So, I am running GNU/Linux on a modern Haswell CPU, with an old Radeon HD 5xxx from 2009. I'm pretty happy with the open source Gallium driver for 3D acceleration. But now I want to do some GPGPU development using OpenCL on this box, and the old GPU will no longer cut it. What do my fellow technophiles from Slashdot recommend as a replacement GPU? Go NVIDIA, go AMD, or just use the integrated Intel GPU instead? Bonus points for open source solutions. Performance is not really important, but OpenCL driver maturity is.
  • AMD with the proprietary drivers is the OpenCL platform of choice for buttcoin miners.

  • Using the binary driver has been fine for me.

    Not much more to say on the matter. ffmpeg + x264 make use of it nicely.

  • by Anonymous Coward on Sunday January 25, 2015 @10:51AM (#48898415)

    They're too busy with CUDA to give two shits about decent OpenCL performance.

    That's why the Radeon HD series was the mining GPU of choice for Bitcoin.

  • by Anonymous Coward

    Intel is your best bet for a mature, open source, OpenCL-compatible GPU, if performance doesn't matter, that is.

  • by Anonymous Coward on Sunday January 25, 2015 @10:54AM (#48898437)

    The future of GPUs is open standards. GPUs won't take off until all major vendors support the latest (OpenCL 2.0) standard.
    Here is the list of conformant products:
    https://www.khronos.org/conformance/adopters/conformant-products#opencl

    • I greatly prefer open standards as well. However, CUDA is considerably less painful to work in than OpenCL. NVIDIA has also demonstrated more commitment to capturing GPGPU business than AMD. For example, the first supercomputer on top500.org with AMD GPUs comes in at 94th. In contrast, NVIDIA GPUs are used in the 2nd ranked supercomputer. Xeon Phi is gaining in popularity, but Intel wants you to work in Cilk Plus, not OpenCL.

      That said, I believe the future is tight integration (i.e., cache coherence) between the CPU and GPU.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        I greatly prefer open standards as well. However, CUDA is considerably less painful to work in than OpenCL.

        I'm not sure how you came to this conclusion. I ported a debayering algorithm from CUDA to OpenCL, and as far as the kernel code was concerned, the only thing I even had to change was the expressions which retrieve the local and group IDs. The housekeeping code required on the host side is totally different and slightly more complicated for OpenCL, but it's worth the tradeoff in order to be able to run your GPU code on AMD and Intel cards as well as NVidia.
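
        For anyone curious, here is a minimal sketch of that ID mapping in OpenCL C. The kernel body is just a placeholder copy, not my debayering code, and the CUDA equivalents are noted in the comments.

        __kernel void copy1d(__global const float *in, __global float *out)
        {
            /* OpenCL work-item IDs            CUDA equivalents                       */
            size_t gid = get_global_id(0);     /* blockIdx.x*blockDim.x + threadIdx.x */
            size_t lid = get_local_id(0);      /* threadIdx.x                         */
            size_t grp = get_group_id(0);      /* blockIdx.x                          */

            /* Relationship between the three: grp * get_local_size(0) + lid == gid   */
            (void)lid; (void)grp;
            out[gid] = in[gid];
        }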

        • by Anonymous Coward

          (I'm not the GP)

          Well, among other things:

          - CUDA is C++-ish, OpenCL is (until now) C-ish. This has many implications, but no templates in OpenCL = pain, or macros. Macros = debugging pain.
          - With CUDA, you can just compile the damn code, no need for all the dynamic compilation mess (OK, it's not a mess, but it is when you don't need it to be dynamic; a host-side sketch follows below). When you do need the compilation to be dynamic, OpenCL may well be the better alternative, but I've not had to compile dynamically yet.
          - This is more of a personal impression, b
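
          To illustrate the second point, here is roughly what the host-side runtime-compilation boilerplate looks like in plain C. It is a minimal sketch with error handling mostly omitted; the embedded kernel source and the "copy1d" name are placeholders. Build with something like gcc host.c -lOpenCL.

          #include <CL/cl.h>
          #include <stdio.h>

          static const char *src =
              "__kernel void copy1d(__global const float *in, __global float *out) {"
              "    out[get_global_id(0)] = in[get_global_id(0)];"
              "}";

          int main(void)
          {
              cl_platform_id plat;
              cl_device_id dev;
              clGetPlatformIDs(1, &plat, NULL);
              clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
              cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);

              /* The kernel is compiled at run time, on the end user's machine. */
              cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
              if (clBuildProgram(prog, 1, &dev, "", NULL, NULL) != CL_SUCCESS) {
                  char log[4096];
                  clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG,
                                        sizeof log, log, NULL);
                  fprintf(stderr, "build failed:\n%s\n", log);
                  return 1;
              }
              cl_kernel k = clCreateKernel(prog, "copy1d", NULL);
              /* ... set args, enqueue, read results back ... */
              (void)k;
              return 0;
          }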

        • and Intel cards as well

          Why would you do this? Wouldn't you be better off using the CPU in this case?

      • Between a bit better language design and superior support and tools, CUDA is way easier to do your work in. We have 4 labs that use CUDA in one fashion or another, and none that use OpenCL. A number have tried it (they also tried things like the Cell cards that IBM sold for a while) but settled on CUDA as being the easiest in terms of development. Open standards are nice and all, but they've got shit to do and never enough time to do it, so whatever works the easiest is a win for them.

        On a different side of things, I've

      • by epine ( 68316 )

        NVIDIA has also demonstrated more commitment to capturing GPGPU business than AMD.

        FTFY.

        Once captured, twice cry.

    • by Fwipp ( 1473271 )

      GPUs have been doing just fine since the '90s.

  • nVidia Consumer Card (Score:2, Interesting)

    by Anonymous Coward

    I would go with an nVidia consumer card. They may be more expensive than the AMD ones. On the other hand, they offer CUDA and OpenCL support and are much faster.

    For the newer ones (GTX 9xx) you will need to wait a little bit until the driver shipped with CUDA actually supports the cards, though.

    • I would go with an nVidia consumer card.

      ...On Linux?

      • As long as you're all right with proprietary drivers, NVIDIA's Linux driver is quite solid. It needs to be, as it is used in all of their supercomputers.

      • by jedidiah ( 1196 )

        Get back under your bridge... troll.

        • Get back under your bridge... troll.

          Thank you for your well-reasoned analysis of the problems with binary-only drivers on Linux, and why my misgivings about them are not only unfounded but must be a case of arguing in bad faith. Your contribution to the discussion has enlightened us and enhanced the human condition.

      • They outperform Windows on the same machine with the binary drivers.

      • by aliquis ( 678370 )

        go with an nVidia

        ...On Linux?

        Definitely.

        • I picked up an nVidia GTX 970 about a month ago, and though I had to tinker a little bit with Debian to get it up and running, after I got the newest drivers installed it's been running rock solid and I haven't noticed much of a difference in performance between Debian and Windows 7 (Maybe 4 more fps in a game on windows where the game is running with the fps in the 290s. This wasn't an ideal test though because the renderer on windows was DirectX 9, while on Linux it was OpenGL). To get it going in Jess

          • I own a laptop with an ATi graphics chipset and their drivers are absolute garbage. Their Linux driver causes visual artifacts all the time on a composited GUI, and the machine crashes on shutdown one out of five times, with fglrx dumping core and causing the machine to never shut off (and potentially turn my laptop bag into a toaster oven x_x). I guess I'm going to return to the open source radeon drivers now that I can scratch my gaming itch on the desktop.

            Your report just screams "I'm running an ancient kernel and distribution with an early, dodgy compositor." Try upgrading to current and report your results. To prove you're not a troll, post a bit of the oops message if you get a crash on an up to date system.

        • by JustNiz ( 692889 )

          I second that. AMD sucks hard on linux compared to nVidia.

  • by Anonymous Coward

    Nvidia only supports OpenCL as an afterthought, preferring as always to offer up their proprietary CUDA shit instead. So go for an AMD card.

  • by captnjohnny1618 ( 3954863 ) on Sunday January 25, 2015 @11:18AM (#48898581)
    I work in a lab that does CT image reconstruction (all GPGPU computing) as part of what we do. I've been the one to program it using OpenCL under Ubuntu (I insisted on using Linux; Windows was too infuriating), so I'll share my experience.

    I have two Nvidia 780 GPUs in my machine (an Alienware Aurora R4) and getting everything running under linux was actually much smoother than my initial attempt to get OpenCL running under Windows 8, so I don't think you'll have too much trouble there. I use the binary blob from Nvidia and it has been pretty stable with the occasional driver crash for whatever reason (maybe once in a six month period, but things just restart and it's fine. It's usually my fault for writing shitty code). I personally really like this setup and the only thing that could make it better would be more GPUs and a great, solid open source driver.

    I would say that if you're going to use Nvidia GPUs for GPGPU computing, consider learning CUDA. Syntactically it's very similar to OpenCL, but the tools you have access to for debugging, profiling, and increasing performance, as well as the overall stability of the programs, seem to be much, much better. I suppose we should expect that, though, from a proprietary language, on proprietary hardware, using a proprietary driver. I've heard that you can get better performance (read: speedups) using CUDA over OpenCL, but I've never tested that for myself, or seen proof firsthand.

    I've learned OpenCL, and I like its portability and openness, but I look at some of the stuff my friends can do with CUDA and I can't say that I'm not envious. Mainly what I'm referring to is Nvidia's Nsight tooling, which can do OpenCL if you're willing to pay for the "pro" edition. Also, Nvidia GPUs are scalar based, so if much of your speedup would come from using OpenCL's vector types, that won't happen on Nvidia GPUs the same way that it would on AMD. Programming might be more convenient, but performance will stay the same.
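
    To make the scalar-vs-vector point concrete, here is a hedged sketch of my own (not our lab's code): on older AMD vector hardware the float4 kernel maps onto wide ALU operations, while NVIDIA's scalar cores simply break it back down into four scalar operations, so it buys convenience more than speed there.

    // Scalar version: one work-item per element.
    __kernel void scale1(__global float *x, const float a)
    {
        size_t i = get_global_id(0);
        x[i] *= a;
    }

    // Vectorized version: one work-item per four elements via float4.
    __kernel void scale4(__global float4 *x, const float a)
    {
        size_t i = get_global_id(0);
        x[i] *= a;   /* a float4 multiply; a scalar GPU executes it as 4 scalar ops */
    }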

    Hope that helps. Feel free to ask more questions.
    • by captnjohnny1618 ( 3954863 ) on Sunday January 25, 2015 @11:28AM (#48898639)
      Also, just to add: from everything I've read out there as of right now, the consensus is that AMD's support for OpenCL is better than Nvidia's. That being said, performance is dependent on a lot of things (the programmer, the algorithms used, how the problem is parallelized, etc.), and the raw power of Nvidia GPUs can, in some cases, despite the "less support," still be better. Personally, I would choose Nvidia over AMD given the chance to choose again.
    • by Anonymous Coward

      I recommend the ocl-icd package to make it easy to switch OpenCL implementations on the fly. Also, download the Intel and AMD OpenCL runtimes, which support CPU-based computation using SIMD instructions and multicore parallelism, and try them out as well as GPUs. You can then micro-benchmark your own algorithms on different vendor runtimes quite easily. I have found that the Intel OpenCL does a very decent job of auto-vectorization, so my scalar-based OpenCL code ran almost as fast as my hand-vectorized version.
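
      A small sketch along those lines, assuming the ocl-icd loader and the vendor ICDs are installed: it lists every platform and device the loader can see, plus the OpenCL version each one reports, so you can pick targets for micro-benchmarking. Build with something like gcc list.c -lOpenCL.

      #include <CL/cl.h>
      #include <stdio.h>

      int main(void)
      {
          cl_platform_id plats[8];
          cl_uint nplat = 0;
          clGetPlatformIDs(8, plats, &nplat);

          for (cl_uint p = 0; p < nplat; p++) {
              char pname[256], pver[256];
              clGetPlatformInfo(plats[p], CL_PLATFORM_NAME, sizeof pname, pname, NULL);
              clGetPlatformInfo(plats[p], CL_PLATFORM_VERSION, sizeof pver, pver, NULL);
              printf("Platform: %s (%s)\n", pname, pver);

              cl_device_id devs[8];
              cl_uint ndev = 0;
              clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 8, devs, &ndev);
              for (cl_uint d = 0; d < ndev; d++) {
                  char dname[256], dver[256];
                  clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof dname, dname, NULL);
                  clGetDeviceInfo(devs[d], CL_DEVICE_VERSION, sizeof dver, dver, NULL);
                  printf("  Device: %s (%s)\n", dname, dver);
              }
          }
          return 0;
      }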

    • Which company do you work for just out of curiosity?
      • I work for a lab at UCLA. Our group currently has no industry affiliation. Previously, although not since I have been part of the group, we received funding from Siemens.
      • Perhaps I should also add at this point that all comments and opinions are my own. I support Nvidia based on my own experience, but I receive no benefit from them as a result.
  • by iamacat ( 583406 ) on Sunday January 25, 2015 @11:21AM (#48898595)

    Integrated graphics in your CPU will give you modest performance but a stable, open source OpenCL driver. If it proves too slow for your particular project, you will be able to compare benchmarks and get the cheapest card that is fast enough to, say, run your animation at 60fps. If you are planning to distribute your code, you will need several GPUs to test with anyway.

  • Comment removed based on user account deletion
    • Submitter here:
      I don't need 3D in the sense of rasterizing triangles in 3D space.
      I need the GPU to do General Purpose computing, as clearly stated in the summary.

      In my case: I need to do a massive amount of intersection tests between rays and AABBs.
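
      For that kind of workload, here is a minimal OpenCL C slab-test sketch; the buffer layout and kernel signature are my own assumptions, not the submitter's actual code.

      // One work-item per ray, tested against a single box (bmin/bmax).
      // Relies on IEEE infinities when a direction component is exactly zero.
      __kernel void ray_aabb(__global const float4 *orig,   /* ray origins    */
                             __global const float4 *dir,    /* ray directions */
                             const float4 bmin, const float4 bmax,
                             __global int *hit)
      {
          size_t i = get_global_id(0);
          float4 inv = (float4)(1.0f) / dir[i];
          float4 t0 = (bmin - orig[i]) * inv;
          float4 t1 = (bmax - orig[i]) * inv;
          float4 tlo = fmin(t0, t1);
          float4 thi = fmax(t0, t1);
          float tmin = fmax(fmax(tlo.x, tlo.y), tlo.z);
          float tmax = fmin(fmin(thi.x, thi.y), thi.z);
          hit[i] = (tmax >= fmax(tmin, 0.0f)) ? 1 : 0;
      }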

  • Don't know about pure OpenCL. But... Intel has overall good support, and the drivers are not too bad compared to their Windows counterparts. It lacks OpenGL 4 features; it only offers OpenGL 3. AMD has open source drivers, which mostly suck: performance in games is rather bad, and the same goes for synthetic tests. Many people report rather good bitcoin performance. The proprietary drivers are a bit better on performance but worse on stability. Nvidia has 2 types of drivers. The reverse engineered ones, which suck and blow when it comes t
  • I have used 2 AMD cards for programming OpenCL on Linux, an HD 4650 and an HD 7770. My 4650 card was obsoleted by the AMD proprietary drivers in 18 months; my HD 7770 is being obsoleted (for new Linux and OpenCL support) by AMD as I write this, after about 2 years. This means that if I want to keep doing OpenCL development, I have to use the old driver and old kernels and old X servers, and I can't use the current version of OpenCL, etc.
    I don't think I will buy AMD again for this reason. Nvidia doesn't obsolete their cards anywhere near as fast.

    • The solution to this is to use the open source AMD drivers.

      OpenCL on them is only just starting to mature, but that is where all the future development really is, and it is plenty usable for many current tasks.

  • by SoftwareArtist ( 1472499 ) on Sunday January 25, 2015 @01:47PM (#48899405)

    If you want to write modern OpenCL code and run it on a GPU, AMD is your only option.

    In terms of performance, NVIDIA is actually the best. But they've been stuck at OpenCL 1.1 for years, while everyone else has long since moved to newer versions. Until (if) they add OpenCL 2.0 support, they'll be a bad choice.

    Intel doesn't support running OpenCL on the GPU under Linux. See the chart at the end of https://software.intel.com/en-... [intel.com]. You can still write OpenCL programs, but you'll just be running them on your CPU.

    • by JustNiz ( 692889 )

      >> They've been stuck at OpenCL 1.1 for years

      That's because their own API (CUDA) is far better and more developed.

  • by rgbe ( 310525 ) on Sunday January 25, 2015 @02:42PM (#48899743)

    Have a look at this talk, specifically 8 minutes 30 seconds in:
      https://www.youtube.com/watch?... [youtube.com]

    The talk was given at the recent Linux Conf Australia (held in New Zealand). It shows that AMD supports OpenCL 2.0, while Nvidia only supports version 1.1 (released in 2010). I spoke to the speaker after his talk and he said Nvidia are basically dragging their heels with regard to supporting more recent versions. Nvidia also request unconventional features be put into the spec, and then never implement those features. Obviously Nvidia are doing well with their own CUDA language and seem to be trying to create a walled garden. It sounds like if you are going for openness and not for speed, then you could look at Intel or AMD (both support version 2.0).

  • I'm no serious parallel programmer, so I don't know how helpful this will be to you, but this might be of interest: 22-Way AMD/NVIDIA OpenCL Linux Benchmarks To Start Off 2015 [phoronix.com]. Mike Larabel does some great work.
  • by ravyne ( 858869 ) on Sunday January 25, 2015 @07:04PM (#48901201)
    As for a particular model: if double-precision performance is important, go with a 7970 or 280X on the AMD side (or a 7990 if you need a dual-GPU card in one slot). They did double precision at 1/4th their single-precision rate, which is the best you're going to find at consumer-grade pricing. Even more modern or more powerful cards have backed off on double precision, so something like a 290X has nearly 40% more shader ALUs than a 280X and will perform better on single-precision workloads, but it only does double precision at a 1/8th rate, so it's actually slower on purely double-precision workloads. All of nVidia's consumer cards are in the ballpark of 1/8th to 1/16th rate too, except the GTX Titan Black, which did 1/3rd rate, but at $1500 is nearly Quadro pricing anyway.
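
    To put rough numbers on that (a back-of-the-envelope sketch; the 1 GHz clocks are approximations, and the ALU counts and DP rates are the ones quoted above):

    #include <stdio.h>

    /* Theoretical peak DP GFLOPS = ALUs * 2 (FMA) * clock in GHz * DP rate. */
    static double dp_gflops(int alus, double ghz, double rate)
    {
        return alus * 2.0 * ghz * rate;
    }

    int main(void)
    {
        printf("280X: ~%.0f DP GFLOPS\n", dp_gflops(2048, 1.0, 1.0 / 4.0)); /* ~1024 */
        printf("290X: ~%.0f DP GFLOPS\n", dp_gflops(2816, 1.0, 1.0 / 8.0)); /* ~704  */
        return 0;
    }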

    If money is no object, the AMD FirePro W9100 is the workstation version of the 290X: it does double precision at 1/2 the single-precision rate, it is the current best of both worlds, and it will probably remain so for the remainder of the year, but it carries a price tag of three grand or so.
  • That's really the question. Are you using the GPU for heavy-duty computing, or graphics, or...?

    We've got money around here (we're a civilian-sector US gov't agency) going into NVidia Tesla cards - in several servers, *two* of 'em - for heavy lifting with things like R. We do use the installable proprietary drivers, and they work.

                        mark
