AMD Graphics Open Source Software Hardware

AMD: It's Time To Open Up the GPU (gpuopen.com) 152

An anonymous reader writes: AMD has called for the opening up of GPU technology to developers. Nicolas Thibieroz, a senior engineering manager for the company, announced today the launch of GPUOpen, its initiative to provide code and documentation to PC developers, embracing open source and collaborative development with the community. He says, "Console games often tap into low-level GPU features that may not be exposed on PC at the same level of functionality, causing different — and usually less efficient — code paths to be implemented on PC instead. Worse, proprietary libraries or tool chains with "black box" APIs prevent developers from accessing the code for maintenance, porting or optimization purposes. Game development on PC needs to scale to multiple quality levels, including vastly different screen resolutions." And here's how AMD wants to solve this: "Full and flexible access to the source of tools, libraries and effects is a key pillar of the GPUOpen philosophy. Only through open source access are developers able to modify, optimize, fix, port and learn from software. The goal? Encouraging innovation and the development of amazing graphics techniques and optimizations in PC games." They've begun by posting several technical articles to help developers understand and use various tools, and they say more content will arrive soon.
This discussion has been archived. No new comments can be posted.

AMD: It's Time To Open Up the GPU

  • by xxxJonBoyxxx ( 565205 ) on Tuesday January 26, 2016 @05:05PM (#51376861)

    >> AMD: It's Time To Open Up the GPU

    Translation: quit optimizing for proprietary Intel technology; start developing and optimizing for AMD's proprietary technology, particularly LiquidVR, instead.

    Call me when Intel signs up; otherwise, meh.

    • What? (Score:5, Interesting)

      by Anonymous Coward on Tuesday January 26, 2016 @05:17PM (#51376975)

      This has nothing to do with Intel and everything to do with getting people to adopt AMD's open standards over NVIDIA's closed standards, which is actually better for the health of the industry as a whole. What is it you expect Intel to sign up for? Their graphics products are garbage and absolutely nobody wants to use them.

      • In the graphics industry, technology consumers go where the performance is best - they aren't going to drop performance just because the framework is open, so AMD had better be competitive on performance as well as open.

        • by KGIII ( 973947 )

          While that's likely true in the majority of cases, it's not entirely true - at least for some of us. But, at the same time, I'm not sure how much that matters.

          I tend to buy a lot of hardware. I'm not really sure why I do so these days. It's not like I get any real performance increases and it's not like I buy bleeding edge. I just refresh often, give away my older hardware, and like to play with new/different configurations at the bare metal level and not just at the VM level. I know, I know... It doesn't m

      • Re: (Score:2, Informative)

        by SuperKendall ( 25149 )

        Intel graphics products these days are pretty decent and many people already do use them. You are living in a fantasy if you think otherwise...

      • Re: (Score:2, Informative)

        and by "absolutely nobody" you actually mean "90% of the computing public that are fine with 'good enough for what I'm doing'"

        Here's a hint: You don't need a $400 GPU for Netflix.

        • by Anonymous Coward

          Here's a hint: You don't need a $400 x86 CPU for Netflix, either.

        • by aliquis ( 678370 )

          Here's a hint: You don't need a $400 GPU for Netflix.

          and regarding what the AC, http://hardware.slashdot.org/c... [slashdot.org], said:

          Here's a hint: You don't need a $400 x86 CPU for Netflix, either.

          AMD A10 7850K (3.7 GHz quad-core) with integrated graphics: $130.
          AMD Athlon II 860K (3.7 GHz quad-core) without integrated graphics: $75.

          Price difference for integrated graphics: $55

          And for those who want to play games: a GTX 750 Ti is $105, which of course is better added to the $75 CPU rather than the $130 one.

        • by KGIII ( 973947 )

          Heh, I should have scrolled down. I made a bit of a long winded post about that and about how I'd buy it just 'cause it's open AND assuming it'd be likely to work properly without a problem. The thing is, for the most part, shit already all works without a problem. Linux, it just works. It's so different than what it used to be. Now? It generally just works. Maybe, maybe once in a while, do I need to tweak something, find and compile something with a patch, edit a config file in a text editor, or install th

      • by AmiMoJo ( 196126 )

        In fact Intel is already getting on board with AMD's open standards like FreeSync [wikipedia.org], avoiding Nvidia's proprietary version.

    • Re: (Score:1, Informative)

      by indi0144 ( 1264518 )
      How about they open a few QA jobs and stop shipping the stillborn software that is "Crimson"? I was a loyal AMD fanboy from the K6 era, and an ATI loyalist from around the same time, but my last CPU from them was a Phenom; I got tired of waiting for a decent replacement and went with Intel. Now I'm trying to get by with this old HD7750 because I love the passive cooling and don't really need much video performance for games, but the drivers are making it hard for me to keep supporting them (yay new
    • by ShanghaiBill ( 739463 ) on Tuesday January 26, 2016 @05:34PM (#51377141)

      Translation: quit optimizing for proprietary Intel technology

      This is not targeted at Intel. It is targeted at NVidia. They are looking to the future when GPGPU is expected to be a bigger slice of the GPU market. NVidia currently dominates with their (proprietary) CUDA interface, while AMD relies on the less efficient OpenCL. More openness will help AMD (and Intel) while working against NVidia.

      In almost all markets the dominant company will prefer to push their proprietary solution, while companies with smaller market shares will push openness.

      • by Anonymous Coward

        while companies with smaller market shares will push openness.

        Because they want to leverage free labor, then they will close it off. Notice how Google has done it on Android (play services, core apps going closed)?

        In any case, this is now a challenge for the open source community: typically the open source drivers have been quite poor in comparison to the proprietary ones, so now is the chance for the community to back up its claim that open source is better.

    • Re: (Score:3, Informative)

      by CajunArson ( 465943 )

      What's all "proprietary" about Intel graphics exactly?
      Their open source support in the Linux kernel is substantially better than AMD's half-assed "sorta kinda open when we feel like it for obsolete parts" approach.

      Another thing about this "open" concept that nobody is really talking about amidst the cheerleading: AMD is trying to push *proprietary* AMD-only hardware features that don't fit very well with current APIs -- and that includes Vulkan, BTW, not just the old APIs.

      So what are they doing? They are "o

      • Another thing about this "open" concept that nobody is really talking about amidst the cheerleading: AMD is trying to push *proprietary* AMD-only hardware features that don't fit very well with current APIs -- and that includes Vulkan, BTW, not just the old APIs.

        Seems like you're thinking of Mantle (which is of course the basis for the Vulkan work) but Vulkan - like OpenGL - will be an open specification, not proprietary.

        The idea is not to fit with current APIs, because current API paradigms are not well suited to modern hardware. For example, much of the workload is serial and so cannot leverage any significant amount of the available CPU power; Vulkan and DirectX 12 change this and allow multiple threads to do things like create and submit command buffers in parallel. Another part of it is that the resource management of the current high-level APIs is just an abstraction of the generic driver implementation, so we see performance gains when the power of the hardware is exploited through explicit application-specific optimizations made in the driver by the vendor working with the game developer. Vulkan and DirectX 12 break from this by transferring the onus of low-level resource management to the game developer, so they can fully utilize the hardware in the way they need without being hamstrung by the driver.
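        To make the parallel-recording point concrete, here is a minimal C++ sketch of what that looks like in Vulkan: one command pool and one command buffer per worker thread, recorded concurrently and then submitted together from a single thread. This is only an illustration; it assumes an already-created VkDevice and graphics queue family, and it omits error handling, the draw calls themselves, and the final vkQueueSubmit.

        #include <vulkan/vulkan.h>
        #include <thread>
        #include <vector>

        // Vulkan command pools are externally synchronized, so give each thread its own
        // pool and let it record its share of the frame's draw calls independently.
        std::vector<VkCommandBuffer> recordInParallel(VkDevice device,
                                                      uint32_t queueFamily,
                                                      unsigned threadCount)
        {
            std::vector<VkCommandPool>   pools(threadCount);
            std::vector<VkCommandBuffer> buffers(threadCount);
            std::vector<std::thread>     workers;

            for (unsigned i = 0; i < threadCount; ++i) {
                workers.emplace_back([&, i] {
                    VkCommandPoolCreateInfo poolInfo{};
                    poolInfo.sType            = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
                    poolInfo.queueFamilyIndex = queueFamily;
                    vkCreateCommandPool(device, &poolInfo, nullptr, &pools[i]);

                    VkCommandBufferAllocateInfo allocInfo{};
                    allocInfo.sType              = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
                    allocInfo.commandPool        = pools[i];
                    allocInfo.level              = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
                    allocInfo.commandBufferCount = 1;
                    vkAllocateCommandBuffers(device, &allocInfo, &buffers[i]);

                    VkCommandBufferBeginInfo beginInfo{};
                    beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
                    vkBeginCommandBuffer(buffers[i], &beginInfo);
                    // ... record this thread's portion of the draw calls here ...
                    vkEndCommandBuffer(buffers[i]);
                });
            }
            for (auto& w : workers) w.join();
            // Pools are leaked here for brevity; a real renderer keeps them around for reuse.
            return buffers; // submit all of them with a single vkQueueSubmit on one thread
        }

        OpenGL has no equivalent of this: all commands go through a single context that can only be current on one thread at a time.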

      • What's all "proprietary" about Intel graphics exactly?
        Their open source support in the Linux kernel is substantially better than AMD's half-assed "sorta kinda open when we feel like it for obsolete parts" approach.

        Add to that, Intel publishes incredibly detailed ISA specs for their GPUs. They have some quite neat features in some of them, such as a two dimensional register file, where operands to instructions are identified by a base and a stripe size, so you can load four vectors horizontally (e.g. 4 pixel values) and then compute on them vertically (e.g. red values, green values, etc) without needing to do any vector shuffle operations. You can also do diagonal operations, which makes the code very dense as you r

    • by phorm ( 591458 )

      Well, in most senses one could hope that open != proprietary, in which case it should be something *all* vendors can standardize on.
      Better than three vendors with closed solutions.

    • Re: (Score:1, Flamebait)

      Alternate translation: Would somebody who knows what they're doing please fix our crappy drivers?

    • by RogueyWon ( 735973 ) on Tuesday January 26, 2016 @07:10PM (#51377837) Journal

      It's aimed at Nvidia, not Intel, and it's all about hair.

      Or rather, it's all about Nvidia GameWorks, which got a lot of attention this year thanks to a number of bleeding-edge games, most notably The Witcher 3.

      The horribly over-simplified tl;dr version is that Nvidia have been encouraging PC developers to use a set of closed-and-proprietary tools, which allow for some remarkably pretty in-game effects but more or less screw over AMD cards.

      This, combined with the fact that Nvidia has, in general, better driver support, quieter and more power-efficient cards and, at the top end of the market, better single-card performance, has put AMD into a pretty bad place in the PC graphics card market right now. Yes, they still tend to have a slight price-to-performance advantage, but the quality-of-life drawbacks of an AMD card, combined with the GameWorks effect, have driven down their market share and, right now, make it hard to recommend an AMD card.

      There are no "goodies" or "baddies" here. Nvidia's GameWorks strategy is undoubtedly fairly dubious in terms of its ethics. At the same time, they are putting out better (and more power-efficient, so also on one level more environmentally friendly) cards (and the GameWorks effects can be VERY pretty), while AMD continues to put out cards that burn with the heat of a million fiery suns and have long-standing, unaddressed issues with their driver support.

      • Sure that's true for PC exclusives but you do have to remember that a lot of games these days target multiple platforms and if you're targeting the major consoles then any nVidia-specific tools/libraries are not an option.
        • by tepples ( 727027 )

          a lot of games these days target multiple platforms

          Do Windows, OS X, GNU/Linux, Android, and iOS count as "multiple platforms"?

          • Well yes, but of course Windows, OS X & GNU/Linux are all, generally speaking, the PC platform as you can very often run 2 or more of those operating systems on the same hardware. This is not true of iOS, Android or the console operating systems.
            • by tepples ( 727027 )

              Windows, OS X & GNU/Linux are all, generally speaking, the PC platform as you can very often run 2 or more of those operating systems on the same hardware.

              For a Windows-exclusive game, each player on OS X or GNU/Linux will need to buy a Windows license. For a Mac-exclusive game, each player on Windows or GNU/Linux will need to buy a Mac.

              But my point is that even though AMD won the eighth console generation, a lot of games don't target consoles because the organizational overhead isn't worth it, especially games from a smaller developer that hasn't already released three pay-to-play games. And if the NVIDIA-specific libraries are also available for the Tegra c

              • I get that you're trying desperately hard to be pedantic but how does any of that affect what I said?
        • Actually, most of the games which have used GameWorks to date have been multi-platform games. For the most part, and with the odd dishonourable exception, the days of lazy, barebones PC ports are behind us. Developers are quite happy to spend time optimising PC versions, including through the use of Nvidia-specific tools/libraries.

          The Witcher 3 was the highest profile example and, if you have the hardware to drive the GameWorks stuff (there is a serious performance cost), then it looks astounding next to th

          • Actually, most of the games which have used GameWorks to date have been multi-platform games.

            Yes, but that's only for completely non-essential things; you can't use it for anything you absolutely must have. If AMD provided something akin to GameWorks (TressFX is coming along slowly, and was even used in Tomb Raider) then developers would use that to improve the visual fidelity on AMD GPUs (including consoles).

          • I'm not an expert on these things, but I think what happens is that you have a "basic" effect if you don't have the appropriate hardware and a "pretty" one if you do. That's the case, for example, with Batman: Arkham City: if you have an Nvidia card on a PC you get pretty but non-essential-to-gameplay graphical effects, and you get simpler effects otherwise (consoles or AMD cards on PC).
            tl;dr: The games usually work on several platforms but have extra effects on PCs with Nvidia cards.
          • Yes, that's pretty much correct: nVidia provides extra, easy-to-use tools to exploit their hardware, so some developers will invest some time in them. But that's obviously only for extra non-essential things.
        • by tepples ( 727027 )

          Trying again based on clarifications provided in your other posts:

          if you're targeting the major consoles then any nVidia-specific tools/libraries are not an option.

          But if a dev isn't yet eligible to target consoles, it's more likely to get suckered into this crap.

          • But if a dev isn't yet eligible to target consoles, it's more likely to get suckered into this crap.

            If a dev isn't eligible to target consoles then they aren't targeting consoles are they? (yes, that is a rhetorical question)

      • by aliquis ( 678370 )

        My impression is that the AMD performance hit with hair comes from poorer tessellation performance on the AMD cards.

        So that one could likely be solved by just making them better at tessellation.

        Nvidia is of course happy to throw lots of fur and hair into games if the performance hit is low on Nvidia cards and high on AMD cards, leading to worse performance numbers for the AMD cards in said games and more sales of Nvidia cards.

      • by StormReaver ( 59959 ) on Wednesday January 27, 2016 @08:04AM (#51380735)

        There are no "goodies" or "baddies" here.

        There are now. AMD, through necessity, has chosen the right path. NVidia, through ability, has chosen the wrong path.

        Even now, when AMD cards perform worse than NVidia, I have started choosing AMD for both personal and professional use because of the Open Source AMD drivers. AMD's doubling down on Open Source has validated that decision, and I will likely never buy, nor recommend to my customers, another NVidia card.

        I have completely inverted my recommendations for Linux video. It used to be, "buy NVidia and be done with it," since AMD's driver was a huge pain in the ass to get working on Linux. But Open Source has a powerful appeal to me, having been burned over and over again by proprietary business practices over the decades, and now my recommendation has switched to AMD for the same reason.

        • That's all nice and well from the Linux side of things, but it's the Windows market that makes them money.

          I stopped buying AMD graphics cards because the Windows drivers kept shitting themselves and making my life miserable... consistently over a period of many years. Even with their driver [branding] overhaul, it's going to take a lot of time to rebuild trust and convince me their hardware is worth buying again. That's not NVidia's fault.

      • Actually, I got my R9 390 precisely because of quality. Nvidia had this embarrassing thing where the cards they marketed as being DX12 compatible weren't, in fact, fully compatible with DX12. Some missing functionality had to be done in software, at a noticeable performance cost. That's what made me choose AMD as I want my card to last me a few years, probably well into when DX12 will actually matter.

        And it's not the first time Nvidia released a product that only technically did what it was supposed to do
    • by aliquis ( 678370 )

      No. It's about Nvidia.

    • AMD is trying to do the right thing and keep the market moving toward a better standard. Nvidia pays developers under the table and gives away tons of free stuff so that game companies will use them. Nvidia is closed source, and the way they work with the industry is hurting PC vs. console. Note that Nvidia said they are going to use AMD's new hardware standard yet give nothing back. That's how they roll.... Take all and give us higher prices. AMD does it for money but at least they are trying to do the rig
  • Console games? You mean like "robots" and "nethack", right? 'Cause you run them on the console, rather than in graphics mode?

    • Re: (Score:1, Informative)

      by Anonymous Coward

      Xbox, PlayStation, etc. You know, the gaming devices that are currently all using custom ATI cards. Oh, wait -- you were just trying to be clever.

    • No no, 'console games' as in "games you want to play when you're finally bored with shooters".

      • by aliquis ( 678370 )

        No no, 'console games' as in "games you want to play when you're finally bored with shooters".

        They make games you will never want to play?!

    • Console games? You mean like "robots" and "nethack", right? 'Cause you run them on the console, rather than in graphics mode?

      Gaming console, not terminal-emulator-in-a-window console.

      I know you're probably trying to be funny. But some people may find your post confusing.

  • Hey, AMD, show us your new CPUs for 2016. Everything you got now is long in the tooth.
    • Something like this Lenovo IdeaPad Y700 [newegg.com]: AMD A10-8700P, 15.6" (1920x1080), 8 GB RAM + 4 GB Radeon R9 M380
      seemed pretty decent to me, especially when your budget is less than $1500 and preferably $1000.
    • by steveha ( 103154 ) on Tuesday January 26, 2016 @07:21PM (#51377909) Homepage

      Hey, AMD, show us your new CPUs for 2016. Everything you got now is long in the tooth.

      How right you are. But their basic problem has been that they were still stuck on old semiconductor fabrication processes. Intel has spent a bunch of money on fab technology and is about two generations ahead of AMD. It didn't help that their current architecture isn't great.

      I'm not a semiconductor expert, but as I understand it: the thinner the traces on the semiconductor, the higher the clock rate can go or the lower the power dissipation can be (those two are tradeoffs). Intel's 4th-generation CPUs were fabbed on a 22 nm process, and their current CPUs are fabbed on a 14 nm process. AMD has been stuck at 28 nm and is in fact still selling CPUs fabbed on a 32 nm process. It's brutal to try to compete when so far behind. But AMD is just skipping the 22 nm process and going straight to 14 nm. (Intel has 10 nm in the pipeline, planned for a 2017 release [wccftech.com], but it should be easier to compete at 14 nm vs 10 nm than at 32/28 nm vs 14 nm! And it took years for AMD to get to 14 nm, while there are indications [wccftech.com] that they will make the jump to 10 nm more quickly.)

      But AMD is about to catch up. AMD has shown us their new CPU for 2016; its code-name is "Zen" and it will be fabbed on a 14 nm process. AMD claims the new architecture will provide 40% more instructions-per-clock than their current architecture; combined with finally getting onto a modern fab process, the Zen should be competitive with Intel's offerings. (I expect Intel to hold onto the top-performance crown, but I expect AMD will offer better performance per dollar with acceptable thermal envelope.) Wikipedia [wikipedia.org] says it will be released in October 2016.

      http://www.techradar.com/us/news/computing-components/processors/amd-confirms-powerhouse-zen-cpus-will-arrive-for-high-end-pcs-in-2016-1310980 [techradar.com]

      Intel is so far ahead of AMD that it's unlikely that AMD will ever take over the #1 spot, but I am at least hoping that they will hold on to a niche and serve to keep Intel in check.

      The ironic thing is that Intel is currently making the best products, yet still they feel the need to cheat with dirty tricks like the Intel C Compiler's generating bad code [slashdot.org] for CPUs with a non-Intel CPUID. Also I don't like how Intel tries to segment their products [tinkertry.com] into dozens of tiers to maximize money extraction. (Oh, did you want virtualization? This cheaper CPU doesn't offer that; buy this more expensive one. Oh, did you want ECC RAM? Step right up to our most expensive CPUs!)

      Intel has been a very good "corporate citizen" with respect to the Linux kernel, and they make good products; but I try not to buy their products because I hate their bad behavior. I own one laptop with an Intel i7 CPU, but otherwise I'm 100% non-Intel.

      I want to build a new computer and I don't want to wait for Zen so I will be buying an FX-8350 (fabbed on 32 nm process, ugh). But in 18 months or so I look forward to buying new Zen processors and building new computers.

      • I'm not a semiconductor expert, but as I understand it: the thinner the traces on the semiconductor, the higher the clock rate can go or the lower the power dissipation can be

        That was true until about 2007. Then we hit the end of Dennard scaling. Now you get more transistors, but your power consumption per transistor remains quite similar, and so you need to be more clever. That's much easier on a CPU than a GPU because CPUs run quite a wide variety of workloads, and so there are lots of things you can add that will be a big win for a subset of workloads but not consume power at other times. On a GPU, it's harder because they tend to saturate a lot more of the execution engine

    • long in the tooth.

      But, but, the untapped potential!

  • w00t! (Score:3, Insightful)

    by zapadnik ( 2965889 ) on Tuesday January 26, 2016 @05:10PM (#51376901)

    As an indie game developer, this is fantastic news. I hope Apple also makes good use of this; the Apple OpenGL drivers run at about half the speed of AMD's Windows OpenGL drivers on the same hardware (a recent Mac Pro with dual D700s under OS X and Bootcamp-ed to Windows).

    Hopefully it also means that Open Source folks (FSM bless 'em) will also improve the install process for AMD/ATI drivers on Linux, if not the performance.

    This is great news for those working in real time 3D.

    • Ahh the possibilities! Nuff said. Haggling over who's who is for wankers.
    • I hope Apple also make good use of this, the Apple OpenGL drivers run at about half the speed of AMD's Windows OpenGL drivers on the same hardware (a recent Mac Pro with dual D700s under OS X and Bootcamp-ed to Windows).

      Apple has Metal; they aren't interested in OpenGL, which is why their drivers are so far behind.

      This is great news for those working in real time 3D.

      It's great if you're targeting a narrow hardware range because you can now optimize specifically for it, but we've had this at the vendor level before (S3, 3Dfx, nVidia, ATi, PowerVR, Matrox, 3DLabs, etc.) and it was a nightmare. Even if the application supported your card, that didn't mean it supported it particularly well, or as well as other cards. So picking a graphics card was less about how powerful the

      • Thanks for posting. The interesting thing about the Metal framework is that it is a mixed bag of performance compared to OpenGL. In some areas there are performance benefits, but there are also losses.
        http://arstechnica.com/apple/2... [arstechnica.com]

        Given the time required to build and maintain great products (the longer you keep something going, the more money you can make) I personally feel that "portability is THE killer feature". Hence I prefer OpenGL compared to Metal. Developing only for Apple is a losing pro

        • I certainly do think Metal is a poor solution (being platform-specific, proprietary and not as comprehensive as Vulkan - based on knowledge of Mantle), especially when we have Khronos trying to unify the 3D graphics world, and the lag of OpenGL behind Metal and DirectX will be addressed with Vulkan.

          With AMD opening up the GPU it means the OpenGL driver writers can squeeze more performance out, without the application developer having to do the low-level work. This is the model I personally prefer.

          The problem here is the OpenGL driver model; ultimately the real performance gains come from application-specific changes to the behavior of the driver - specifically resource management - so you en

            • The problem here is the OpenGL driver model; ultimately the real performance gains come from application-specific changes to the behavior of the driver - specifically resource management - so you end up with a monolithic driver that has a lot of application-specific code in it. Vulkan alleviates this by putting it in the hands of the application developer rather than having to rely on the driver.

            IMHO this is also a mistake. Having the application developer do driver-level resource management is a fail, IMHO. I like APIs that operate on two levels: one level is the low-level stuff, where you are required to make every decision (Vulkan); then you have another part of the *same* API that has convenience methods for very common things that nearly every application needs (as OpenGL does). Pushing more work to the application developer just so the driver developer can do less is a fail and will result

            • IMHO this is also a mistake. Having the application developer do driver-level resource management is a fail, IMHO.

                Why exactly? Take glUniform calls, for example: having the driver create its own buffer to copy that data into and then transfer it to GPU memory is far less efficient than having the application map GPU memory directly.
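                To illustrate the difference, here's a rough C++ sketch under modern desktop GL (it assumes a loader exposing OpenGL 4.4 / ARB_buffer_storage; the PerFrameData struct and binding point 0 are made up for the example):

                #include <GL/glew.h>   // any loader exposing GL 4.4 will do
                #include <cstring>

                struct PerFrameData { float mvp[16]; };   // hypothetical per-frame uniforms

                // Classic path, for contrast: the driver owns the storage and copies on every call.
                //   glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvp);

                // Mapped path: allocate immutable storage once, keep it persistently mapped,
                // and write straight into memory the GPU reads from.
                void* createPersistentUbo(GLuint* ubo)
                {
                    glGenBuffers(1, ubo);
                    glBindBuffer(GL_UNIFORM_BUFFER, *ubo);
                    glBufferStorage(GL_UNIFORM_BUFFER, sizeof(PerFrameData), nullptr,
                                    GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
                    return glMapBufferRange(GL_UNIFORM_BUFFER, 0, sizeof(PerFrameData),
                                    GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
                }

                // Each frame: no driver-side staging copy, just write and bind.
                void updatePerFrame(void* mapped, GLuint ubo, const PerFrameData& data)
                {
                    std::memcpy(mapped, &data, sizeof(data));
                    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
                }

                The mapped path is more or less the only model the new low-level APIs offer.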

              I like APIs that operate on two levels: one level is the low-level stuff, where you are required to make every decision (Vulkan); then you have another part of the *same* API that has convenience methods for very common things that nearly every application needs (as OpenGL does).

                The black box of the driver means that when you allocate memory on the GPU you don't know how much it is using; worse still, it can fail with a GL_OUT_OF_MEMORY error, which could mean a few different things. Do you have no system or GPU memory left? How much do you need to free

                • Why exactly? Take glUniform calls, for example: having the driver create its own buffer to copy that data into and then transfer it to GPU memory is far less efficient than having the application map GPU memory directly.

                  Double-handling is bad. However, most modern programming languages are multi-threaded, and you can't just give application memory to hardware without the rigmarole of pinning memory (to stop a garbage collector from moving memory around during compaction - which is fine within applications but not when handing memory to hardware). And the performance from multi-threading is worth it - hence playing nicely with an application-level garbage collector should be considered in the design of any modern API.

                  The black box of the driver means that when you allocate memory on the GPU you don't know how much it is using; worse still, it can fail with a GL_OUT_OF_MEMORY error, which could mean a few different things. Do you have no system or GPU memory left? How much do you need to free up? Instead of that, you allocate the memory rather than telling the driver to do it; you already specify pretty much all the parameters when you do it in OpenGL anyway.

                I have never, ever

                  • Double-handling is bad. However, most modern programming languages are multi-threaded, and you can't just give application memory to hardware without the rigmarole of pinning memory (to stop a garbage collector from moving memory around during compaction - which is fine within applications but not when handing memory to hardware).

                    You don't give application memory to hardware; you allocate GPU memory for use in your application. Why do you think you will have problems with Vulkan in terms of multi-threading?
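                    As a rough C++ sketch of how explicit that is in Vulkan (assuming an existing VkDevice, the VkMemoryRequirements of some buffer, and a memoryTypeIndex already chosen from vkGetPhysicalDeviceMemoryProperties; error handling is reduced to the two cases that matter here):

                    #include <vulkan/vulkan.h>

                    // The application picks the heap/type and sees a typed error,
                    // instead of the driver allocating behind its back and reporting
                    // a catch-all GL_OUT_OF_MEMORY.
                    VkDeviceMemory allocateForBuffer(VkDevice device,
                                                     const VkMemoryRequirements& reqs,
                                                     uint32_t memoryTypeIndex)
                    {
                        VkMemoryAllocateInfo info{};
                        info.sType           = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
                        info.allocationSize  = reqs.size;
                        info.memoryTypeIndex = memoryTypeIndex;

                        VkDeviceMemory memory = VK_NULL_HANDLE;
                        VkResult r = vkAllocateMemory(device, &info, nullptr, &memory);
                        if (r == VK_ERROR_OUT_OF_DEVICE_MEMORY) {
                            // GPU heap exhausted: evict or free something we own and retry.
                        } else if (r == VK_ERROR_OUT_OF_HOST_MEMORY) {
                            // System memory exhausted: a different problem, handled differently.
                        }
                        return memory; // bind with vkBindBufferMemory; map with vkMapMemory if host-visible
                    }

                    The allocation either succeeds from a heap you chose or fails with a specific reason, and nothing ties it to a single "current context" thread the way OpenGL does.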

                  I have never, ever seen GL_OUT_OF_MEMORY, and I've been doing this a long time.

                    Oh well, it mustn't be a problem then; of course, that's only one example. I'm hardly going to enumerate them in the hope of finding one you've hit upon.

                  The current OpenGL drivers are very good at managing memory, far better than most application developers. Pushing driver-level memory management to application developers is a design failure, AFAICS, as I will comment on below.

                    But it isn't driver-level memory management; as I already said, it is about having the driver do less, and that does not necessarily mean the application developer has to do more. Some m

                  • Ok, I'll provide answers to your questions, if needed, but we have to get over a conceptual hurdle first. Ok?

                      Also, we are both on the same team. We both love 3D, we both like OpenGL and what it stands for (portable high-performance graphics) and want it to succeed. We both want great performance. And most of all, we want other people to also use these excellent technologies. Amirite?

                      Ok, so our (friendly) debate is simply about which has the greater priority: ease of use versus to-the-metal performance.

                      • Ok, so our (friendly) debate is simply about which has the greater priority: ease of use versus to-the-metal performance. Yes? We are simply debating which is more important.

                        Well, not really; I'm not interested in a conceptual debate. If you could have a high-performance, easy-to-use API then sure, that would be great, so what I am trying to understand is what your specific grievances are wrt APIs like Vulkan, Mantle, Metal and DX12. Naturally, somebody who has done years of OpenGL without understanding what exactly it is abstracting is going to find Vulkan difficult, but that is simply because your view of how 3D graphics pipelines work is limited to what you see in the API

                      • Ok, it is quite clear you don't know anything about Vulkan, hence your inability to give any specifics on what you think is wrong with it. Not at a conceptual level; when I say specific I mean specific about the API. You can't do that because you don't know anything about it; you're ranting about your own misinterpretation. So try again and be specific.

                      Sorry, the economics of Vulkan are not favorable for me - because the *design* of the API is horrible

                      Cite the specific design aspects. What parts of the API design are "horrible"? If you know what you're talking about this should be very easy.

                      Obviously what you

                      • Ok, I can see you take exception to it being harder than OpenGL and that your limited use-cases will not benefit from Vulkan, but that is ok. We won't all be limited just because you are. The key problems with OpenGL are resolved with Vulkan:

                        - The implementation requires that application-specific optimizations be in the driver; this is wrong. Application code belongs in the application, not in the driver.
                        - OpenGL is inherently serial, which limits the ability to exploit multi-core CPUs, whether this be unnecessary

                      • Well no, you misunderstand. In fact, quite clearly most of your assertions are wrong and you have obviously misinterpreted what I said; you also seem to disagree with people like Graham Sellers, Tim Foley and Cass Everitt, yet are not able to explain why.

                      The fact that you find memory barriers, a basic concept of asynchronous programming - that of course already exists in GLSL [opengl.org] - to be too complex demonstrates that you aren't very experienced despite your attempts at credential-dropping. However since you are

                      I got those points from Vulkan PowerPoint presentations.

                      Link me to the powerpoint presentations that have those points then. I doubt they exist, they are your points and they are derived from your own misinterpretation of the information.

                      I'm merely pointing out that Vulkan is a worse choice than OpenGL for desktop and workstation application developers.

                      If that is how you feel then use one of the various alternative solutions I already outlined for you.

                      I've been developing a lot of stuff in the last two decades. I've used a LOT of complex APIs, but my experience has shown me that a poorly designed API doesn't last long

                      But it isn't poorly designed, your specific criticism is that it has memory barriers - just like GLSL does - so when you say you have all this experience yet you don't understand the basics of asynchronous programming and don't e

                    • Here's a short summary from a Vulkan IMPLEMENTOR on where OpenGL is better than Vulkan, and vice-versa

                      The only criticism in that article is that Vulkan comes with some added complexity, which is precisely why I pointed out that for people like you there is the higher level API built on top of Vulkan. Why is that so difficult for you to understand?

                        Then if you read Graham Sellers' own PowerPoint AND have experience in developing LARGE applications you can see massive potential pitfalls in using Vulkan that you won't get with OpenGL

                      And also the massive benefits that you do get with Vulkan, so again, for people like you that can't manage the complexity and for which the benefits of Vulkan aren't an advantage there are less efficient higher level APIs for you to use.

                      Nope, OpenGL is more than sufficient

                      Not for everybody, hence the

                      • Yes, yes, more "fanboy" name-calling garbage to distract from your inability to express yourself objectively. Post your criticism in the Vulkan forums; then you will see how wrong you are. It will expose both your lack of knowledge and understanding.

                      We can do this in tiny steps, one at a time since large things are clearly overwhelming and enraging you:

                        OpenGL's model means application-specific optimization code must exist in the driver rather than the application; this is wrong. In Vulkan this application code

  • You do not really want to go back to vendor APIs like twenty years ago. It did not save 3DFX then, and it may not save AMD's GPU division now. You really need to get Vulkan working, and you need to get GNU/Linux driver performance and bug counts to a reasonable level.
    • by JustNiz ( 692889 )

      I agree with all your points, but it seems that AMD might be trying to do another end-run like they did with the AMD64 extensions, which are now the industry standard that even Intel copied and now uses.

    • by Kjella ( 173770 )

      You do not really want to go back to vendor APIs like twenty years ago. It did not save 3DFX then, and it may not save AMD's GPU division now. You really need to get Vulkan working, and you need to get GNU/Linux driver performance and bug counts to a reasonable level.

      It would be very different this time; today the shader (pixel, vertex, geometry, tessellation, compute) is the core of almost all GPU processing. Twenty years ago it was all fixed-function hardware; the first programmable shader support was in DirectX 8.0 back in 2000. Basically you had high-level calls and the hardware/driver could implement them however it wanted, so how one card did it would be totally different from another. Today they have to run the same shader code efficiently; sure, there's still som

  • Not much there, is there? A couple of tools only. Hardly worthy of a big announcement.
  • I think the GPU should be open, but you do have to realize that the Gosudarstvennoe Politicheskoe Upravlenie (GPU) hasn't been around since the Soviet Union ended decades ago (in fact, it merged with the NKVD back in the '30s.)

    https://www.marxists.org/archi... [marxists.org]

  • The last thing we want is for the PC to go back to the days when you had to have specific support for the graphics card in your games. PCs had to have the hardware abstracted to allow you to choose whichever card vendor/chip manufacturer you wanted without having to worry about whether it would work in your games. Consoles don't have this problem, as they are fixed hardware specs and hence you can code close to the metal. Does abstraction offer the best performance? NOPE, but it is a fuck load better than wo
    • No but you do. This has nothing to do with your detraction and more to do with making computing as a whole better.
    • by Anonymous Coward

      Did you live under a rock for a couple of years? Nvidia's GameWorks has basically already created vendor-locked games!

      This GPUOpen initiative has the potential to UNDO that damage, not cause more of it.

  • Linux is to the hardware market as BET is to the cable TV market. If you're having trouble selling your product, you can just deal with the free software crowd to up your sales for as long as you can stand it. Sure, it's not nearly as profitable, and yes, they will occasionally whine about their "rights" and inevitably accuse you of betraying some kind of confidence, but at least in the short-term they'll be so grateful for the recognition that they'll put up with your shit and do a lot of promotional work

    • by jedidiah ( 1196 )

      ...except for the inconvenient fact that the "highly proprietary" Nvidia kit sells very well for Linux and is very well supported. When Linux Journal was still printed on dead trees, they had a number of ads from vendors selling big fat expensive GPU compute boxes. There was usually an ad of this sort on the back cover.

      System 76 sells a box like this. Include all of the bells and whistles (including 4 GPUs) and it might be more expensive than your car.

      • yeah, i don't see how that contradicts my point. it just means that nvidia's product is so desirable (for whatever reasons) that linux users are willing to deal with the proprietary drivers. AMD, in second place, wants to up their numbers by "opening" their stuff and offloading the work onto the community. there's nothing wrong with that, of course, but that's what it seems like.

        and apropos of nothing, i don't own a car. i have an nvidia card in a linux box though. :)

        • The problem is that AMD has been talking about open GPU drivers for so many years, and yet nvidia is still the more reliable choice on Linux to avoid driver headaches.

  • I wonder if this is a sign that AMD have completely given up trying to compete against nVidia on outright top-end performance GPU-for-GPU (which they've never managed to really pull off) and are now refocussing their strategy to compete based on openness instead. That certainly would seem to be striking at the heart of nVidia's biggest perceived weakness, and would probably get them a lot of instant converts/new customers, at least in the Linux world, where nVidia have traditionally been more dominant at leas

    • by gweihir ( 88907 )

      From actual performance comparisons, it does not look like AMD has given up at all. I also would say that HBM is not a sign of having given up either.

  • Said the gal winning bronze.

  • Every 5 years, AMD makes a big announcement that they're going to open up their technology.
    None of these efforts were maintained; it felt more like "we're giving up on this old architecture, but here are the specs, in case you want to do our job in our place".

    I'll be more impressed when they can commit in the long term.

    • by Anonymous Coward on Tuesday January 26, 2016 @10:11PM (#51378909)

      Except they actually have opened up their technology, and very often they've opened up NEW technology: from AMD64, to the HyperTransport link, to their GPU specs (resulting in open source AMD drivers rivaling the proprietary drivers in speed, and being MILES ahead of Nvidia's open source drivers (which don't even have support for ANY 9xx series cards yet)).
      TressFX is going to release 3.0 soon, Mantle formed the basis for Vulkan, and now we get GPUOpen.

  • How will this work without the patents? The trouble with open sourcing it is that it's almost guaranteed that AMD has stepped on a few nVidia patents. Hell the patent mess is half the reason nVidia never went whole hog on open source.

    Also this does smack of desperation :(... Good luck AMD.
    • by gweihir ( 88907 )

      I expect they will know enough about what Nvidia infringes to give them a quiet but effective warning not to go down that road.
