AMD Hardware

AMD Unveils New Family of GPUs: Radeon R5, R7, R9 With BF 4 Preorder Bundle 188

MojoKid writes "AMD has just announced a full suite of new GPUs based on its Graphics Core Next (GCN) architecture. The Radeon R5, R7, and R9 families are the new product lines aimed at mainstream, performance, and high-end gaming, respectively. Specs on the new cards are still limited, but we know that the highest-end R9 290X is a six-billion-transistor GPU with more than 300GB/s of memory bandwidth and prominent support for 4K gaming. The R5 series will start at $89 with 1GB of RAM. The R7 260X will hit $139 with 2GB of RAM; the R9 270X and 280X appear to replace the current Radeon 7950 and 7970 at $199 and $299 with 2GB and 3GB of RAM, respectively; and the R9 290X, with 4GB of RAM, has no announced price yet. AMD is also offering a limited preorder pack that bundles a Battlefield 4 license with the graphics card; the cards should go on sale in the very near future. Finally, AMD is also debuting a new positional and 3D spatial audio engine in conjunction with GenAudio dubbed 'AstoundSound,' but they're only making it available on the R9 290X, R9 280X, and the R9 270X."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Mantle API (Score:5, Interesting)

    by LordMyren ( 15499 ) on Wednesday September 25, 2013 @08:51PM (#44955079) Homepage

    Personally I would've gone for a mention of Mantle, the proprietary API they are introducing that sidesteps OpenGL and DirectX. I don't really know what it does yet; I haven't found good coverage. But DICE's Battlefield 4 is mentioned as using it, and the descriptions I've read say it enables a higher rate of draw calls.

    http://www.xbitlabs.com/news/graphics/display/20130924210043_AMD_Unveils_Next_Generation_Radeon_R9_290X_Graphics_Card.html [xbitlabs.com]

  • by Anonymous Coward on Wednesday September 25, 2013 @09:05PM (#44955177)

    AMD has totally ruined the future of Nvidia and Intel in the AAA/console-port gaming space. Working with partners at EA/DICE, AMD has created a 'to-the-metal' API for GPU programming on the PS4, Xbox One, and any PC (Linux or Windows) with AMD's GCN technology. GCN is the AMD architecture in 7000 series cards, 8000 series, and the coming new discrete GPU cards later this year and onwards into the foreseeable future. It is also the GPU design in all future AMD CPUs with integrated graphics.

    GCN is *not* the architecture of any Intel or Nvidia products, now or in the future. Nvidia and Intel will be stuck with only OpenGL or DirectX versions of games, and these versions will be much slower and feature-incomplete compared to 'Mantle' versions ported from the consoles.

    OpenGL and DirectX are OBSOLETE methods of controlling rendering for future AAA games. Both of these APIs/drivers have massive state overheads, and can never be made efficient for the mixed rendering/GPGPU methods required for the new games engines of late 2013 and later.

    While some nerds with a better memory than brainpower will dribble about 'Glide' (the proprietary API from now defunct 3DFX), the correct comparison is x86 vs 68000 (the Motorola CPU design). GCN is actually an ISA (instruction set architecture) like the x86 ISA. Intel and Nvidia are like TI and Motorola at the time of emerging competing 16-bit CPU designs that finally led to the dominance of x64. Nvidia is Motorola. Intel is TI. TI's 16-bit CPU designs were no-hopers. Motorola was widely seen as superior to Intel at the time. When Intel was chosen for the first PC, it was game-over for Motorola.

    Using OpenGL or DirectX to 'program' a modern GPU is like using Fortran to program the CPU. Using 'Mantle' on the other hand is like using 'C'. However, because 'Mantle' closely connects to the GCN 'metal', it is almost impossible to envisage a version of Mantle for competing GPU architectures.

    Of course, ATI customers with 6000 series cards or earlier (or Zacate, Llano, or Richland APUs) are as out-of-luck as Intel and Nvidia GPU users. AMD is only supporting GCN, because older GPU designs from AMD use a different GPU ISA.

    With the rise of Mantle, many console games developers are going to desire that the PC market rapidly changes to AMD only, so the ported games need have only one version- the good one. Other developers, whose games do NOT need strong GPU performance, will wish to use only OpenGL on the PC, for maximum compatibility with games in the ARM space (where OpenGL ES rules).

    Any PC gamer interested in high-performance would be INSANE to buy any Nvidia product from now on. 99%+ of all AAA games will originate on the new consoles released later this year- which are 100% AMD. Most casual gamers might as well choose AMD for maximum future compatibility. Intel was never really in the game. Nvidia, on the other hand, will be cursing themselves for ever starting this war (Nvidia previously paid AAA games developers to cripple AMD performance, and attempted to leverage the proprietary PhysX physics engine).

  • by Guspaz ( 556486 ) on Wednesday September 25, 2013 @09:59PM (#44955485)

    So, you're convinced that the slight improvement in performance brought about by a reduction of software overhead is going to completely cripple nVidia? Yeah, sure.

    Even if Mantle does produce faster performance (and there's no reason to doubt that it will), the advantages will be relatively small, and about all they might cause nVidia to do is adjust their pricing slightly. There won't be anything you'll be able to accomplish with Mantle that wasn't possible without it; such is the nature of fully programmable graphics processors.

    Game publishers, for their part, will hesitate to ignore the 53% of nVidia owners in favour of the 34% AMD owners. It's highly unlikely that this will cause a repeat of the situation caused by the Radeon 9700, which scooped a big win by essentially having DirectX 9 standardized around it. In that case, ATI managed to capture significant marketshare, but more because nVidia had no competitive products on the market for a year or two after. This time around, both companies have very comparable performance, and minor differences in performance usually just result in price adjustments.

  • Ignore numbers (Score:4, Interesting)

    by rsilvergun ( 571051 ) on Wednesday September 25, 2013 @10:10PM (#44955543)
    just look at the width of the memory bus. Video card manufacturers use the memory-bus width to limit card performance so that their low end doesn't cannibalize their mid range and high end (à la 3DFX).

    128-bit is low end.

    192-bit is your mid range card.

    256-bit is your high end.

    You don't need to pay attention to anything else until 256 bit. After that just sort by price on newegg and check the release date. Newer is better :)
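    The parent's heuristic is easy to script. A minimal sketch; the card names, prices, and dates below are made up for illustration, and "bus_bits" is just a hypothetical field name, not any real store's API:

    ```python
    from datetime import date

    # Hypothetical card listings; every value here is illustrative only.
    cards = [
        {"name": "Card A", "bus_bits": 128, "price": 89,  "released": date(2013, 10, 1)},
        {"name": "Card B", "bus_bits": 256, "price": 299, "released": date(2013, 10, 1)},
        {"name": "Card C", "bus_bits": 256, "price": 299, "released": date(2012, 3, 1)},
        {"name": "Card D", "bus_bits": 384, "price": 449, "released": date(2013, 10, 1)},
    ]

    # Keep only 256-bit-or-wider cards, then sort cheapest first,
    # breaking price ties by newest release date ("newer is better").
    high_end = [c for c in cards if c["bus_bits"] >= 256]
    high_end.sort(key=lambda c: (c["price"], -c["released"].toordinal()))

    for c in high_end:
        print(c["name"], c["price"], c["released"])
    ```

    Whether bus width alone is a good proxy is disputed further down the thread, but this is the rule as stated.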
  • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Wednesday September 25, 2013 @10:25PM (#44955667) Homepage

    The idea is that operating systems introduce a huge amount of overhead in the name of security. Being general purpose, they view their primary role as protecting all the other apps from your unstable app. And, let's face it, even AAA games these days are plagued with issues -- I'm really not sure I want games to have low-level access to my system. Going back to the days of Windows 98's frequent bluescreens isn't on my must-have list of features.

    John Carmack has been complaining about this for years, saying this puts PCs at such a tremendous disadvantage that consoles were able to run circles around PCs when it came to raw draw calls until eventually they simply brute-forced their way past the problem.

    Graphics APIs have largely gone a route that encourages keeping data and processing out of the OS. That's definitely the right call, but there are always things you'll need to touch the CPU for. I'm curious exactly how much of a benefit we'll see in modern games.
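    The draw-call argument can be sketched with a toy cost model. The overhead figures below are invented for illustration, not measurements of any real driver or API:

    ```python
    # Toy model: frame time = fixed GPU work + per-draw-call CPU/driver overhead.
    # All numbers are invented for illustration.

    def frame_time_ms(draw_calls, overhead_us_per_call, gpu_work_ms=8.0):
        """Total frame time in ms for a given call count and per-call overhead."""
        return gpu_work_ms + draw_calls * overhead_us_per_call / 1000.0

    calls = 10_000
    # Hypothetical "thick" driver path vs. a thinner "to the metal" path.
    high_overhead = frame_time_ms(calls, overhead_us_per_call=2.0)
    low_overhead  = frame_time_ms(calls, overhead_us_per_call=0.2)

    print(f"{high_overhead:.1f} ms vs {low_overhead:.1f} ms")  # 28.0 ms vs 10.0 ms
    ```

    In this model, cutting per-call overhead only matters once call counts get large; a scene with a few hundred draw calls sees almost no benefit.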

  • Re:Ignore numbers (Score:3, Interesting)

    by Anonymous Coward on Wednesday September 25, 2013 @10:58PM (#44955857)

    False. Perhaps this was true in the past, but currently memory bandwidth is tailored to the GPU's processing power; that is, it's the bandwidth the core needs, usually defined by the most bandwidth-hungry scenario.

    Bandwidth is not determined by bus width alone, but by bus width and clock speed: a 128-bit interface at 2 GHz is just as good as a 256-bit interface at 1 GHz. Usually the wider bus is less power-hungry at the same bandwidth, and is therefore preferred.

    Also, bus widths of 384 and 512 bits exist.
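    The width-times-clock relationship is simple arithmetic. A quick sanity check; the 512-bit/5 GHz configuration at the end is an assumption for illustration, not a confirmed 290X spec:

    ```python
    def bandwidth_gb_s(bus_bits, effective_ghz):
        # Peak bandwidth = (bus width in bytes) x (effective transfers per second).
        # GHz = 1e9 transfers/s, so bytes * GHz gives GB/s directly.
        return (bus_bits / 8) * effective_ghz

    # The parent's equivalence: both configurations peak at the same bandwidth.
    print(bandwidth_gb_s(128, 2.0))  # 32.0 GB/s
    print(bandwidth_gb_s(256, 1.0))  # 32.0 GB/s

    # Hypothetically, a 512-bit bus at a 5 GHz effective GDDR5 rate:
    print(bandwidth_gb_s(512, 5.0))  # 320.0 GB/s, in line with the summary's ">300GB/s"
    ```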

  • by epyT-R ( 613989 ) on Wednesday September 25, 2013 @11:23PM (#44956021)

    1. Today's consoles also run protected-mode (or architecture-specific equivalent) operating systems, so the userland-to-kernel-to-hardware latencies are still present.

    2. You're complaining about games? Today's operating systems are hardly any better off. There is no way the vendors can vouch for the security of 10 GB worth of libraries and executables in Windows 7 or OS X. The same is true for OSS. Best practice is to just assume every application and system you're using is compromised or compromisable and mitigate accordingly.

    3. IIRC that particular Carmack commentary was done to hype up the new-gen systems. It's largely bogus. I'm sure the latencies between the Intel on-die HD 5000 GPU and CPU are lower, but that doesn't mean it's going to perform better overall. Same thing goes for the AMD Fusion chips used in the new consoles. They're powerful for their size and power draw, but they will not outperform current gaming PC rigs.

  • by blahplusplus ( 757119 ) on Thursday September 26, 2013 @02:40AM (#44956965)

    "The difference in performance will be MASSIVE when the rendering features made viable by Mantle are enabled."

    I'm sorry but you are full of shit. Memory bandwidth has been the KEY factor in framerates, not draw calls. That draw-call bs is propaganda. Transistors > software (provided the software developer isn't braindead). Always. The same way CISC was 'slower' than RISC, and Itanium was supposed to be the death of x86, but we still have x86. They found ways around it and made it faster. Same deal.
