Radeon 7 - A Day Later - What We Know
Or what we don't
I spend a good bit of time reading tech-related news around the web. A lot of that time is spent reading what people have to say: how they feel about a product or company, what they want to see come next, predictions of hardware coming in the future. One thing in particular really sticks out to me: how vocal the AMD community has become. AMD was in a rough spot for a long time, with lackluster reception and performance from its Bulldozer-family CPUs, and a GPU segment that just seemed to keep falling further and further behind. Pair that with Nvidia's very aggressive approach to marketing, and it seemed like AMD was fading away.
That was until Dr. Lisa Su took over. Since being appointed CEO of AMD in 2014, she has worked tirelessly with her team to restructure AMD and reposition it within the market, including a very different approach to marketing. Without going into an endless ramble, let's just say AMD is winning back mind share, and gaining a very vocal following along the way. How times have changed.
AMD's CES event was highly anticipated across social media and tech reporting sites. All eyes were on AMD: this was their first-ever CES keynote, and the lead-up to the event was strong. During their last big conference, AMD announced they were first to 7nm for a GPU, and rumors were running rampant about Ryzen 3000.
Many things were discussed during the keynote. From desktop to server and mobile, AMD gave us tastes of what is to come in 2019 and beyond. For this article, we're going to focus on the Radeon VII.
The Radeon VII is the newly announced high-end gaming GPU coming to store shelves February 7th, 2019. It's coming in swinging with 3,840 cores, an 1,800MHz peak clock speed, and a staggering 16GB of HBM2, at a reasonable $699 SEP (I have to note SEP here after the debacle that happened last time). There is more to it than simply specifications and price though, so let's dive into it.
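As a quick sanity check on those headline numbers, here's a back-of-the-envelope peak-compute estimate. This is my own arithmetic, not an AMD figure, and it assumes each stream processor retires one fused multiply-add (2 FLOPs) per clock, as is typical for GCN:

```python
# Rough theoretical peak FP32 throughput from the announced specs.
# Assumption: 2 FLOPs (one FMA) per stream processor per clock.
STREAM_PROCESSORS = 3840
PEAK_CLOCK_GHZ = 1.8

peak_tflops = STREAM_PROCESSORS * 2 * PEAK_CLOCK_GHZ / 1000
print(f"Theoretical peak FP32: {peak_tflops:.2f} TFLOPS")  # ~13.82 TFLOPS
```

That works out to just under 14 TFLOPS of peak single precision, which is in the same ballpark as the figures AMD has quoted for Vega 20 based parts.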
You'll probably notice in that chart that there is some incomplete data. That's because at this time there is still a lot of information waiting for AMD to push out. There has actually been very little in the way of details about the changes going from Vega 64 to Vega 20, outside of professional use cases (new AI inference operations, double precision, etc.). Even something as rudimentary as the ROP (Raster Operations Pipeline / Raster Operations Unit) count is left for us to assume or guess.
This takes us to a point of discussion: how many ROPs are there really on Vega? The question comes up because Anandtech published their announcement citing 128 ROPs. This caught me by surprise, because everything we know up to this point about GCN and Vega's configuration points to 16 raster engines (which each do 4 pixel ops per clock, making them effectively 64 ROPs). If we look at GCN's construction, you'll notice that the majority of the heavy lifting is done in what's called a 'Shader Engine'. Inside of these you have your compute units, your geometry pipeline, your DSBR, and finally your pixel engines (otherwise known as ROPs). Those ROPs are connected from the shader engine to the L2 cache, and from there to the memory subsystem. The ROPs are not directly connected to the memory controllers at all.

I say this because one publication claimed the ROPs were connected to the memory controllers, and since those doubled, so did the ROPs. That is not the case. In a tweet, Anandtech stated that during the MI60 reveal AMD said the back end had doubled, hence the numbers. For clarity, here are a few pictures, the last being the block diagram from AMD. Notice only 16 ROPs (4 pixels per clock, effectively 64). For now, I'm standing by waiting for a response from AMD on the matter, but they are busy at the moment. All this said, ROPs are not a big determining factor in performance as it is. The 64 on Vega 64 are enough to push 4K just fine, and having double that isn't going to increase your performance outside of maybe MSAA (which uses extra pixel samples).
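To put that "64 is enough for 4K" claim in perspective, here's a rough fillrate sketch (my own numbers, assuming the 64 effective ROPs and 1,800MHz peak clock discussed above):

```python
# Back-of-the-envelope pixel fillrate check.
# Assumption: 16 raster engines x 4 pixel ops per clock = 64 effective ROPs.
ROPS = 16 * 4
CLOCK_GHZ = 1.8

fillrate_gpix = ROPS * CLOCK_GHZ           # peak Gpixels/s
pixels_4k60 = 3840 * 2160 * 60 / 1e9       # raw 4K@60 output in Gpixels/s

print(f"Peak fillrate: {fillrate_gpix:.1f} Gpix/s")   # ~115.2
print(f"4K@60 output:  {pixels_4k60:.2f} Gpix/s")     # ~0.50
```

Real demand is many times the raw output once you account for overdraw, blending, and MSAA samples, but even then the headroom at 64 ROPs is enormous, which is why doubling them wouldn't buy much outside of sample-heavy workloads.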
(Update #1) - Final specs are under NDA at this time and won't be available until closer to launch
(Update #2) - AMD folded due to the amount of misinformation floating around the internet, and it is now confirmed: 64 ROPs
Design and Cooling
The design of the Radeon VII shows us AMD is learning. Instead of the usual blower-style fan that you know is going to be screaming under any moderate load while failing to keep GPU temperatures down to a decent level, we now have the makings of a proper cooler for a flagship card. I don't have the exact specifications of the cooler and shroud, but it appears to be the same size as your typical high-end GPU, while sporting a two-slot design. One other thing of note: it appears AMD has designed the Radeon VII with 'hot swappable' fans (don't actually attempt to hot swap the fans; please turn the card off first). I haven't been able to confirm this, but it looks as if they took a hint from XFX and their design. The exact performance of this cooler is not known, but it looks promising. It should help keep temperatures down, which should let the GPU hold more reasonable clocks under load.
Coming in at $699, just how much more performance does the Radeon VII bring to the table? Well, depending on your task, quite a lot. For most of us, and for the target market of this graphics card, gaming is the primary concern. Here, AMD claims the Radeon VII offers a 29% increase in performance over the Vega 64 at 4K max settings. For those who render at home or edit large 4K videos, AMD says the Radeon VII brings a 30% uplift over the Vega 64 (though I imagine this lead grows with larger data sets). For this number they cited DaVinci Resolve 15, Adobe Premiere, Luxmark, and Blender. The most notable score in the list is the Luxmark HDR Ball score, which came in at a whopping 50202. That's faster than a Titan RTX.
These are all first-party tests though. Not that I doubt AMD's numbers; it's just their job to sell a product, and they do that by presenting it in the best light. We will wait for our review before casting final judgement. In the meantime, given AMD's numbers, it looks like it's going to slot in slightly below the 1080 Ti/2080 while trading blows in a few titles. Should make for a fun review. Instead of copying over all of AMD's gaming results, we compiled them into one easy-to-read chart.
For Game Creators
Dr. Lisa Su brought on stage David Polfeldt, head of Massive Entertainment at Ubisoft. David spent some time discussing light details and thanking Dr. Su for having him out to discuss the Radeon VII. What I found interesting, though, were a couple of comments about using GPUOpen in The Division 2, and then 'using the full set of Radeon features'. GPUOpen is an open-source repository of AMD-developed APIs and tools that range from profiling utilities to advanced in-game effects. Which GPUOpen libraries they are using is anyone's guess, but it's at least the Radeon GPU Profiler, the effects libraries, or both. The last part I can only interpret as them integrating Radeon intrinsics / Rapid Packed Math / some secret tech into the game (he was abruptly cut off). Either way you look at it, AMD has a healthy relationship with Ubisoft, having partnered with them on multiple games in the past and at least one more to come, launching March 15, 2019. Everyone attending was given a free copy of The Division 2, and AMD announced a bundle package with Ryzen series CPUs.
So the event is over; what do we think? Well, we think we wait. It's impossible to make a call on a product without having it in hand. There are some interesting thoughts that come to mind with this announcement, though. These are my own opinions, or points of interest I look forward to learning more about going forward.
First, the final specifications. I know the die is finalized, but there is so much we don't know about it. As a tech junkie, this really irks me in some ways, but it also keeps the anticipation alive a bit. What is the mystery behind the ROP count? Was it simply something Anandtech misspoke on, or is there more to it? Is the L2 cache the same as Vega 64's, or did AMD increase it with the die savings from 7nm? Did the lower-level caches change? Enabling FP64 compute is going to consume more cache and register space; how did AMD compensate?
Who is this card really for? AMD is telling you gamers, and I suspect that's ultimately who they want to sell it to. I'm sure they would appreciate it more if pro users bought their Instinct cards (or WX series, if those are ever announced). But looking back at semi-pro support from AMD, they've done pretty well so far even on the gamer side. Sure, some applications get nerfed performance, but overall their gaming cards support pro workloads better than Nvidia's do. The more glaring element of the Radeon VII that makes me wonder is the giant VRAM pool at its disposal. Games at this time do not need 16GB. I will tell you, though, pro applications do. I do not believe it was AMD's design intention to deliver 16GB to gamers; it simply fell into place that way. They needed a product to sell in 2019, and they had Vega 20. So looking at this card, and given historical data on AMD's performance in pro applications, I am guessing this will be my go-to recommendation for the semi-pro workloads this card can service. Obviously, if the application at hand is held back by the driver, then it's time to look elsewhere.
Back on the topic of AMD listening to their fanbase, I think the Radeon VII is in part an answer. After the launch of Pascal, AMD fans were forced to wait a year before AMD had anything to answer the mid-high-end GTX 1080, and that launch was plagued with issues around pricing, driver support, and complaints about performance and efficiency against the 1080. Now we have AMD answering the 2080 launch in much better step. It hasn't been a year since Turing's launch, it hasn't even been six months, and here we have a contender launching at the same price. In timeliness, this is a much better response to Nvidia than the previous Vega. Kudos to AMD on that front.
Then there is the issue of pricing. Across the web there have been pitchforks and mobs forming over the rise in GPU prices with the Turing launch. Prices have gone up, as much as 43% for the same tier of performance. Here is no different: the Radeon VII offers 29% more performance while costing 40% more money. I cannot stress enough that these price increases are not just corporate greed. These cards simply cost more to make, and there are margins that have to be maintained to stay profitable (or in Nvidia's case, to stay in the upper tiers of margins). For Vega 20, the manufacturing cost of the die alone is roughly the cost of a 662mm² die on 14nm (given their price-per-transistor figures), while incorporating two additional, faster HBM2 stacks on a larger interposer. This is what AMD can sell it for and still make 'X' margin. The same is true (though not as extreme) with Turing: monstrously large dies attached to more expensive memory. Doing some rough math, you can see Nvidia even lost some of their margin on Turing, though I'm sure they are still well within profitable range. This is simply the cost of pushing the boundaries.
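To make that value math concrete, here's a rough perf-per-dollar comparison using AMD's claimed 29% uplift and the Vega 64's $499 launch price as the baseline (my own arithmetic, not an official comparison):

```python
# Rough perf-per-dollar comparison: Radeon VII vs Vega 64,
# using AMD's claimed +29% 4K performance and launch prices.
vega64_price, vii_price = 499, 699
perf_uplift = 1.29

price_ratio = vii_price / vega64_price        # ~1.40x the price
perf_per_dollar = perf_uplift / price_ratio   # ~0.92, i.e. ~8% worse value

print(f"Price ratio: {price_ratio:.2f}x")
print(f"Perf per dollar vs Vega 64: {perf_per_dollar:.2%}")
```

In other words, on AMD's own numbers the Radeon VII delivers roughly 8% less performance per dollar than the Vega 64 did at launch, which mirrors the same tier-for-tier regression people complained about with Turing.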
Where does this leave Navi? I think Navi is in a good position going forward. There is a significant gap in performance between the Radeon VII and the 590, and ultimately that 590 is going to have to be replaced. If Navi implements GDDR6 for its memory and uses a die in the 200mm² range, cost should drop significantly while offering decent performance on a newer architecture. Of course, details on Navi are almost nonexistent, so it's impossible to speculate on how well it will perform or its final cost, but I expect a return to the $300 range (again, speculation, not a leak or rumor).
And finally, how long will the Radeon VII last on the market? The Radeon VII is launching into a market where it's already technically behind the times. At launch, DXR is an active standard, one the Radeon VII cannot run. This is a stark contrast to the DX12/Vulkan era, where AMD was ahead of the curve. By the time the Radeon VII launches, we will have two games that support DXR/RTX (that's if Resident Evil 2 is still going to support it, which I haven't heard anything about since the original mention). Shortly after that we will be getting Metro Exodus, another title that will support DXR for global illumination. With more titles coming this year, momentum is picking up for DXR, or so it seems. Will AMD enable DXR on their current GPUs to stay in the game, even though there is no accelerating hardware (accelerators are not required for support)? I think it's a possibility, and we've already seen it run through the compatibility layer. Will AMD hurry to launch Navi, which may have DXR support? Or will DXR simply flop, and we'll move past it until future-gen cards can run it at appropriate frame rates?
Is AMD back in the game? I think this launch at least shows they aren't backing out, and they aren't willing to give up ground easily. While the jury is still out on how well the Radeon VII performs, it is a signal to their fanbase that they aren't being left empty-handed with only Nvidia to fulfill their hardware needs. At this time, I believe AMD is giving us a viable alternative to the Turing series if DXR and DLSS aren't up our alleys.
That's it for this one, guys. I swear, I tried to keep it short (it didn't work). I hope this was informative or insightful. If you have any questions, comments, or gripes, let me know in the comments and I'll get back to them as soon as I can. So long, and PC Better.