Gadgetory


All Cool Mind-blowing Gadgets You Love in One Place

HBM vs. GDDR5: Differences Explained

2017-09-12
What on earth is the difference between HBM and GDDR5, and for that matter, between HBM2 and GDDR5X? Welcome to our crash course playlist. HBM, or High Bandwidth Memory, is in essence a form of stackable memory designed to reduce power consumption and ramp up data transfer rates. The first generation of HBM was implemented in the R9 Nano, Fury, and Fury X, while HBM2, which allows twice the number of transfers as the first generation (we'll get into more detail on that in a second), is found in the latest Vega cards. It's largely an AMD venture, but it should be noted that Nvidia's Tesla P100 card also boasts HBM2.

The primary physical difference between HBM and GDDR5 that you need to know is that the latter is scattered around the GPU on the PCB; most graphics cards from early Nvidia and AMD models up through the 1080 Ti are set up this way. The primary difference between GDDR5X and GDDR5 is the doubled bandwidth, made possible in part by a quad data rate interface rather than the double data rate one found in regular old GDDR5. Every time the data rate doubles, the demands on the signaling double with it, and achieving 11 gigabits per second without insane noise and jitter levels was no simple task for Micron. But as taxing as this was in the production phase, the PCB modifications required for GDDR5X integration are minimal: single chips use 190 pins versus 170 in the older generation, but pin positioning didn't change, so this alteration is nowhere near as costly as HBM integration, which we'll talk about in a second. In a nutshell, GDDR5X, utilizing quad data rate signaling, should really be called GQDR5, but you can see why they tacked on the X instead.

Now, where GDDR5 modules are physically scattered around the graphics processor, HBM dies sit extremely close to the GPU, actually on the package itself. DRAM dies are stacked atop a controller die to form an HBM module, which is connected to the GPU via an interposer. Between the two, 1,024 data links exist: assuming four stacked DRAM dies per module, each die contributes two 128-bit channels, so every HBM package boasts 8 channels, and 128 × 8 = 1,024 bits. If the graphics card in question has four HBM stacks, then the memory bus of the card is 1,024 × 4 = 4,096 bits wide. This gives cards with HBM insane memory bandwidth.

HBM2, which is found in Vega and the P100, essentially doubles the per-pin transfer rate of HBM1 and also allows for much more storage per stack. The R9 Fury X on HBM1, for example, was limited to 1 gigabyte per stack, which is why it was only a 4-gigabyte card; it had only four HBM stacks. With HBM2, 8 gigabytes is a much simpler feat, and in the case of Vega 56, only two stacks exist on the package.

So let's calculate AMD's Vega 56 memory bandwidth as an example, since the numbers can get kind of confusing. Total bandwidth, what you see on graphics card manufacturers' websites, is the product of the base DRAM clock speed, the memory bus width in bytes, and the number of interfaces. Jump back a minute or two in this video if you need to review the variables in this equation; the equation itself is very simple, but the terminology can be quite confusing. This step was simplified with great help from Steve over at Gamers Nexus, by the way; I've linked a few of their articles tied to this topic in the video description, so huge thanks to him and his team over there. So, as mentioned, multiply the values attached to Vega's HBM2: the clock speed at 800 MHz, the memory bus width at 2,048 bits (divided by 8 to get 256 bytes), and the number of interfaces at 2, for the two modules on the package.
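If you want to play with the arithmetic yourself, here's a minimal sketch in Python that simply mirrors the formula above; the function names and the script are my own illustration, not anything from AMD or Gamers Nexus, and the numbers are the ones quoted in this video.

```python
# Sketch of the bus-width and bandwidth math described above (illustrative only).

def hbm_bus_width(stacks, channels_per_stack=8, bits_per_channel=128):
    """Total memory bus width in bits: each HBM stack exposes 8 x 128-bit channels."""
    return stacks * channels_per_stack * bits_per_channel

def memory_bandwidth_gb_s(clock_mhz, bus_width_bits, interfaces):
    """Bandwidth in GB/s per the formula used in the video:
    base DRAM clock x memory bus width in bytes x number of interfaces."""
    bus_width_bytes = bus_width_bits / 8
    return clock_mhz * 1e6 * bus_width_bytes * interfaces / 1e9

# Fury X: four HBM1 stacks -> 4,096-bit bus
print(hbm_bus_width(stacks=4))                    # 4096

# Vega 56: two HBM2 stacks -> 2,048-bit bus
print(hbm_bus_width(stacks=2))                    # 2048

# Vega 56 at the stock 800 MHz memory clock
print(memory_bandwidth_gb_s(800, 2048, 2))        # ~409.6 GB/s

# ...and with the memory clock pushed to 945 MHz
print(memory_bandwidth_gb_s(945, 2048, 2))        # ~483.8 GB/s
```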
Memory bandwidth works out to roughly 410 gigabytes per second. As such, you can see how changing the memory clock speed has a direct impact on bandwidth: at 945 MHz, for example, bandwidth jumps up to roughly 484 gigabytes per second.

The primary benefits of HBM2 are rooted in power and frequency optimizations. Since memory pathways are extremely short, power consumption is lower than that of GDDR5, but that's not the full story. Because each bus is much wider, data can move at much lower frequencies while maintaining comparable or even higher memory bandwidth than GDDR5 counterparts. Lower frequency means lower input voltage, and there's your efficiency parameter. But it's no secret that HBM is a costly venture; printed circuit boards are much cheaper by comparison, which is why GDDR5 has been feasible for so long, and the arrival of GDDR6 leaves us questioning the viability of HBM in the first place. AMD has often been the guinea pig for new and exciting technologies; we'll see soon enough whether HBM2 actually pays off. For now, does the general consumer need it? No. Cards with HBM typically come in smaller form factors with marginally lower power consumption, though that wasn't the case with Vega. Both GDDR5 and HBM bandwidths are sufficient for gaming; you'd be hard-pressed to tell the difference in literally any scenario. So the real benefit of HBM lies in its small design, useful especially for VR applications.

This topic was brought to you by one of your fellow Twitter followers, so be sure to thank them for that. I ran a survey, and the remaining three videos in the crash course series (or maybe the Minute Science series, depending on how long the videos end up being) will be arriving soon on the channel. I'm still working on getting those scripts together, because a lot of this stuff I have to research myself, which is why at the end of every video I say "thanks for learning with us": I don't know most of it going in. I like to learn, and I'm glad that you all do too; that's why this channel exists. So if you liked this video, be sure to give it a thumbs up (thumbs down for the opposite), click the subscribe button if you haven't already, and stay tuned for more content like this. This is Science Studio. Thanks for learning with us.
We are a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for us to earn fees by linking to Amazon.com and affiliated sites.