Does the RTX 2080 Ti, 2080 & 2070 Need GDDR6 Memory?
2018-10-23
Welcome back to Hardware Unboxed. Now, normally on the channel we provide buying advice: we try to steer our viewers towards the best buy, whether that be the best $200 graphics card or the best B450 motherboard, and we try to ensure that you're getting the most bang for your buck. Today's video, though, isn't buying advice, it's more of a 'for science' type test, if you will.

Back when the GTX 1080 was first released, many viewers claimed that it was probably memory bandwidth starved, so I decided to investigate.
Despite stellar performance, the claim seemed reasonable enough. After all, the previous generation GTX 980 Ti sported a 384-bit wide memory bus, and when coupled with 7 Gbps GDDR5 memory it packed a peak bandwidth of 336 GB/s. The GTX 1080 was limited to a bandwidth of 320 GB/s despite using the fastest GDDR5X memory available at the time, and this allowed Nvidia to get away with using a narrower 256-bit wide memory bus.
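For those following along at home, here's a minimal sketch of that peak bandwidth arithmetic in Python. The figures are the ones just quoted; the 10 Gbps rate for the GTX 1080's GDDR5X is my assumption for the launch spec.

```python
# Peak memory bandwidth (GB/s) = per-pin transfer rate (Gbps) * bus width (bits) / 8
def peak_bandwidth_gbs(transfer_rate_gbps: float, bus_width_bits: int) -> float:
    return transfer_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gbs(7, 384))   # GTX 980 Ti: 7 Gbps GDDR5 on a 384-bit bus  -> 336.0 GB/s
print(peak_bandwidth_gbs(10, 256))  # GTX 1080: assumed 10 Gbps GDDR5X, 256-bit bus -> 320.0 GB/s
```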
Still, given the GTX 1080 sported vastly superior shader power to that of the GTX 980 Ti, it seemed reasonable to question how a reduction in memory bandwidth would impact performance, and this is why many were suggesting a memory bottleneck could be an issue. Anyway, long story short, it turns out Nvidia knew exactly what they were doing, and the GTX 1080 had enough memory bandwidth to achieve maximum performance. We worked this out simply by underclocking the memory while leaving the core frequency alone.
Still, I found that testing interesting, and although it wasn't a particularly popular video, I've decided to do it again with the new GeForce RTX series. Again, I'm not expecting this video to be all that popular, but I think our core audience will appreciate the testing. Generally, if I find this stuff interesting you guys do as well, which is nice, as we get to create the content that we enjoy.

Anyway, the focus of my testing has been on the new flagship model, the RTX 2080 Ti, using Gigabyte's Gaming OC model, but I've also done some testing with Gigabyte's RTX 2080 Gaming OC and the MSI RTX 2070 Armor OC. So having said all that, let's check out the results.
First up we have Assassin's Creed Origins. Stock out of the box, the 2080 Ti averaged 64 fps at 4K, and we see the same 64 fps when overclocking the memory to a transfer speed of 15.5 Gbps. Underclocked to 13 Gbps we do see a single frame dropped, then two more when going to 12.5 Gbps, and then a further two frames at 12 Gbps, and that's as far as we could underclock the GDDR6 memory.
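For reference, here's a minimal sketch of the peak bandwidth each of these memory configurations works out to on the 2080 Ti's 352-bit bus, using the standard transfer rate times bus width arithmetic; I'll walk through the full calculation at the end of the video.

```python
# Peak bandwidth for each memory speed tested on the RTX 2080 Ti (352-bit bus):
# bandwidth (GB/s) = transfer rate (Gbps per pin) * bus width (bits) / 8
BUS_WIDTH_BITS = 352

for rate_gbps in (12.0, 12.5, 13.0, 13.5, 14.0, 15.5):  # 14 Gbps is stock, 15.5 Gbps is the overclock
    print(f"{rate_gbps:>4} Gbps -> {rate_gbps * BUS_WIDTH_BITS / 8:.0f} GB/s")
# 12 Gbps -> 528 GB/s through to 14 Gbps -> 616 GB/s and 15.5 Gbps -> 682 GB/s
```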
We find slightly different scaling results with Battlefield 1, and we're also dealing with much higher frame rates here. Right away, dropping down to just 13.5 Gbps results in a single frame lost, and this data is based on a six-run average, so while 1 fps is still within the margin of error, we consistently saw a single frame dropped with the slower memory. Interestingly, as the memory speed is wound down we don't see the average frame rate impacted that heavily, but we do see a rather bigger hit to the frame time performance: when going from the stock 14 Gbps configuration down to 12 Gbps we see a 14% hit to the 1% low but just a 6% decrease for the average frame rate. Meanwhile, overclocking the memory didn't improve frame time performance, but it did increase the average frame rate by 2 fps.
Again we see a bigger hit to the 1% low result than the average frame rate when underclocking the memory, this time when testing with Far Cry 5. That said, the discrepancy between the average frame rate and 1% low isn't as extreme as we saw previously with Battlefield 1. Interestingly, overclocking the memory had a much bigger impact on the 1% low performance, boosting the minimum frame rate by 3 fps.
The memory scaling performance seen when testing with Strange Brigade is similar to what we've seen in the previous two tests. Performance isn't degraded too heavily from 14 Gbps down to 13 Gbps, but after that performance falls off a cliff, particularly in the frame time results. Overclocking the memory was surprisingly beneficial to both the frame time and average frame rate performance.
The last game we're going to look at is Shadow of the Tomb Raider. Here we see no loss in performance with the 13.5 Gbps memory and almost no change when dropping to 13 Gbps. Beyond that, however, performance does start to fall away quite substantially. Overclocking the memory to 15.5 Gbps also nets us a few extra frames.
Running Shadow of the Tomb Raider once again, but this time with the RTX 2080, the non-Ti model, we see a pretty minor but still consistent drop-off in performance with each underclock step. The frame time and average frame rate performance is impacted fairly evenly. That said, we only see an extra frame for an 11% increase in memory throughput, with a frame dropped for a 4% bandwidth decrease.
I then reran Shadow of the Tomb Raider with the RTX 2070, and we see no real change in performance by overclocking the memory. When we downclock to 13.5 Gbps we also see no change; below that, though, we do see a frame dropped for every 0.5 Gbps reduction in throughput.
For the last test conducted, I reinstalled the RTX 2080 Ti and decided to run Shadow of the Tomb Raider with some core overclocking, testing with the 13 and 15.5 Gbps memory configurations. To my surprise, despite the memory bandwidth bottleneck, overclocking the cores to maintain a clock speed of 2050 MHz saw the 2080 Ti jump ahead of the stock core configuration using 15.5 Gbps memory. Then, with the memory boosted to 15.5 Gbps and the cores overclocked, we saw a 6% increase in frame rate performance over not overclocking the memory at all. That said, with the cores overclocked we only see a 5% increase in performance for what is an almost 20% increase in memory throughput. Here we are again seeing a situation where the improvement for the frame time result is less significant, and this is primarily where we are seeing the slower memory limit performance.
Some interesting results there, and as we expected, the new GeForce RTX series very much needs the support of GDDR6 memory. The 14 Gbps spec proved optimal for the most part, though we did see a few instances where the factory overclocked Gaming OC card from Gigabyte did work a bit better with the 15.5 Gbps memory; we saw some nice boosts to the frame time performance in games such as Far Cry 5 and Strange Brigade. That said, other titles such as Assassin's Creed Origins and Battlefield 1 proved that the 14 Gbps spec is optimal. On average, we saw a 13% drop in frame time performance when going from the stock 14 Gbps memory down to 12 Gbps, and that is in line with the 14% decrease in throughput.
Given that GDDR5X memory was never specced higher than 11 Gbps, there's just no way Nvidia would have been able to use that memory with a 352-bit wide memory bus for the RTX 2080 Ti, or a 256-bit wide bus for the 2080 and 2070. In fact, had they used 11 Gbps GDDR5X memory, the 2080 Ti would be limited to a memory bandwidth of 484 GB/s, and that's 21% less bandwidth than it actually has. It would also be 8% less bandwidth than the 12 Gbps configuration that we tested in this video.

Using 14 Gbps GDDR6 memory, the RTX 2080 Ti has a peak memory bandwidth of 616 GB/s. This figure is worked out by multiplying the memory clock rate, which is 1750 MHz, by the 352-bit wide memory bus, so 1750 x 352, then dividing that number by 8, which converts the figure from bits to bytes, and then multiplying that figure by the data rate multiplier, which for GDDR6 is 8. That gives you the memory bandwidth in megabytes per second: 616,000 MB/s, or 616 GB/s.
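Here's that working as a minimal sketch, reproducing the 616 GB/s figure from those numbers along with the hypothetical 484 GB/s 11 Gbps GDDR5X scenario mentioned above; nothing in it goes beyond the specs just quoted.

```python
# 14 Gbps GDDR6 on the RTX 2080 Ti, worked out as described above:
# memory clock (MHz) * bus width (bits) / 8 (bits -> bytes) * data rate multiplier (8 for GDDR6)
gddr6_mb_s = 1750 * 352 / 8 * 8           # 616,000 MB/s, i.e. 616 GB/s

# Equivalently, per-pin transfer rate (Gbps) * bus width (bits) / 8 gives GB/s directly
gddr6_gb_s    = 14.0 * 352 / 8            # 616 GB/s
gddr5x_gb_s   = 11.0 * 352 / 8            # hypothetical 11 Gbps GDDR5X on the same bus -> 484 GB/s
gddr6_12_gb_s = 12.0 * 352 / 8            # the lowest configuration tested in this video -> 528 GB/s

print(round(1 - gddr5x_gb_s / gddr6_gb_s, 2))     # 0.21 -> the 21% deficit versus 14 Gbps GDDR6
print(round(1 - gddr5x_gb_s / gddr6_12_gb_s, 2))  # 0.08 -> the 8% deficit versus the 12 Gbps config
```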
Therefore, in order for Nvidia to achieve the same memory bandwidth using 11 Gbps GDDR5X memory, they would need to make the memory bus around 30% wider, and with the Turing die already measuring a massive 754 mm², they simply couldn't afford to waste the silicon real estate.
So that's that, really. Like I said, this was just a 'for science' type test, not really buying advice of any description; I just found it interesting and I hope you guys did as well. If you did enjoy the video, be sure to subscribe if you're not already, because we do this kind of content from time to time, and if you appreciate the work we do at Hardware Unboxed, then consider supporting us on Patreon. As I noted earlier, we really just do these types of tests for you guys; they don't generate a whole lot of attention for the channel, and I don't expect we'll get too many views on this one, it's just something our core audience seems to enjoy. If you'd like to request other tests like this, some sort of 'for science' type test, then consider joining us on Patreon, because you'll gain access to our private Discord chat, and that's generally where passionate viewers request these types of tests.

Anyway, thank you for watching. I'm your host Steve, and I'll see you again next time.