NVidia 4K Display Limits: Card Bandwidth?

Luck struck me recently and I was able to get my hands on the brand new Asus ROG Swift PG27UQ display.  This 27″ LCD can do 4K at 144Hz and features NVidia’s G-Sync technology.  It’s not an inexpensive display at $2000, nor is it readily available yet.  I was fortunate that my local Microcenter had six of them and I was able to reserve one before they vanished.  While I was there, I grabbed two 27″ LG 4K/60Hz displays to go along with it.  These three panels replaced my previous three-panel 1440p setup, whose center display was the original PG278Q.

The details of my gaming rig are listed here, but the important parts are the motherboard and GPUs of course.  I’m running an Asus Rampage VI Extreme board, an Intel 7900X CPU, and two NVidia Titan X Pascal cards in SLI.  According to the Asus BIOS, both cards are running at full x16 speed, which will be important a bit later in this document.

The purpose of this document isn’t to describe gaming in 4K at high refresh rates; I made a quick YouTube video of my thoughts on that.  Rather, I’m running into an interesting problem with the available refresh rates when I have four 4K displays connected to my gaming rig.  Four 4K displays?  Yes, four of them.  Another entry on this blog details my streaming setup, which involves a second PC with an Elgato 4K60 Pro capture card in it.  When that capture card is connected via HDMI to my gaming rig’s NVidia GPU, it appears to Windows as a fourth 4K display capable of 60Hz.

Here’s the challenge:  When I have the fast Asus panel and the two LG 4K panels connected, all three displays run at their max refresh.  That means the Asus runs at 144Hz and the two LGs run at 60Hz.  But when I connect the capture card to the HDMI port, the fast Asus panel gets down-clocked to 120Hz in Windows (Windows 10 Pro 64-bit, for the record).  This happens before I’ve done any display mirroring or capturing, or anything like that.  All I have to do is connect the capture card and let Windows see it as a fourth 4K display.  The moment it does, the Asus is clocked at 120Hz.

Now, the drop of 24Hz isn’t that big a deal.  But it’s a curious thing, and that’s what I’d like to find out:  why?!

As a further data point: if I disconnect one of the LG 4K displays, the Asus panel gets clocked back up to its max of 144Hz.  So it doesn’t appear to have anything to do with the HDMI port, or with the fact that the fourth display is actually a capture card.  From my perspective, it appears to be related purely to the number of 4K displays connected.

Bandwidth?

Roughly speaking, the bandwidth of a 4K/60Hz display is:

3840 pixels/line * 2160 lines/screen * 60 screens/second * 24 bits/pixel = ~12Gbits/sec

The Asus panel running at full speed has a bandwidth of:

3840 pixels/line * 2160 lines/screen * 144 screens/second * 24 bits/pixel = ~29Gbits/sec

What about when the display gets down-clocked a bit?

3840 pixels/line * 2160 lines/screen * 120 screens/second * 24 bits/pixel = ~24Gbits/sec

So if we total all of that up:

3 × 4K/60 displays + 1 × 4K/144 display

3 * 12Gbits/sec + 29Gbits/sec = ~65Gbits/sec

Or when the display is actually down-clocked:

3 * 12Gbits/sec + 24Gbits/sec = ~60Gbits/sec

That’s an interesting number.  Sixty-four is a power of two, whereas 65 is just a bit higher than that.  Could there be some sort of hard-coded 64Gbit/sec bandwidth limit to the NVidia GP102 chip or the card built around it?  I honestly don’t know, and getting that number out of NVidia has been somewhat challenging for me.
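
To sanity-check the arithmetic without the per-display rounding, here’s a quick Python sketch of the same math.  It ignores blanking intervals and link overhead, so real cable rates would run somewhat higher (the helper name is just for illustration):

```python
# Rough uncompressed pixel bandwidth for a 4K display at a given refresh
# rate.  Blanking intervals and protocol overhead are ignored, so actual
# link bandwidth runs higher than these figures.

def display_gbps(hz, width=3840, height=2160, bits_per_pixel=24):
    """Approximate pixel bandwidth in Gbits/sec for one display."""
    return width * height * hz * bits_per_pixel / 1e9

lg_and_capture = 3 * display_gbps(60)   # two LG panels + capture card, ~35.8
asus_144 = display_gbps(144)            # the Asus at full speed, ~28.7
asus_120 = display_gbps(120)            # the Asus down-clocked, ~23.9

print(f"All four at max refresh: {lg_and_capture + asus_144:.1f} Gbits/sec")  # ~64.5
print(f"With the Asus at 120Hz:  {lg_and_capture + asus_120:.1f} Gbits/sec")  # ~59.7
```

Notably, without the rounding the full total lands at about 64.5Gbits/sec, just barely over 64, which makes the power-of-two hypothesis even more tempting.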

PCI-E 3.0 x16

It’s also an interesting number because it’s a bit higher than half of the available bandwidth for a PCI-E 3.0 x16 card.  A 3.0 slot has approximately 985MBytes/sec per lane available to it, which works out to about 7.8Gbits/sec per lane.  That means a x16 card should be able to operate at around 125Gbits/sec, and half of that is just over 60Gbits/sec, which would fall in line with the limit I’m hitting.  Remember that I previously confirmed in the BIOS that both cards are running at full x16 speed.  But maybe they’re telling the BIOS one thing and operating another way, actually running at x8?
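
The same kind of quick sketch covers the lane math; the 985MBytes/sec figure is the usual effective per-lane rate for PCI-E 3.0 after its 128b/130b encoding:

```python
# Approximate usable PCI-E 3.0 bandwidth for a given lane count, based on
# the commonly cited ~985MBytes/sec effective rate per lane.

PCIE3_MBYTES_PER_LANE = 985

def pcie3_gbps(lanes):
    """Approximate PCI-E 3.0 throughput in Gbits/sec."""
    return PCIE3_MBYTES_PER_LANE * lanes * 8 / 1000

print(f"x16: {pcie3_gbps(16):.0f} Gbits/sec")  # ~126
print(f"x8:  {pcie3_gbps(8):.0f} Gbits/sec")   # ~63
```

That x8 figure of roughly 63Gbits/sec sits suspiciously close to the 60-65Gbits/sec range I worked out above.  One way to check the live link state, rather than trusting the BIOS, is nvidia-smi -q, which reports the current and maximum PCIe link width for each GPU.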

It’s Not The …

When this panel was first introduced, the “experts” came out of the woodwork to tell us that it’s limited because it can’t handle 144Hz in true RGB.  That’s true, it can’t.  But that’s not the problem.  Not in the least.  I can set all of the displays, including the capture card, to YCbCr 4:2:2, and it has no effect whatsoever.  So if you’re reading this and thinking to yourself, “It’s the col0rz d000000d!”  It’s not.  At least not as far as I can determine using the tools available to me.  It feels like an overall bandwidth or throughput problem.
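
For what it’s worth, the color-depth theory is easy to quantify.  With 8-bit YCbCr 4:2:2, two pixels share one pair of chroma samples, averaging 16 bits per pixel instead of 24.  So if the limit were per-link color bandwidth, switching the Asus panel to 4:2:2 should have freed up roughly a third of its bandwidth, which makes the total lack of effect all the stranger:

```python
# Comparing 4K/144 pixel bandwidth at full RGB vs 8-bit YCbCr 4:2:2.
# With 4:2:2 subsampling, chroma is halved, averaging 16 bits/pixel.

def display_gbps(bits_per_pixel, width=3840, height=2160, hz=144):
    return width * height * hz * bits_per_pixel / 1e9

print(f"4K/144 RGB 8-bit:         {display_gbps(24):.1f} Gbits/sec")  # ~28.7
print(f"4K/144 YCbCr 4:2:2 8-bit: {display_gbps(16):.1f} Gbits/sec")  # ~19.1
```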

That said: I don’t know for sure!  I’m not pointing fingers at NVidia or anything like that.  I’m just curious and really want to know why this is!
