NVIDIA GeForce GTX 1070 vs GTX 1080 in SLI: a comparison of three generations of SLI bridges

Gaming systems with multiple GPUs are going through hard times. Every year, when a new generation of video cards appears, we run SLI and CrossFire tests, and we have to admit that modern deferred-rendering engines are not as well suited to Alternate Frame Rendering as their predecessors were in the early years of SLI and CrossFire. This is reflected in the policies of the GPU makers themselves. For example, NVIDIA officially dropped support for configurations with three or more graphics processors in the GeForce 10 generation and, as an alternative to two GPUs in SLI, offers single-GPU cards under the TITAN brand, which deliver more predictable performance.


On the other hand, both GPU manufacturers and game developers have taken steps to remove other obstacles that limit performance scaling in multi-GPU systems. AMD long ago abandoned the separate CrossFire interface, and NVIDIA introduced a redesigned SLI bus in its Pascal-based processors.

We have already tested SLI with two GeForce GTX 1080 video cards, but a GTX 1070 tandem is no less interesting, so we decided to carry out a more detailed study covering both the GTX 1080 and the GTX 1070 in dual-GPU configurations.

The second-fastest model in NVIDIA's gaming lineup is much cheaper ($379/$449 – the recommended prices for partner video cards and the Founders Edition, respectively) than the flagship ($599/$699). In addition, we expect the GTX 1070 to scale better in SLI, because a GTX 1080 tandem (and the gap between the 70 and 80 models is much larger in the Pascal line than in previous generations) is probably more limited by CPU speed. As we know, in single-threaded performance (and games have not yet learned to use more than four threads effectively), Intel's desktop CPUs remain unsurpassed, but they are progressing rather slowly. Incidentally, last year we already examined the question of CPU-bound games on Intel and AMD platforms.

In addition, NVIDIA's arsenal now includes the new TITAN X based on a Pascal-architecture processor, and a comparison with it is inevitable for SLI systems. In theory, it has enough computing power to compete at least with a GeForce GTX 1070 tandem. Finally, since NVIDIA has introduced a new SLI interface with increased bandwidth, it is important to check what real advantage the new bridges offer over the LED-equipped bridges and the plain old jumpers (rigid or flexible) that have existed since SLI's introduction and come bundled with motherboards.

# The updated SLI interface

While AMD has moved its multi-GPU configurations to synchronization over the PCI Express bus, NVIDIA still uses a separate interface for SLI. It has largely escaped public attention, however, that at sufficiently high screen resolutions NVIDIA GPUs also exchange part of the data over PCI Express. This suggests that SLI, in the form implemented in previous NVIDIA architectures, has already hit its bandwidth ceiling. As far as we know, that limit is 1 GB/s, which is not enough to exchange frames at 3840 × 2160 and 60 Hz.
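
The arithmetic behind that claim is easy to check. Here is a minimal sketch, assuming an uncompressed framebuffer at 4 bytes per pixel (the actual transfer format is not public):

```python
# Back-of-the-envelope check: traffic needed to pass finished frames
# between GPUs in AFR mode, assuming an uncompressed 32-bit framebuffer
# (4 bytes per pixel) -- our assumption, not an official NVIDIA figure.
width, height, bytes_per_pixel, fps = 3840, 2160, 4, 60

frame_mb = width * height * bytes_per_pixel / 1e6   # ~33.2 MB per frame
required_gb_s = frame_mb * fps / 1e3                # ~1.99 GB/s

print(f"{frame_mb:.1f} MB per frame, {required_gb_s:.2f} GB/s at {fps} Hz")
# ~2 GB/s -- roughly double the ~1 GB/s of the classic SLI link,
# which explains why part of the data spills over to PCI Express.
```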

But instead of switching to PCI Express entirely, NVIDIA redesigned the existing interface in Pascal. Traditionally, an NVIDIA graphics card carries two SLI connectors, which work simultaneously to link the GPU with its neighbors in three- or four-way configurations, while a two-way setup uses only one channel for data transfer. Putting both channels to work in a two-GPU tandem is the most obvious way to increase throughput.

The new bridge, released alongside the Pascal video cards, comes in several lengths, and in addition to the dual interface it has improved electrical characteristics for operating at a frequency raised from 400 to 650 MHz. Previously released bridges can also be overclocked automatically, provided they deliver a sufficiently clean signal. In particular, the rigid LED-lit bridges produced by some video card manufacturers qualify. The latter, however, lack the dual connector, so the new first-party bridge remains the only solution recommended for 5K-class resolutions and multi-monitor configurations.
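
NVIDIA does not publish exact throughput figures for the new bridge, but if bandwidth scales linearly with link count and clock frequency, a rough estimate follows; the ~1 GB/s baseline is the figure cited above, and the linear-scaling assumption is ours:

```python
# Rough scaling estimate for the new HB bridge vs. the classic one.
# Assumption (ours): throughput grows linearly with link count and clock.
old_links, old_mhz, old_gb_s = 1, 400, 1.0   # classic single-link bridge
new_links, new_mhz = 2, 650                  # Pascal HB bridge

scale = (new_links / old_links) * (new_mhz / old_mhz)   # 3.25x
print(f"~{scale:.2f}x -> ~{old_gb_s * scale:.2f} GB/s")
# ~3.25 GB/s -- comfortably above the ~2 GB/s needed for 4K at 60 Hz.
```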

By the way, if you are wondering whether a pair of old bridges can be used simultaneously instead of the new dual one: we checked, and at least with first-generation bridges it does not work.

# GIGABYTE GeForce GTX 1070 Xtreme Gaming: technical specifications, package contents, price

For SLI testing of the GeForce GTX 1070 we used GIGABYTE's card, one of the most advanced GTX 1070 variants on the market in terms of both GPU overclock and cooler design, which should ensure stability and silence at such high clock frequencies. Indeed, in its default mode the GTX 1070 Xtreme Gaming runs the GPU at 1670/1873 MHz (base and boost, respectively) – roughly the result we achieved by overclocking a reference GTX 1070 to its limit. The OC Mode setting, activated through the Xtreme Gaming Engine utility, raises the frequencies to 1695/1898 MHz. In the two modes, the memory operates at effective frequencies of 8168 and 8316 MHz.
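
These clocks map directly onto the peak FP32 figures in the table below: each shader ALU performs up to two floating-point operations (a fused multiply-add) per cycle. A quick sanity check:

```python
# Peak FP32 throughput = shader ALUs x 2 ops (FMA) x boost clock.
alus = 1920          # shader ALUs in GP104 as configured for the GTX 1070
boost_ghz = 1.898    # OC Mode boost clock of the Xtreme Gaming

gflops = alus * 2 * boost_ghz
print(f"{gflops:.0f} GFLOPS")   # ~7288 -- matches the spec table below
```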


| | | | | | |
|---|---|---|---|---|---|
| Manufacturer | NVIDIA | NVIDIA | GIGABYTE | NVIDIA | NVIDIA |
| Model | GeForce GTX TITAN X | GeForce GTX 1070 | GeForce GTX 1070 Xtreme Gaming (GV-N1070XTREME-8GD) | GeForce GTX 1080 | TITAN X |
| **Graphics processor** | | | | | |
| Name | GM200 | GP104 | GP104 | GP104 | GP102 |
| Microarchitecture | Maxwell | Pascal | Pascal | Pascal | Pascal |
| Process node | 28 nm | 16 nm FinFET | 16 nm FinFET | 16 nm FinFET | 16 nm FinFET |
| Transistors, millions | 8,000 | 7,200 | 7,200 | 7,200 | 12,000 |
| Clock, MHz (base/boost) | 1,000/1,089 | 1,506/1,683 | 1,695/1,898 | 1,607/1,733 | 1,417/1,531 |
| Shader ALUs | 3,072 | 1,920 | 1,920 | 2,560 | 3,584 |
| Texture units | 192 | 120 | 120 | 160 | 224 |
| ROPs | 96 | 64 | 64 | 64 | 96 |
| **Memory** | | | | | |
| Bus width, bits | 384 | 256 | 256 | 256 | 384 |
| Chip type | GDDR5 SDRAM | GDDR5 SDRAM | GDDR5 SDRAM | GDDR5X SDRAM | GDDR5X SDRAM |
| Clock, MHz (per-pin throughput, Mbps) | 1,753 (7,012) | 2,000 (8,000) | 2,079 (8,316) | 1,250 (10,000) | 1,250 (10,000) |
| Capacity, MB | 12,288 | 8,192 | 8,192 | 8,192 | 12,288 |
| I/O bus | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 |
| **Performance** | | | | | |
| Peak FP32 performance, GFLOPS (at maximum specified clock) | 6,691 | 6,463 | 7,288 | 8,873 | 10,974 |
| FP64/FP32 performance ratio | 1/32 | 1/32 | 1/32 | 1/32 | 1/32 |
| Memory bandwidth, GB/s | 336 | 256 | 266 | 320 | 480 |
| **Image output** | | | | | |
| Display interfaces | DL DVI-I, DisplayPort 1.2, HDMI 1.4a | DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b | DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b | DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b | DL DVI-D, DisplayPort 1.3/1.4, HDMI 2.0b |
| TDP, W | 250 | 150 | N/A | 180 | 250 |
| Suggested retail price at launch (US, before tax), $ | 999 | 379/449 | 460 (newegg.com, 10/18/2016) | 599/699 | 1,200 |
| Suggested retail price at launch (Russia), rubles | 74,900 | –/34,990 | 38,187 (market.yandex.ru, 10/18/2016) | –/54,990 | – |
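
The memory-bandwidth rows follow directly from bus width and per-pin data rate. A short check against the table:

```python
# Memory bandwidth, GB/s = bus width (bits) x per-pin rate (Mbps) / 8 / 1000
cards = {
    "GTX 1070":         (256, 8000),
    "GTX 1070 Xtreme":  (256, 8316),
    "GTX 1080":         (256, 10000),
    "TITAN X (Pascal)": (384, 10000),
}
for name, (bus_bits, mbps) in cards.items():
    print(f"{name}: {bus_bits * mbps / 8 / 1000:.0f} GB/s")
# 256, 266, 320, 480 -- matching the table above
```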

However, the GIGABYTE board costs noticeably more than even the GeForce GTX 1070 Founders Edition – $460 at newegg.com and an average of 38,187 rubles on Yandex.Market at the time of writing.

The bundle, traditionally sparse for modern gaming graphics cards, includes paper documentation and two souvenirs: a metal sticker and a wristband with the manufacturer's logo.

# GIGABYTE GeForce GTX 1070 Xtreme Gaming: design

Compared with other GTX 1070 implementations that do not aim for such high frequencies, this is a very large video card. The PCB extends beyond the mounting bracket, and the cooling system occupies two and a half expansion slots.

On the reverse side, the printed circuit board is protected by a metal plate, which also provides additional cooling for the voltage converter, whose components sit on both sides of the PCB.

The GPU cooler is a massive structure of two fin blocks pierced by four heat pipes. The pipes themselves are unusual: instead of a wick of thin wire to move the liquid, they use grooves cut into the pipe wall. The developers do not explain how this solution works, but it can be assumed that the inner cavity of the pipe is where the evaporated coolant condenses.

The GPU die transfers heat to the pipes through a copper insert, the DRAM chips contact the aluminum base of the cooler, and a separate plate is provided for the VRM transistors. The heatsink fins are curved in a special way to increase their area and optimize airflow and also, according to the manufacturer, to reduce noise.

Air is driven by three 100 mm impellers with complexly shaped blades, mounted on dual ball bearings. Under light load the fans stop and the GPU is cooled passively. The color of the LED indicator on the edge of the shroud varies with fan speed.

# GIGABYTE GeForce GTX 1070 Xtreme Gaming: board

The video card's power system comprises 13 phases, of which 10 serve the GPU, two the GDDR5 chips, and one more the PLL. The GPU voltage is managed by a uPI uP9511P PWM controller – the same one used on the Founders Edition board.

The board is powered through two eight-pin connectors equipped with indicators. The LEDs signal not only a missing connection but also unstable power delivery, which most likely means voltage straying beyond the set limits.

The memory chips, Samsung K4G80325FB-HC25, are rated for an effective frequency of 8 GHz.
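
As we read Samsung's part numbering, K4G80325FB denotes 8 Gbit chips with a 32-bit interface, and the HC25 suffix marks the 8 Gbps speed bin; that decoding is our assumption, but the totals it implies line up with the card's specifications:

```python
# Sanity check of the memory configuration implied by the chip marking.
chip_gbit, chip_bus_bits = 8, 32     # K4G80325FB: 8 Gbit, x32 (our reading)
card_bus_bits, pin_gbps = 256, 8     # GTX 1070: 256-bit bus, 8 Gbps per pin

chips = card_bus_bits // chip_bus_bits       # 8 chips on the board
total_gb = chips * chip_gbit / 8             # 8 GB in total
bandwidth = card_bus_bits * pin_gbps / 8     # 256 GB/s at stock clocks
print(chips, "chips,", total_gb, "GB,", bandwidth, "GB/s")
```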

The advent of virtual reality headsets, all of which connect to the PC via an HDMI cable, has forced many manufacturers to expand their set of corresponding output ports, but GIGABYTE solved the problem in an unusual way. Instead of swapping one of the three DisplayPort connectors for HDMI, the developers placed two additional HDMI ports on the end of the board facing the inside of the PC case. All seven ports cannot work at once, however. When the computer starts, the video card detects which connectors are in use and activates either the two external DisplayPorts or the two internal HDMI ports.
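
GIGABYTE does not document how this selection is made; purely as an illustration of the behavior described above, the logic could look something like the following sketch (all port names are made up):

```python
# Hypothetical sketch of the boot-time connector selection described
# above; GIGABYTE's actual firmware logic is not public.
def active_ports(connected):
    always_on = {"DVI-D", "HDMI-rear", "DP-1"}       # hypothetical names
    switchable_dp = {"DP-2", "DP-3"}                 # external DisplayPorts
    switchable_hdmi = {"HDMI-int-1", "HDMI-int-2"}   # internal HDMI pair

    # If anything is plugged into the internal HDMI pair, that pair wins;
    # otherwise the two external DisplayPorts remain active.
    if connected & switchable_hdmi:
        return always_on | switchable_hdmi
    return always_on | switchable_dp

print(active_ports({"DVI-D", "HDMI-int-1"}))
```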

Among the board's remaining features, we note a protective coating that shields the components from dust, moisture, and corrosion.

