
NVIDIA GeForce GTX 670 2GB Review


SKYMTL, HardwareCanuck Review Editor
When the GTX 680 and its accompanying Kepler architecture first launched, nearly everyone remarked on how NVIDIA was able to differentiate their newest initiative from previous designs. Instead of pushing the thermal and power consumption envelope, Kepler excelled from a performance per watt standpoint. NVIDIA also shocked the market by undercutting their competition’s pricing structure while delivering a higher performing product. With the GTX 670 we’re about to see all of this happen again, but this time at a more palatable cost.

Right about now, many of you are probably wondering just how affordable the GTX 670 really is since you’ve been waiting to see the Kepler architecture hit lower price points. At $399 it certainly isn’t an inexpensive graphics card, but it is still much more affordable than its $499 big brother. This also puts the GTX 670 at the same price point as the GTX 580, a card that has now been officially discontinued.

As you may have expected, the GeForce GTX 670 is the spiritual successor to NVIDIA’s GTX 570, a wildly popular card that found a home in many gamers’ systems. Much like last time around, this card is designed around a higher end core that’s been cut down with an additional emphasis on affordability. However, a lower cost doesn’t necessarily mean cut-rate features since the GTX 670 includes technologies like Adaptive V-Sync, GPU Boost, TXAA and the ability to drive up to four monitors, all of which were introduced with the GTX 680.

NV-GTX-670-18.jpg

In order to create the GTX 670, NVIDIA took the GK104 core found within the GTX 680 and eliminated a single SMX module. Not only does this create a relatively high performance product but it also allows NVIDIA to use cores that didn’t make it past the GTX 680’s stringent binning process. While the core functionality of the new SMX / Kepler architecture remains the same, that single disabled SMX contains 192 CUDA cores, 16 texture units and an all-important PolyMorph 2.0 geometry processing engine. As such, the GTX 670 has 1344 processing cores alongside 112 TMUs, resulting in an approximate 13% reduction in raw graphics processing power when compared against the GTX 680.
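
For those keeping score, the arithmetic behind those figures is simple enough; here is a quick back-of-the-envelope sketch of our own, using the full GK104's eight SMX layout:

# Rough check of the cut-down GK104's resources (illustrative only)
full_smx, cut_smx = 8, 7                  # GK104 ships with 8 SMX units; the GTX 670 disables one
cores_per_smx, tmus_per_smx = 192, 16

gtx680_cores = full_smx * cores_per_smx   # 1536
gtx670_cores = cut_smx * cores_per_smx    # 1344
gtx670_tmus = cut_smx * tmus_per_smx      # 112

reduction = 1 - gtx670_cores / gtx680_cores
print(f"{gtx670_cores} cores, {gtx670_tmus} TMUs, {reduction:.1%} fewer shaders than a GTX 680")
# -> 1344 cores, 112 TMUs, 12.5% fewer shaders (the roughly 13% quoted above)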

Unlike NVIDIA’s previous cut-down parts such as the GTX 570 and GTX 470, the GK104’s memory, cache and ROP hierarchy has gone untouched in its transition to the GTX 670. It still features a 256-bit memory interface spread over a quartet of 64-bit controllers, 32 ROPs and 512KB of quick access L2 cache. This should allow for a reduction in potential memory and secondary processing bottlenecks.
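
Since the memory subsystem carries over intact, peak bandwidth is identical to the GTX 680's. A quick sketch of the math, assuming the 6Gbps effective data rate mentioned below:

# Peak theoretical memory bandwidth (simple arithmetic, not a measured figure)
bus_width_bits = 256            # four 64-bit controllers
data_rate_gbps = 6              # effective GDDR5 data rate per pin
bandwidth_gb_s = bus_width_bits / 8 * data_rate_gbps
print(f"Peak memory bandwidth: {bandwidth_gb_s:.1f} GB/s")   # 192.0 GB/s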

The core used in NVIDIA’s GTX 670 still has 3.54 billion transistors like its bigger brother but since some of those are cut off and inactive, it won’t consume nearly as much power as a fully enabled GK104. We should also mention that the last SMX is laser cut so unlocking won’t be possible, nor would you want to without a significant change to the reference GTX 670’s cooling solution. But we’ll get into that later.

NV-GTX-670-89.jpg

While the GTX 670’s core processing stages have gone under the knife, NVIDIA’s cutting has transposed itself into the clock speed realm as well. Both the Base and Boost clocks have been significantly curtailed, likely to further differentiate the GTX 670’s performance from that of higher end products. We did see our sample hitting the 1050MHz mark, which means there's TDP overhead to spare, but it couldn't reach the 1110MHz we saw on the GTX 680. The memory, however, hasn't been touched in the least and still retains the ultra high frequency of 6Gbps.

Fewer cores, lower clock speeds and, one would assume, reduced core voltages naturally translate into less power consumption and heat production. With a TDP of just 170W, the GTX 670 only requires a good quality 500W PSU and should need less power than an HD 7950. This bodes well for gamers looking for a quick and easy upgrade without having to purchase additional components.
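
To put that into perspective, here is a very rough sketch of the kind of budgeting we mean; every figure other than the card's 170W TDP is an assumption for illustration, not a measurement:

# Hypothetical PSU headroom estimate (illustrative assumptions only)
gpu_tdp     = 170    # GTX 670 board power, per NVIDIA's spec
cpu_tdp     = 95     # assumed mainstream quad core
rest_of_rig = 80     # assumed allowance for drives, fans, motherboard and memory
total_load = gpu_tdp + cpu_tdp + rest_of_rig
psu_rating = 500
print(f"Estimated load: {total_load}W, leaving roughly {psu_rating - total_load}W of headroom on a 500W unit")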

Speaking of AMD’s HD 7950, the latest round of price drops has it sitting at….you guessed it: $399. From a specification standpoint alone it should be evident that the GTX 670 will likely compete against the HD 7970 rather than Tahiti Pro, so AMD may be staring down the barrel of another price drop right as their last one takes effect. Ironically, the tables have been turned: not long ago it was NVIDIA frantically scrambling to cut costs while AMD led in the performance per dollar category. Now the folks at NVIDIA have an efficient second generation DX11 architecture which can easily beat the competition without the need for overly high prices.

Overclocking will also be a big part of the GTX 670's life, particularly when it comes to board partner versions. At launch, we'll see higher clock speeds on cards that go for as little as $10 more than stock examples. Others like EVGA's Superclocked edition demand a $20 premium but incorporate higher clock speeds and even changes to the reference heatsink design. In short, we'll likely see a broad array of GTX 670 cards, some of which may compete directly with a GTX 680. We'll have a review of some custom designs in the coming days so stay tuned.

NVIDIA may want to put the final nail in Tahiti’s coffin but they won’t get too far if their latest graphics card hits the same availability bottlenecks as the GTX 680. However, for the time being at least, it looks like the GTX 670 will have a hard launch with plenty of board partner cards in the channel. Whether or not this will be enough to satisfy demand is anyone’s guess.

NV-GTX-670-19.jpg
 

A Closer Look at the GTX 670


NV-GTX-670-8.jpg

The GTX 670’s exterior design holds the typical historical hallmarks of an NVIDIA graphics card. It has a predominantly black heatsink shroud with a few touches of green to ensure everyone knows it is part of the GeForce lineup. In terms of length, the GTX 670 is also quite short at 9 ½” but that’s only half the story with this card; we’ll leave the answer to that mystery for a bit later in this section. For now, we’ll give you a hint: it has something to do with the oddball location of the dual 6-pin power connectors.

NV-GTX-670-2.jpg
NV-GTX-670-1.jpg

NVIDIA has made a bit of a departure from their typical designs by giving the reference GTX 670 a somewhat aggressive styling with a corrugated heatsink cover but it still retains the typical blower-style fan setup. The card’s side uses the GeForce branding we’ve previously seen on the GTX 680 and GTX 690 but this time around, it isn’t lit.

We should also mention that even though this is NVIDIA’s reference exterior design, very few of their board partners will be using it. Expect the vast majority of launch day boards to come with custom designs.

NV-GTX-670-6.jpg
NV-GTX-670-9.jpg

The GTX 670’s connector layout doesn’t differ from the GTX 680’s by one iota. This card is still compatible with dual and tri SLI but won't run in quad mode. The backplate connectors consist of two dual link DVI outputs, one full sized HDMI 1.4 output and a single DisplayPort. As with the GTX 680, it is compatible with NVIDIA Surround and has the capability to output a fourth signal to an accessory display.


Shock was the only way to describe our emotions when we first saw the GTX 670’s underside. While the heatsink shroud extends its length to 9.5”, the actual PCB itself reaches a mere 6 ¾”, or roughly as long as most entry level cards. The extra plastic section added onto the card houses the fan as well as a small air baffle to direct airflow towards the internal heatsink. Supposedly, board partners will be releasing single slot versions soon after launch.

Some of you are probably wondering why NVIDIA didn’t just use a centrally-mounted axial fan like the one found on the GTX 560 Ti. Unfortunately, the answer isn’t a simple one. The GTX 670’s core may be efficient but it still produces up to 170W of heat and most users don’t want that being dumped back into their case. In addition, as a matter of perception we all subliminally associate rear exhaust-style setups with higher end products, so we’d hazard a guess that NVIDIA wanted that association to remain here.

NV-GTX-670-10.jpg
NV-GTX-670-11.jpg

Even with its heatsink shroud in place, the GTX 670 is the shortest 600-series card NVIDIA has released to date. However, once a card is shorter than a standard ATX motherboard, it really doesn’t make all that much of a difference for most users.
 

Under the GTX 670’s Heatsink


NV-GTX-670-12.jpg

Once the heatsink’s shroud is removed, we’re treated to a view of the PCB and two heatsinks. The shroud itself overhangs the PCB by a few inches, extending the GTX 670’s length but this allows for a blower-style setup that exhausts hot air outside of your case.


The reference GTX 670 is equipped with sixteen memory pin-outs (four per controller) of which eight are populated with Hynix GDDR5 256MB modules. This layout will allow board partners to equip their GTX 670s with up to 4GB of memory. Unlike some other cards, the memory modules are not covered by the heatsink but should nonetheless get a measure of cooling from the fan’s directional airflow.
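
The capacity math works out as follows (a quick sketch based on the pin-out and module counts above):

# GDDR5 capacity from module count (illustrative)
pinouts, populated = 16, 8
module_size_mb = 256
print(f"Reference card: {populated * module_size_mb // 1024}GB")    # 2GB
print(f"Fully populated: {pinouts * module_size_mb // 1024}GB")     # 4GB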

The primary heatsink on this card is a simple affair that uses a copper contact plate and aluminum fins to disperse the core’s heat. Unfortunately, as you can see, its slightly bent fins don’t point towards quality construction and could present an obstacle to air movement. However, this was ONLY apparent on our first card and subsequent reference samples DID NOT exhibit any fin bowing.

NVIDIA has equipped the GTX 670 with a 4+2 phase PWM which uses four phases for the GPU core and two for the GDDR5 memory. Its VRM modules are equipped with their own aluminum heat spreader.

NV-GTX-670-17.jpg

With its heatsink removed, the GTX 670 actually looks like a lower end, entry level HTPC card rather than a product that can easily beat the HD 7950. However, most board partners will be using their own upgraded PCBs which will typically be the same length as the one found on the GTX 680.
 

The SMX: Kepler’s Building Block


GTX-680-116.jpg

Much like Fermi, Kepler uses a modular architecture which is structured into dedicated, self contained compute / graphics units called Streaming Multiprocessors, or in this case Extreme Streaming Multiprocessors (SMX). While the basic design and implementation principles may be the same as the previous generation (other than doubling up the parallel threading capacity, that is), several changes have been built into this version that help it further maximize performance and consume less power than its predecessor.

Due to die space limitations on the 40nm manufacturing process, the Fermi architecture had to cope with fewer CUDA cores, but NVIDIA offset this shortcoming by running those cores at a higher speed than the rest of the processing stages. The result was a 1:2 ratio between the graphics clock and the shader clock, which led to excellent performance but unfortunately high power consumption numbers.

As we already mentioned, the inherent efficiencies of TSMC’s 28nm manufacturing process have allowed Kepler’s SMX to take a different path by offering six times the number of processors per multiprocessor but running their clocks at a 1:1 ratio with the rest of the core. Essentially we are left with core components that run at slower speeds, but in this case sheer volume makes up for and indeed surpasses any limitation. In theory this should lead to an increase in raw processing power for graphics intensive workloads and higher performance per watt even though the CUDA cores’ basic functionality and throughput hasn’t changed.
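
Put another way, here is a rough sketch of why the slower but much wider SMX still comes out ahead of a Fermi SM; this uses GF110's 32 cores per SM and its doubled shader clock as the reference point and obviously ignores everything else that affects real throughput:

# Per-multiprocessor shader throughput at the same graphics clock (illustrative)
fermi_cores, fermi_clock_ratio = 32, 2.0      # GF110 SM: 32 cores running at twice the graphics clock
kepler_cores, kepler_clock_ratio = 192, 1.0   # GK104 SMX: 192 cores running at the graphics clock

fermi_ops = fermi_cores * fermi_clock_ratio      # 64 "core-clock equivalents"
kepler_ops = kepler_cores * kepler_clock_ratio   # 192
print(f"SMX vs SM throughput ratio: {kepler_ops / fermi_ops:.0f}x per multiprocessor")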

Each SMX holds 192 CUDA cores along with 32 load / store units, which allow addresses to be calculated for 32 threads per clock. Alongside these core blocks sit the Warp Schedulers and their associated dispatch units, which issue groups of 32 threads (called warps) to the cores and can keep up to 64 warps in flight, while the primary register file currently sits at 65,536 x 32-bit. Most of these numbers have been doubled over the previous generation to avoid causing bottlenecks now that each SMX’s CUDA core count is so high.
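
To put the register file figure into more familiar units (simple conversion, nothing more):

# Register file size per SMX and across the GTX 670's seven SMXs
registers, reg_width_bits = 65536, 32
kb_per_smx = registers * reg_width_bits // 8 // 1024
total_mb = 7 * kb_per_smx / 1024
print(f"Register file per SMX: {kb_per_smx}KB")           # 256KB
print(f"Across the GTX 670's 7 SMXs: {total_mb:.2f}MB")   # 1.75MB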

GTX-680-117.jpg

NVIDIA’s ubiquitous PolyMorph geometry engine has gone through a redesign as well. Each engine still contains five stages from Vertex Fetch to the Stream Output which process data from the SMX they are associated with. The data then gets output to the Raster Engine within each Graphics Processing Cluster. In order to further speed up operations, data is dynamically load balanced and goes from one of eight PolyMorph engines to another through the on-die caching infrastructure for increased communication speed.

The main difference between the current and past generation PolyMorph engines boils down to data stream efficiency. The new “2.0” version in the Kepler core boasts primitive rates that are two times higher and, along with other improvements throughout the architecture, offers a fourfold increase in tessellation performance over the Fermi-based cores.

GTX-680-118.jpg

The SMX plays host to a dedicated caching network which runs parallel to the primary core stages in order to help store draw calls so they are not passed off through the card’s memory controllers, taking up valuable storage space. Not only does this help with geometry processing efficiency but GPGPU performance can also be drastically increased provided an API can take full advantage of the caching hierarchy.

As with Fermi, each one of Kepler’s SMX blocks has 64KB of shared, programmable on-chip memory that can be configured in one of three ways. It can either be laid out as 48 KB of shared memory with 16 KB of L1 cache, or as 16 KB of shared memory with 48 KB of L1 cache. Kepler adds another 32/32 mode which balances out the configuration for situations where the core may be processing graphics in parallel with compute tasks. This L1 cache is supposed to help with access to the on-die L2 cache as well as streamlining functions like stack operations and global loads / stores. However, the GK104 has fewer SMXs in total than a fully enabled Fermi core had SMs, which results in significantly less on-die memory. This could negatively impact compute performance in some instances.
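
A small sketch of what those trade-offs look like, both per SMX and in aggregate; the comparison assumes a fully enabled 16-SM Fermi GF110, each SM carrying the same 64KB pool:

# Shared memory / L1 split options and total on-die capacity (illustrative)
pool_kb = 64
splits = {"graphics-leaning": (48, 16), "compute-leaning": (16, 48), "balanced (new in Kepler)": (32, 32)}
for name, (shared, l1) in splits.items():
    print(f"{name}: {shared}KB shared memory / {l1}KB L1 cache")

gk104_total = 8 * pool_kb     # 512KB across GK104's 8 SMXs
gf110_total = 16 * pool_kb    # 1024KB across a fully enabled GF110's 16 SMs
print(f"GK104 total: {gk104_total}KB vs GF110 total: {gf110_total}KB")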

Even though there haven’t been any fundamental changes in the way textures are handled across the Kepler architecture, each SMX receives a huge influx of texture units: 16, up from Fermi’s four. Hopefully this will help in certain texture heavy rendering situations, particularly in DX11 environments.
 


GPU Boost: Dynamic Clocking Comes to Graphics Cards


Turbo Boost was first introduced into Intel’s CPUs years ago and, through a successive number of revisions, has become the de facto standard for situation dependent processing performance. In layman’s terms, Turbo Boost allows Intel’s processors to dynamically fluctuate their clock speeds based upon operational conditions, power targets and the demands of certain programs. For example, if a program only demanded a pair of a CPU’s six cores, the monitoring algorithms would increase the clock speeds of the two utilized cores while the others would sit idle. This sets the stage for NVIDIA’s new feature called GPU Boost.

GTX-680-110.gif

Before we go on, let’s explain one of the most important factors in determining how high a modern high end graphics card can clock: a power target. Typically, vendors like AMD and NVIDIA set this in such a way that ensures an ASIC doesn’t overshoot a given TDP value, putting undue stress upon its included components. Without this, board partners would have one hell of a time designing their cards so they wouldn’t overheat, pull too much power from the PWM or overload a PSU’s rails.

While every game typically strives to take advantage of as many GPU resources as possible, many don’t fully utilize every element of a given architecture. As such, some processing stages may sit idle while others are left to do the majority of rendering, post processing and other tasks. As in our Intel Turbo Boost example, this situation results in lower heat production, reduced power consumption and will ultimately cause the GPU core to fall well short of its predetermined power (or TDP) target.

In order to take advantage of this, NVIDIA has set their “base clock” (or reference clock) in line with a worst case scenario, which allows for a significant amount of overhead in typical games. This is where the so-called GPU Boost gets worked into the equation. Through a combination of software and hardware monitoring, GPU Boost fluctuates clock speeds in an effort to run as close as possible to the GTX 680’s TDP of 195W and the GTX 670's TDP of 170W. When gaming, this monitoring algorithm will typically result in a core speed that is higher than the stated base clock.
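
Conceptually, the behaviour can be pictured as a simple control loop that chases the power target. The sketch below is our own simplification; the step size and power readings are assumptions for illustration and not NVIDIA's actual implementation:

# Simplified picture of a power-target-driven boost loop (not NVIDIA's real algorithm)
BASE_CLOCK, MAX_OBSERVED = 915, 1050    # MHz, GTX 670 reference values from this review
POWER_TARGET = 170                      # W, the card's TDP
STEP = 13                               # MHz per adjustment, assumed for illustration

def adjust_clock(current_mhz, measured_power_w):
    """Nudge the core clock up when there is power headroom, down when over the target."""
    if measured_power_w < POWER_TARGET and current_mhz < MAX_OBSERVED:
        return current_mhz + STEP
    if measured_power_w > POWER_TARGET and current_mhz > BASE_CLOCK:
        return current_mhz - STEP
    return current_mhz

clock = BASE_CLOCK
for power in (130, 140, 150, 165, 175, 172, 160):   # hypothetical board power samples in watts
    clock = adjust_clock(clock, power)
    print(f"power={power}W -> core clock {clock}MHz")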

GTX-680-111.gif

Unfortunately, things do get a bit complicated since we are now talking about two clock speeds, one of which may vary from one application to another. The “Base Clock” is the minimum speed at which the core is guaranteed to run, regardless of the application being used. Granted, there may be some power viruses out there which will push the card beyond even these limits but the lion’s share of games and even most synthetic applications will have no issue running at or above the Base Clock.

The “Boost Clock” meanwhile is the typical speed at which the core will run in non-TDP limited applications. As you can imagine, depending on the core’s operational proximity to the power target, this value will fluctuate to higher and lower levels. However, NVIDIA likens the Boost Clock rating to a happy medium that nearly every game will achieve, at a minimum. For those of you wondering, both the Base Clock and the Boost Clock will be advertised on all Kepler-based cards; on the GTX 680 the values are 1006MHz and 1058MHz respectively, while the GTX 670 runs at 915MHz / 980MHz.
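
For reference, the advertised offsets work out to the following (straightforward arithmetic on the clocks quoted above):

# Boost Clock offset relative to the Base Clock
cards = {"GTX 680": (1006, 1058), "GTX 670": (915, 980)}
for name, (base, boost) in cards.items():
    print(f"{name}: Boost Clock sits {(boost / base - 1):.1%} above the Base Clock")
# GTX 680: ~5.2%, GTX 670: ~7.1%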

GPU Boost differs from AMD’s PowerTune in a number of ways. While AMD sets their base clock off of a typical in-game TDP scenario and throttles performance if an application exceeds these predetermined limits, NVIDIA has taken a more conservative approach to clock speeds. Their base clock is the minimum level at which the architecture will run under worst case conditions, and this allows for a clock speed increase in most games rather than throttling.

In order to better give you an idea of how GPU Boost operates, we logged clock speeds and Power Use in Dirt 3 and 3DMark11 using EVGA’s new Precision X utility.

GTX-680-121.gif

GTX-680-122.gif

Example w/GTX 680

In both of the situations above the clock speeds tend to fluctuate as the core moves closer to and further away from its maximum power limit. Since the reaction time of the GPU Boost algorithm is about 100ms, there are situations when clock speeds don’t line up with power use, causing a minor peak or valley but for the most part both run in perfect harmony. This is most evident in the 3DMark11 tests where we see the GK104’s ability to run slightly above the base clock in a GPU intensive test and then boost up to even higher levels in the Combined Test which doesn’t stress the architecture nearly as much.

GTX-680-93.jpg

Example w/GTX 680

According to NVIDIA, lower temperatures could promote higher GPU Boost clocks but even by increasing our sample’s fan speed to 100%, we couldn’t achieve higher Boost speeds. We’re guessing that high end forms of water cooling would be needed to give this feature more headroom and, according to some board partners, benefits could be seen once temperatures drop below 70 degrees Celsius. However, the default GPU Boost / Power offset NVIDIA built into their core seems to leave more than enough wiggle room to ensure that all reference-based cards should behave in the same manner.

There may be a bit of variance from the highest to the lowest leakage parts but the resulting dropoff in Boost clocks will never be noticeable in-game. This is why the Boost Clock is so conservative; it strives to stay as close as possible to a given point so power consumption shouldn’t fluctuate wildly from one application to another. But will this cause performance differences from one reference card to another? Absolutely not, unless the cards are running at abnormally hot or very cool temperatures.
 


Introducing TXAA


As with every other new graphics architecture that has launched in the last few years, NVIDIA will be introducing a few new features alongside Kepler. In order to improve image quality in a wide variety of scenarios, FXAA has been added as an option to NVIDIA’s control panel, making it applicable to every game. For those of you who haven’t used it, FXAA is a form of post processing anti aliasing which offers image quality that’s comparable to MSAA but at a fraction of the performance cost.

GTX-680-103.jpg

Another item that has been added is a new anti aliasing component called TXAA. TXAA uses hardware multisampling alongside a custom software-based resolve AA filter for a sense of smoothness and adds an optional temporal component for even higher in game image quality.

According to NVIDIA, their TXAA 1 mode offers comparable performance to 2xMSAA but results in much higher edge quality than 8xMSAA. TXAA 2 meanwhile steps things up to the next level by offering image enhancements that can’t be equaled by MSAA but, once again, the performance impact is negligible when compared against higher levels of multisampling.

GTX-680-113.gif

From the demos we were shown, TXAA has the ability to significantly decrease the aliasing in a scene. Indeed, it looks like the developer market is trying to move away from inefficient implementations of multi sample anti aliasing and has instead started gravitating towards higher performance alternatives like MLAA, FXAA and now possibly TXAA.

There is however one catch: TXAA cannot be enabled in NVIDIA’s control panel. Instead, game engines have to support it and developers will be implementing it within the in-game options. Presently there isn’t a single title on the market that supports TXAA but that should change over the next 12 months. Once available, it will be backwards compatible with the GTX 400 and GTX 500 series GPUs as well.
 


NVIDIA Surround Improvements


When it was first released, many thought of NVIDIA’s Surround multi monitor technology as nothing more than a way to copy AMD’s competing Eyefinity. Since then it has become much more, with NVIDIA rolling out near seamless support for stereoscopic 3D Vision Surround while gradually improving performance and compatibility with constant driver updates. The one thing they were missing was the ability to run more than two monitors off of a single core graphics card. Well, Kepler is about to change that.

GTX-680-104.jpg

NVIDIA has thoroughly revised their display engine so it has the ability to output signals to four monitors simultaneously. This means a trio of monitors can be used alongside a fourth “accessory” display, allowing you to game in Surround on the three primary screens while the fourth screen acts as a location for email, instant messaging and anything else you may want to keep track of. There are some Windows-related limitations when running 3D Vision since the fourth panel won’t be able to display a 2D image in parallel with stereoscopic content, but running your game in windowed mode should alleviate this issue.

NVIDIA has also added a simple yet handy feature to Surround: confining the Windows taskbar to the center panel. This means all of your core functionality can stay in one area without having to move the cursor across three monitors to interact with some items. Unfortunately, for the time being there isn’t any way to move the taskbar to a different panel, but NVIDIA may implement this as an option in a later release, and the ability to span the taskbar across all monitors is still available.

GTX-680-105.jpg

Bezel correction is an integral part of the Surround experience since it offers continuous, break-free images from one screen to the next. However, it does tend to hide portions of the image as it compensates for the bezel’s thickness, sometimes leading to in game menus getting cut off. The new Bezel Peeking feature allows gamers to temporarily disable bezel correction by pressing CTRL+ALT+B in order to see and interact with anything being hidden. The corrective measures can then be enabled again without exiting the application.

GTX-680-106.jpg

One major complaint from gamers that use Surround is the wide array of unused and sometimes unusable resolutions that Windows displays in games. NVIDIA has addressed this by adding a Custom Resolutions option to their control panel so the user can select only the resolutions they want to be displayed in games.
 

Test System & Setup / Benchmark Sequences

Main Test System

Processor: Intel i7 3930K @ 4.5GHz
Memory: Corsair Vengeance 32GB @ 1866MHz
Motherboard: ASUS P9X79 WS
Cooling: Corsair H80
SSD: 2x Corsair Performance Pro 256GB
Power Supply: Corsair AX1200
Monitor: Samsung 305T / 3x Acer 235Hz
OS: Windows 7 Ultimate N x64 SP1


Acoustical Test System

Processor: Intel 2600K @ stock
Memory: G.Skill Ripjaws 8GB 1600MHz
Motherboard: ASUS P8Z68-V PRO Gen3
Cooling: Thermalright TRUE Passive
SSD: Corsair Performance Pro 256GB
Power Supply: Seasonic X-Series Gold 800W


Drivers:
NVIDIA 301.34 Beta (GTX 670, GTX 680 & GTX 690)
AMD 12.4 WHQL + CAP 12.3
NVIDIA 295.73 WHQL (GTX 580)

Application Benchmark Information:
Note: In all instances, in-game sequences were used. The videos of the benchmark sequences have been uploaded below.


Battlefield 3

<object width="640" height="480"><param name="movie" value="http://www.youtube.com/v/i6ncTGlBoAw?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/i6ncTGlBoAw?version=3&hl=en_US" type="application/x-shockwave-flash" width="640" height="480" allowscriptaccess="always" allowfullscreen="true"></embed></object>​

Crysis 2

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/Bc7_IAKmAsQ?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/Bc7_IAKmAsQ?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Deus Ex Human Revolution

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/GixMX3nK9l8?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/GixMX3nK9l8?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Dirt 3

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/g5FaVwmLzUw?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/g5FaVwmLzUw?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Metro 2033

<object width="480" height="360"><param name="movie" value="http://www.youtube.com/v/8aZA5f8l-9E?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/8aZA5f8l-9E?version=3&hl=en_US" type="application/x-shockwave-flash" width="480" height="360" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Shogun 2: Total War

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/oDp29bJPCBQ?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/oDp29bJPCBQ?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


Witcher 2 v2.0

<object width="560" height="315"><param name="movie" value="http://www.youtube.com/v/tyCIuFtlSJU?version=3&hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/tyCIuFtlSJU?version=3&hl=en_US" type="application/x-shockwave-flash" width="560" height="315" allowscriptaccess="always" allowfullscreen="true"></embed></object>​


*Notes:

- All games tested have been patched to their latest version

- The OS has had all the latest hotfixes and updates installed

- All scores you see are the averages after 3 benchmark runs

- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings
 

3DMark 11 (DX11)


3DMark 11 is the latest in a long line of synthetic benchmarking programs from the Futuremark Corporation. This is their first foray into the DX11 rendering field and the result is a program that incorporates all of the latest techniques into a stunning display of imagery. Tessellation, depth of field, HDR, OpenCL physics and many others are on display here. In the benchmarks below we have included the results (at default settings) for both the Performance and Extreme presets.


Performance Preset

NV-GTX-670-30.jpg


Extreme Preset

NV-GTX-670-31.jpg
 


Batman: Arkham City (DX11)


Batman: Arkham City is a great looking game when all of its detail levels are maxed out but it also takes a fearsome toll on your system. In this benchmark we use a simple walkthrough that displays several in game elements. The built-in benchmark was avoided like the plague simply because the results it generates do not accurately reflect in-game performance.

1920 x 1200

NV-GTX-670-32.jpg


NV-GTX-670-33.jpg


2560 x 1600

NV-GTX-670-34.jpg


NV-GTX-670-35.jpg
 