Intel shares 48 benchmarks: Arc A750 can compete with RTX 3060

Intel can trade blows with Nvidia’s mainstream GPU. Intel has released 48 benchmarks that show its forthcoming Arc A750 GPU should be able to trade blows with Nvidia’s RTX 3060 in contemporary games.

While Intel tempered its expectations for Arc GPUs last month, the company has now tested its A750 directly against the RTX 3060 across 42 DirectX 12 titles and six Vulkan games.

The results look favorable for what will likely be Intel’s mainstream GPU later this year. Intel has also tested the A750 against popular games like Fortnite, Control, and Call of Duty: Warzone, rather than the cherry-picked handful of benchmarks the company released last month.

“These are all titles we picked because they’re popular,” explains Intel fellow Tom Petersen in Intel’s benchmark video. “Either reviewers are using them, or they’re high on the Steam survey, or they’re new and exciting. So these are not cherry-picked titles.”

We’ll have to wait for independent benchmarks, but based on Intel’s testing, the A750 looks like it will compete comfortably with Nvidia’s RTX 3060. “You’ll see we’re kinda trading blows with the RTX 3060,” says Petersen. “Sometimes we win, sometimes we lose.” When it wins on titles running at 1080p, Intel’s performance is, on average, 3 to 5 percent better than Nvidia’s.

Over on the 1440p side, it looks like Intel wins more of the benchmarks, with an average win of about 5 percent across the 42 games. Intel has also tested six Vulkan titles, where it again seems to be trading blows with the RTX 3060.
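Intel quotes simple per-resolution averages. When aggregating frame-rate ratios across many titles yourself, the geometric mean is the standard choice, since it weighs a 5 percent win and a 5 percent loss symmetrically. A minimal sketch with illustrative numbers (Intel hasn’t published raw per-game figures, so the ratios below are hypothetical):

```python
from statistics import geometric_mean

# Hypothetical per-title relative performance (A750 fps / RTX 3060 fps).
# Values above 1.0 are wins for the A750, below 1.0 are losses.
ratios = [1.05, 0.98, 1.07, 1.02, 0.96, 1.04]

# Geometric mean is preferred over the arithmetic mean for ratios:
# a 1.05x win and a 0.95x loss should roughly cancel out.
avg = geometric_mean(ratios)
print(f"Average relative performance: {avg:.3f}")  # roughly 1.02, i.e. ~2% faster
```

With real per-game data, the same one-liner would reproduce (or challenge) the "3 to 5 percent" headline figure.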

“We’re mostly winning at 1080p, and mostly at 1440p with Vulkan,” claims Petersen. “On average, I’d say this is more like a 3 to 5 percent, maybe a little bit more towards the 5 percent win on Vulkan.”

Intel has focused only on modern APIs, not older DirectX 11 games. Early testing of Intel’s Arc A770 GPU — a step above the A750 in the Arc lineup — showed a significant performance gap between DirectX 11 and DirectX 12 games. Intel is still working on its Arc GPU drivers; it could be some time before the company can improve DirectX 11 performance.

Intel performed these latest benchmarks on identical systems running its Core i9 12900K CPU and 32GB of DDR5 memory. Intel used its engineering driver and Nvidia’s 516.59 drivers for comparisons. Arc GPUs will require 10th Gen or newer Intel processors, or AMD Ryzen 3000 and above CPUs, all with motherboards that support Resizable BAR (or, as AMD brands it, Smart Access Memory). Resizable BAR is an essential requirement for performance on Arc GPUs.

We’re still waiting for Intel to release its Arc A750 GPU later this year, but these latest benchmarks do show it could be ready to compete in the all-important mainstream market. Unfortunately, Intel hasn’t announced official specifications or pricing for its Arc A750 yet, but leaked slides put it between $299 and $399.

Intel will need to reach a price point that can compete with Nvidia’s $329 pricing for the RTX 3060, particularly now that GPU stock has dramatically improved and there is the option of AMD’s Radeon RX 6600 XT at $379.

All eyes will now be on Nvidia’s RTX 40-series GPU plans. Nvidia recently slashed the prices of its high-end RTX 30-series GPUs, and the discounts could indicate an RTX 40-series launch is due in the coming months. Rumors had suggested the RTX 4090 could launch last month, but July came and went without any new GPUs.

If Nvidia’s latest preliminary earnings are anything to go by (a $1 billion-plus drop in gaming revenue), it’s unlikely that the RTX 40-series will be priced low when they launch. On the other hand, Nvidia still likely has plenty of RTX 30-series cards after a drop in crypto demand, so Intel could be well placed to compete later this year if it can get its drivers and pricing in check.

When the GeForce RTX 3060 12GB can be found close to its official $329 asking price, it’s an excellent product for mainstream gamers. But in today’s market, even with its reduced mining performance, that’s doubtful, as its performance lands right between the RTX 2070 Super and 2060 Super.

Nvidia has added firmware and driver code to detect Ethereum mining, which should help a bit. Still, when people are willing to pay extreme scalper prices on eBay, even for cards like the RTX 2060 and GTX 1660 Super, everything in our GPU benchmarks hierarchy is sold out. As a result, Nvidia is working with partners to bring back previous-generation Turing and Pascal cards.

None of that makes this a bad GPU, but we expect the RTX 3060 to be just as hard to find as any other modern GPU. Eventually, the current Ethereum mining boom will fade away, but it could take a year or more before we see the end of chip shortages. That shouldn’t surprise anyone at this point, but if you’ve been hoping for a well-priced gaming PC upgrade, it’s a sad state of affairs.

Creating a mainstream card and embellishing it with all the bells and whistles costs money, and we think most gamers shopping for a good value are better served by modest designs with good performance. There will undoubtedly be extreme variants of the RTX 3060, and some of them will be priced higher than the budget RTX 3060 Ti options.

Let’s be clear: even the fastest RTX 3060 won’t beat a 3060 Ti in most situations, even with 12GB of VRAM. That’s because memory capacity isn’t a massive factor once you go above 8GB. Meanwhile, the extra memory bandwidth from its wider memory bus gives the 3060 Ti a significant advantage. The 3060 Ti also has 35% more GPU cores.
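The bandwidth gap is simple arithmetic: peak GDDR6 bandwidth is the bus width in bytes multiplied by the per-pin data rate. Using the cards’ published memory specs (192-bit at 15Gbps for the RTX 3060 12GB, 256-bit at 14Gbps for the 3060 Ti), a quick sketch:

```python
def gddr6_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bits / 8) * per-pin Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

rtx_3060 = gddr6_bandwidth_gbs(192, 15.0)     # 360.0 GB/s
rtx_3060_ti = gddr6_bandwidth_gbs(256, 14.0)  # 448.0 GB/s

# Despite the 3060's faster memory chips, the Ti's wider bus wins out.
print(f"3060: {rtx_3060} GB/s, 3060 Ti: {rtx_3060_ti} GB/s "
      f"(+{rtx_3060_ti / rtx_3060 - 1:.0%})")
```

The wider bus leaves the 3060 Ti with roughly a quarter more raw bandwidth, which matters far more in games than the 3060’s extra 4GB of capacity.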

The RTX 2060 and 2060 Super show how much things have changed for the -60 suffix cards between Turing and Ampere. Ampere gives you more shader cores, potentially much higher computational performance, and a slight improvement in memory bandwidth for the 12GB card. It also doubles the VRAM capacity (at least until the expected RTX 3060 6GB shows up, though perhaps Nvidia will leave that for the RTX 3050 line) and boasts improvements in the RT and Tensor cores and the memory subsystem, all leading to better performance. Power use remains similar, however, with a 170W TGP, a decent step down from the RTX 3060 Ti’s 220W TGP.