Well we have to see.
Yes, but the way it's looking, I'm not even sure it can play LoL well at the native resolution (3000x2000). If the GPU wasn't designed for it, then why even mention it? He could have left it at "this is for the architect who is building a building right now".
I'm not convinced it's going to be in line with those benchmarks, as I have my doubts that this GPU is going to be the equivalent of a 940M.
And even if it were... those benchmarks are at 1366x768, and it is still struggling to hit 30fps. Why would someone want to play a game in 2015 at that resolution on a machine you spent $2,000 on?
It's impossible to know until there are benchmarks on the actual system, but based on the GPU-Z readout it looks very close to a 940M, slightly downclocked in core speed but with much faster memory... It might actually be faster than a 940M, in fact, due to the GDDR5 memory vs GDDR3.
Heck, 4K resolution requires about $700 of desktop gaming video card hardware (more like $1500-$2000+ if you really want to perform well) to run moderately, and it's about 8.3 million pixels vs the SB's 6 million pixels... So you really should set your expectations to what gaming hardware can achieve and what can actually fit into a very small form factor with limited power.
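For anyone who wants to check the pixel math, here's a quick sketch (using 3840x2160 for "4K" UHD and the Surface Book's 3000x2000 panel):

```cpp
#include <cstdio>

int main() {
    // Pixel counts for the two resolutions being compared.
    const long long uhd   = 3840LL * 2160; // "4K" UHD:     ~8.3 million pixels
    const long long sbook = 3000LL * 2000; // Surface Book:  6.0 million pixels

    std::printf("4K UHD:       %lld pixels\n", uhd);
    std::printf("Surface Book: %lld pixels\n", sbook);
    std::printf("4K is %.0f%% more pixels to push per frame\n",
                100.0 * (double(uhd) / sbook - 1.0)); // ~38% more
    return 0;
}
```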
Microsoft's secret sauce: DirectX 12 “Multiadapter” will use the integrated GPU and discrete GPU together.
Microsoft's GPU Secret Sauce: DirectX 12 Multiadapter | Microsoft Surface Forums
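Multiadapter itself is a D3D12 feature and the real plumbing (cross-adapter heaps, splitting work between queues) is way beyond a forum post, but as a taste of the first step, here's a minimal sketch that just enumerates the adapters DXGI exposes. On a Surface Book you'd expect both the Intel iGPU and the NVIDIA dGPU to show up. Assumes a Windows 10 SDK build environment:

```cpp
// Minimal sketch: list the GPUs a DX12 multiadapter setup could split work
// across. This only enumerates adapters; it does no actual rendering.
// Build (MSVC): cl enum_adapters.cpp dxgi.lib
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        // Expect the Intel iGPU, the NVIDIA dGPU, and the software
        // "Basic Render Driver" on a Surface Book.
        wprintf(L"Adapter %u: %s (%zu MB dedicated VRAM)\n",
                i, desc.Description, desc.DedicatedVideoMemory >> 20);
    }
    return 0;
}
```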
Right on with everything. One small point though: I believe the specs on a 940M indicate even slower DDR3, not GDDR3, unless they made a typo. Either way, as soon as you say "customized" all bets are off until you know how and what. He said it was optimized for running professional applications, so some benchmarks might reveal that and others may not. The apparent removal of PhysX may not have been as important to the targeted apps, or may not have bought much over what the i7 could do.
I think it will do well at what it was targeted at, which was definitely not games, but it will run LoL (I'm guessing that must be a gamers' inside joke). Like, yeah, he's a good baseball player, he can play in the girls' softball league. LoL
Been there, looked at that; however, sometimes there are artifacts of specs on a chart that were never implemented, as I never found a 940M with GDDR5 or 40GB/s bandwidth anywhere. All to say, customized is customized, and customizing up is much harder than customizing down, but who knows what NVIDIA had on the table to play with. When I look at where they ended up, and then at the 940M or 950M, I think it's a whole lot easier to get to the end state starting from a 950M. That in no way suggests its end-state performance level. Some things don't add up, almost as if the GPU-Z detection is flat wrong.

The 940M has 2 versions. (I know, it is a mess)
GM107 and GM108.
GM107 uses PCIe x16, while GM108 uses PCIe x8 (version 3.0 on both), and there are other variations in the specs.
They both support GDDR5 and DDR3 (not GDDR3).
Have a look at this: GeForce 900 series - Wikipedia, the free encyclopedia (go to the 940M row in the table).
You will notice that the 940M GM107 model is a slightly reduced 950M.
I think the GPU in the Surface Book is a GM108 version of the 940M, because it is on PCIe 3.0 x8, downclocked a bit, with elements of the GM107 model and on GDDR5.
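To sanity-check the 40GB/s figure from a couple of posts up, the memory bandwidth math is simple: effective transfer rate times bus width in bytes. The clocks below are the approximate published figures for the two 940M memory configurations, so treat them as ballpark:

```cpp
#include <cstdio>

// Bandwidth = effective transfer rate (MT/s) * bus width (bytes).
// Clocks are approximate published values for the two 940M variants.
int main() {
    const double bus_bytes = 64 / 8.0;   // 64-bit memory bus on both variants

    const double ddr3_mts  = 2000.0;     // ~2000 MT/s effective (DDR3)
    const double gddr5_mts = 5000.0;     // ~5000 MT/s effective (GDDR5)

    std::printf("DDR3  940M: ~%.0f GB/s\n", ddr3_mts  * bus_bytes / 1000.0); // ~16
    std::printf("GDDR5 940M: ~%.0f GB/s\n", gddr5_mts * bus_bytes / 1000.0); // ~40
    return 0;
}
```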
I think you just said what I said: it's harder to stretch silicon or add in missing pieces than it is to disable stuff you don't need. There are certain elements of this mystery GPU that seem to exceed the standard 940M. If I were making this, I'd take a 950M, blow off the parts I don't need or had to eliminate to fit the thermal envelope, and call it done. Much easier than masking a new part, even if I already have all the changes on file in the library. Although perhaps NVIDIA was going to make a 945x model anyway.

GPU-Z uses NVIDIA driver APIs to offer all it does. (Same for all overclocking software; it is all NVIDIA API going through their drivers, aside from GPU firmware replacement.)
It is easy to go up as well as down, except if you already have the highest-end GPU; then there is nothing to go up to.
The reason is the way processors are made. They usually produce the high-end chip, and dies that fail the specification/requirements are binned into groups and averaged down: fuses are blown to disable the broken cores (or working cores that need to be disabled to match the lower spec), the frequency and voltages are set to a very stable state, and voila! A lower-end GPU. So you can go up by starting from a higher-end model chip.
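On the GPU-Z point above: GPU-Z itself goes through NVAPI, but NVIDIA's publicly documented NVML gives a feel for the kind of driver queries involved. A minimal sketch, assuming the NVML headers/library shipped with the driver or CUDA toolkit:

```cpp
// Minimal sketch of querying GPU info through an NVIDIA driver API (NVML).
// The point: tools like GPU-Z just report what the driver exposes.
#include <nvml.h>
#include <cstdio>

int main() {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        unsigned int coreMHz = 0, memMHz = 0;

        nvmlDeviceGetName(dev, name, sizeof(name));
        nvmlDeviceGetClockInfo(dev, NVML_CLOCK_GRAPHICS, &coreMHz);
        nvmlDeviceGetClockInfo(dev, NVML_CLOCK_MEM, &memMHz);

        std::printf("%s: core %u MHz, memory %u MHz\n", name, coreMHz, memMHz);
    }

    nvmlShutdown();
    return 0;
}
```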
Not really. A GeForce 980 will do the trick, to be honest. Now, if you want max max max settings, you need lots of memory speed and quantity. Actually, in most cases you need either one or the other; it really depends on how the game engine loads textures. But you want both if you want great performance across the board. So you want something like a fast GPU with 8GB of HBM2 memory. That is memory that sits right next to the GPU. AMD has a graphics card with this technology (HBM1), but due to limitations of HBM1, it can only support 4GB.
All to say, next-generation GPUs should allow you to play games at max settings across the board at 4K. But currently, if you are content with 'high' in most games, then a GTX 980 should do the trick; the 980 Ti will help in some games due to the increased memory.