Topic: Circling the drain. Tim Sweeney says that the GPU is dead. (Read 5135 times)
IainC
Developers
Posts: 6538
Wargaming.net
At the recent High Performance Graphics (HPG) conference, Tim Sweeney of Epic Games predicted the end of the dedicated GPU and a return to software rendering as processors get faster and gain more cores. This is apparently the future of graphics hardware.
IainC
Developers
Posts: 6538
Wargaming.net
Found a link to the keynote itself - PDF link.
NiX
Wiki Admin
Posts: 7770
Locomotive Pandamonium
I think this title officially died. Right here. It's all your fault, Iain.
Murgos
Terracotta Army
Posts: 7474
Is there a recording of the actual presentation? I'd like to hear it as there is some ambiguity in the slides that I am sure he talks about while presenting.
I don't know that the GPU is dead, but I think that as a separate piece of dedicated hardware it might have peaked, or at least has significant competition coming in the next few years.
"You have all recieved youre last warning. I am in the process of currently tracking all of youre ips and pinging your home adressess. you should not have commencemed a war with me" - Aaron Rayburn
Sky
Terracotta Army
Posts: 32117
I love my TV an' hug my TV an' call it 'George'.
We're definitely in interesting times for processing power. The more advanced NVIDIA stuff is just sick, though. I have a hard time believing that's a dead-end tech; hardware programmable shaders would probably just get refined to run off CPU cores or something.
I do prefer the CPU-generated shadows in EQ2 over the GPU-generated ones.
Murgos
Terracotta Army
Posts: 7474
I don't think it's a dead-end tech. I think it's just going to get largely incorporated onto the same die as the CPU.
"You have all recieved youre last warning. I am in the process of currently tracking all of youre ips and pinging your home adressess. you should not have commencemed a war with me" - Aaron Rayburn
Mrbloodworth
Terracotta Army
Posts: 15148
I don't think it's anywhere near dead in real-time rendering, as he points out. However, for high-resolution CG, maybe, but not forever. A good deal of rendering packages use CPU-based solutions, and some use both. I think the main issue here is that movie houses typically use home-brew renderers, which is mostly what the article is about. And a good number of them are overkill, especially when movies like this can be done with off-the-shelf software, at times even game software.
« Last Edit: August 13, 2009, 09:28:59 AM by Mrbloodworth »
Kitsune
Terracotta Army
Posts: 2406
Given that both Windows 7 and OS X 10.6 use the GPU for added processing oomph, I'd say it's sort of the very opposite of dead. Processors are getting more and more powerful by the minute, yes, but you can't escape the simple fact that two big processors working in your computer are better than one. There may come a time when a hundred-core CPU is more economically viable than a CPU and GPU working together, but until then I very much doubt that nVidia is going quietly into the night.
Sky
Terracotta Army
Posts: 32117
I love my TV an' hug my TV an' call it 'George'.
What are high-end transfer rates and throughput on a decent FSB vs. a PCIe interface?
Miguel
Terracotta Army
Posts: 1298
कुशल
Quote from: Sky
What are high-end transfer rates and throughput on a decent FSB vs. a PCIe interface?
Not directly applicable, since not all graphics data is transferred across PCIe. A more apt comparison is CPU memory bandwidth vs. GPU framebuffer bandwidth. The fastest Nehalem processors can have peak memory BW in the 30+ GB/s range. I think the memory bandwidth king right now is probably an ATI RV7xx-class GPU with GDDR5: this peaks at 120+ GB/s. Most of the difference is due to DDR3 versus GDDR5: each DIMM is 64 bits running at about 670 MHz tops, so triple channel gives you 192 bits at 670 MHz. GDDR5 interfaces are often 256 bits running at 2 GHz DDR. From a framebuffer BW perspective, CPUs don't even come close in raw transfer performance; however, they are also general purpose, whereas FB accesses are tightly regulated by the GPU hardware.
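To make the arithmetic concrete, here is a back-of-the-envelope sketch in C. The bus widths and transfer rates are the rough figures quoted above, not exact part specs, and the helper name is just illustrative.

#include <stdio.h>

/* Peak theoretical bandwidth = (bus width in bytes) x (transfers per second). */
static double peak_gb_per_s(int bus_bits, double megatransfers_per_s)
{
    return (bus_bits / 8.0) * megatransfers_per_s / 1000.0; /* GB/s */
}

int main(void)
{
    /* Triple-channel DDR3: 3 x 64-bit channels at ~1333 MT/s (~667 MHz DDR) */
    printf("DDR3, triple channel: ~%.0f GB/s\n", peak_gb_per_s(3 * 64, 1333.0));

    /* GDDR5: 256-bit interface at ~4000 MT/s effective ("2 GHz DDR") */
    printf("GDDR5, 256-bit:       ~%.0f GB/s\n", peak_gb_per_s(256, 4000.0));
    return 0;
}

That lands right around the 30+ GB/s and 120+ GB/s figures above.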
“We have competent people thinking about this stuff. We’re not just making shit up.” -Neil deGrasse Tyson
Trippy
Administrator
Posts: 23657
Okay, Sweeney's presentation leaves out one very important detail, though his audience would know this.
But first the background:
Because of the differences in the way they are performed at the chip level, math operations on CPUs/GPUs are separated into two categories: integer operations and floating point (decimal) operations.
CPUs/GPUs can be optimized to handle one set (integer or FP) better than the other. You have a finite amount of space on the chip die (surface) so you (the chip designer) get to decide how much "real estate" to devote to integer calculations versus FP calculations.
Your standard desktop apps (Word, Outlook, IE/Firefox, etc.) are almost entirely based on integer calculations. As such, "general purpose" CPUs like the Athlon 64 and Intel Core favor integer operation speed over FP.
"Scientific" applications, which include graphics rendering, are the reverse and are extremely floating point intensive. So there are CPUs that are optimized for FP operations for those sorts of applications. This is also why, when you see the latest and greatest supercomputer announced, they always say how many floating point operations per second (FLOPS) it can perform.
Modern GPUs are effectively specialized FP CPUs.
So here's the key point he left out:
Current desktop CPUs do not have the FP power to handle the graphics rendering demanded by today's 3D games. E.g., a quad-core i7 can do about 70 GFLOPS (70 billion FLOPS), while a high-end desktop GPU can do over 1 TFLOP (a trillion FLOPS).
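As a rough sanity check on those numbers, here is a back-of-the-envelope peak-FLOPS sketch in C. The core counts, clocks, SIMD widths, and ops-per-cycle figures are illustrative assumptions, not exact specs for any particular chip.

#include <stdio.h>

/* Theoretical peak FLOPS ~= (FP lanes) x (clock) x (FP ops per lane per cycle).
   Sustained, real-world numbers come in well under these peaks. */
static double peak_gflops(int fp_lanes, double clock_ghz, int ops_per_cycle)
{
    return fp_lanes * clock_ghz * ops_per_cycle;
}

int main(void)
{
    /* Quad-core CPU, 4-wide SSE per core, assuming one add + one mul per cycle */
    printf("Quad-core CPU peak: ~%.0f GFLOPS\n", peak_gflops(4 * 4, 3.0, 2));

    /* GPU with ~240 stream processors at ~1.4 GHz, counting a multiply-add
       plus an extra multiply as 3 ops per cycle (the usual marketing peak) */
    printf("High-end GPU peak:  ~%.0f GFLOPS\n", peak_gflops(240, 1.4, 3));
    return 0;
}

Those theoretical peaks land in the same ballpark as the ~70 GFLOPS and 1+ TFLOP figures above.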
So very briefly summarizing the rest of his presentation:
In the future, some of the desktop CPUs will be able to do GPU-level FP operations. Intel's Larrabee architecture is the example given in the presentation (the Cell processor has this sort of specialized FP design as well but it's not something you can stick into a desktop PC at the moment).
General purpose CPUs give the graphics engine programmer much more flexibility and control over the rendering process and output. GPUs are too specialized and the limitations can, among other things, cause artifacts in the output.
So once desktop CPUs have the FP power of GPUs it makes sense to move back to doing the rendering on the CPU instead of having a dedicated GPU.
THE END
Goreschach
Terracotta Army
Posts: 1546
No, he isn't specifically talking about video cards going away. That picture was basically just a joke. It doesn't really matter whether the monitor cord plugs into a little dingly, or the motherboard, or a big fuckoff card that won't even fit in my case without liberal use of a Dremel, or whatever.
What he's talking about is the standard fixed graphics pipeline concept that's been used for the past 10 or 15 years being replaced by a more general-purpose architecture. When hardware and software rendering are discussed, 'hardware' really refers to the typical kind of GPU where parts of the rendering process, like culling, z-buffering, and rasterization, are physically built into the transistor logic. 'Software' is just implementing the entire renderer on non-specific hardware. A CPU doesn't have dedicated logic for things like texture filtering, but you can go ahead and program that stuff on it anyway. The PDF is basically arguing that, going into the future, we'll be replacing those specialized, highly optimized GPUs with general-purpose vector processing hardware.
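As a tiny illustration of what 'doing it in software' means, here is a bilinear texture filter, one of the operations a GPU's texture units perform in fixed-function hardware, written as plain C. The texture size and values are made up for the example.

#include <stdio.h>

#define TEX_W 4
#define TEX_H 4

/* Sample a grayscale texture at normalized coordinates (u, v) with bilinear
   filtering: blend the four surrounding texels by their distance. */
static float sample_bilinear(const float tex[TEX_H][TEX_W], float u, float v)
{
    float x = u * (TEX_W - 1), y = v * (TEX_H - 1);
    int x0 = (int)x, y0 = (int)y;
    int x1 = x0 + 1 < TEX_W ? x0 + 1 : x0;
    int y1 = y0 + 1 < TEX_H ? y0 + 1 : y0;
    float fx = x - x0, fy = y - y0;

    float top    = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx;
    float bottom = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx;
    return top * (1 - fy) + bottom * fy;
}

int main(void)
{
    const float checker[TEX_H][TEX_W] = {
        {0, 1, 0, 1}, {1, 0, 1, 0}, {0, 1, 0, 1}, {1, 0, 1, 0}
    };
    /* Sampling between texels returns a blended value, which is exactly what
       the GPU's texture units would compute in hardware. */
    printf("%.2f\n", sample_bilinear(checker, 0.5f, 0.5f));
    return 0;
}

Do that for every stage (transform, rasterize, shade, blend) and you have a software renderer.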
Personally, I'd love to have high-end graphics hardware integrated on the motherboard, as it would make SFF cases so much easier. But I really don't see that happening for a long time.
fuser
Terracotta Army
Posts: 1572
Quote from: Trippy
Okay, Sweeney's presentation leaves out one very important detail, though his audience would know this.
Intel's Larrabee architecture is the example given in the presentation (the Cell processor has this sort of specialized FP design as well but it's not something you can stick into a desktop PC at the moment).
Yeah, he was glossing over the whole aim of a desktop CPU and how it's technically not aimed at FP operations. I'd say another thing that kills it, besides the weak FPU, is the pipeline depth on a modern CPU. What's the depth on an SP in a G200? But hopefully over the upcoming few years the APIs can take advantage of CPU cores for non-FPU-heavy operations to offset a dedicated piece of hardware, i.e. Larrabee. Larrabee will be interesting to watch since, at its core, it's a software renderer + micro OS + tons of short-pipeline x86-compatible processors. Which technically is his point.
Typhon
Terracotta Army
Posts: 2493
Quote from: Trippy
[...] So once desktop CPUs have the FP power of GPUs it makes sense to move back to doing the rendering on the CPU instead of having a dedicated GPU.
THE END
It's a weird argument to be making (yes, I realize it's not your argument), because the CPU is also doing a number of other things besides rendering graphics, like maintaining game state and executing logic trees. If the argument were "soon we'll see CPU makers creating multi-core CPUs with some cores specialized for FP and some specialized for integer, both still as generically programmable as today, while doubling the number of cores in a CPU," then it would make sense that GPUs wouldn't be necessary to play the games that are out today. But there's nothing to prevent GPU makers from moving to a more configurable architecture, which still frees the CPU to do other things besides push pixels at a monitor. It's weird that he doesn't just say, "GPU manufacturers should start creating a more generalized architecture so that it's easier for me to do my job."
Miguel
Terracotta Army
Posts: 1298
कुशल
Quote from: Typhon
It's weird that he doesn't just say, "GPU manufacturers should start creating a more generalized architecture so that it's easier for me to do my job."
Probably because that's basically what a CPU is already doing. ;) So either a) CPUs can adopt more GPU-like functionality, or b) GPUs can generalize. I think he's saying that rendering technologies can more easily adapt to general-purpose CPUs than to fixed-function GPU pipelines. Option A is Larrabee. It uses the same x86 instruction set everyone already uses, with SIMD extensions to make it more vector-processing friendly and a shared cache for inter-core data passing. It's got lots of cores to do work, and those cores can use the same code-generation tools everyone is familiar with, as opposed to CUDA and OpenCL.
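For a flavor of what "the same x86 instruction set with SIMD extensions" looks like in practice, here is a minimal sketch using plain SSE intrinsics. Larrabee's vector units are far wider, and the function and array names here are just illustrative.

#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics */

/* c[i] = a[i] * b[i] for n floats, processed 4 at a time.
   Assumes n is a multiple of 4. */
static void vec_mul(const float *a, const float *b, float *c, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(c + i, _mm_mul_ps(va, vb));
    }
}

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];

    vec_mul(a, b, c, 8);
    for (int i = 0; i < 8; i++)
        printf("%.0f ", c[i]);
    printf("\n");
    return 0;
}

The appeal is that this is still ordinary C compiled with the ordinary x86 toolchain, rather than a separate CUDA or OpenCL toolchain.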
“We have competent people thinking about this stuff. We’re not just making shit up.” -Neil deGrasse Tyson
Mrbloodworth
Terracotta Army
Posts: 15148
Uh, was he even talking about desktop boxes? I thought he was talking about graphics workstations. Maybe I need to read it again.
Trippy
Administrator
Posts: 23657
Quote from: Mrbloodworth
Uh, was he even talking about desktop boxes? I thought he was talking about graphics workstations. Maybe I need to read it again.
Yes, he is, but he's talking about what things might look like starting around 2012 or so. Larrabee, which will be released in 2010, is being designed as a GPU (though more flexible than current GPUs) rather than a general-purpose CPU that you could also use as a GPU. So it's at least a generation after that chip before you might even see a desktop CPU that could also take on the role of a GPU.
sinij
Terracotta Army
Posts: 2597
Not everyone needs high-end graphics processing, so I don't see CPUs being designed around it. They might get good enough to put cheapo built-in video cards out of business, but anything beyond that will be a massive hurdle to producing CPUs at an attractive price point for the typical consumer or small business.
What is more likely to happen is a move back to a co-processor architecture, where instead of a card, you stick a specialized chip into your machine.
Alternatively, GPUs get fast enough and generalized enough that all processing gets moved onto them.
Eternity is a very long time, especially towards the end.
fuser
Terracotta Army
Posts: 1572
Quote from: Trippy
Larrabee, which will be released in 2010, is being designed as a GPU (though more flexible than current GPUs) rather than a general-purpose CPU that you could also use as a GPU. So it's at least a generation after that chip before you might even see a desktop CPU that could also take on the role of a GPU.
Yep, it's way, way out, plus Tim Sweeney has been saying this for the past ~two years (1, 2). It's nothing earth-shatteringly new, but it is getting closer and closer. Larrabee is really interesting, and I'm not trying to sound like I'm fawning over it, but it has some really unique "profiling" features, etc., covered in this Ars article. What would be really interesting is the next-generation console fight, and whether they use a Larrabee as an all-in-one graphics + OS processor vs. NVIDIA and AMD.
Margalis
Terracotta Army
Posts: 12335
The pendulum is always swinging between more specialized and more general hardware. Look at memory for example: unified, not unified, unified, not unified.
The grass is always greener on the other side of the fence.
vampirehipi23: I would enjoy a book written by a monkey and turned into a movie rather than this.
Sky
Terracotta Army
Posts: 32117
I love my TV an' hug my TV an' call it 'George'.
Quote from: Miguel
GDDR5 interfaces are often 256 bits running at 2 GHz DDR.
As an aside, does anyone own a card that's less than 256 bits wide? I know at least as far back as the 9800 Pro I had a 256-bit memory controller, and it made a huuuge difference over an identical computer I had built for a friend who cheaped out with a different card (it may have even been 64-bit at the time, but I think 128?).
Murgos
Terracotta Army
Posts: 7474
Well, part of the problem with Sweeney's hypothesis (that it should all be software anyway) is that it will always be feasible for a hardware vendor to optimize part of the path to improve performance, which then makes supporting that hardware the path-of-least-resistance option.
Most of what he wrote, IMO, comes down to: do you want the hardware to lead the software, or the software to lead the hardware?
I would imagine that, from Sweeney's POV, it would be better to have software practices dictate the hardware architecture (for example, his "I want to use massively parallel vector calculations" example), so that his engineers can dork around with new data structures and programming paradigms and the next-gen hardware will then optimize for them.
It sounds nice, but real-world concerns about the hardware development process lead me to think it's not a very realistic outlook.
"You have all recieved youre last warning. I am in the process of currently tracking all of youre ips and pinging your home adressess. you should not have commencemed a war with me" - Aaron Rayburn
Miguel
Terracotta Army
Posts: 1298
कुशल
The grass is always greener on the other side of the fence rendered with a tile-based raytracer. FIFY in the spirit of this discussion! ;) Raytraced scenes such as this are almost to the point where they are indistinguishable from reality.
“We have competent people thinking about this stuff. We’re not just making shit up.” -Neil deGrasse Tyson
Trippy
Administrator
Posts: 23657
Quote from: Miguel
The grass is always greener on the other side of the fence rendered with a tile-based raytracer. FIFY in the spirit of this discussion! ;) Raytraced scenes such as this are almost to the point where they are indistinguishable from reality.
Ray tracing by itself is insufficient to properly capture all the properties of light in such a scene. "Global illumination" is the name given to the various techniques (which include ray tracing) for rendering light and its effects in such scenes more realistically: http://en.wikipedia.org/wiki/Global_illumination
On the other hand, ray tracing can be used to render photorealistic scenes when the scene is set up specially for that. This is the classic example from 1984 (!) that shows this (scroll down to the last image): http://graphics.pixar.com/library/DistributedRayTracing/paper.pdf
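To make the distinction concrete, here is a toy sketch in C of what plain ray tracing computes: primary visibility plus a direct (Lambert) shading term. The bounced, indirect light that global illumination techniques add is exactly what is missing here; the scene values are made up.

#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } Vec;

static Vec sub(Vec a, Vec b) { return (Vec){a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec norm(Vec a) { double l = sqrt(dot(a, a)); return (Vec){a.x / l, a.y / l, a.z / l}; }

/* Distance along a unit-length ray to the sphere, or -1.0 on a miss. */
static double hit_sphere(Vec origin, Vec dir, Vec center, double radius)
{
    Vec oc = sub(origin, center);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * c; /* dir is unit length, so a == 1 */
    return disc < 0.0 ? -1.0 : (-b - sqrt(disc)) / 2.0;
}

int main(void)
{
    Vec eye = {0, 0, 0}, dir = norm((Vec){0, 0, -1});
    Vec center = {0, 0, -5}, light_dir = norm((Vec){1, 1, 1});

    double t = hit_sphere(eye, dir, center, 1.0);
    if (t > 0.0) {
        Vec p = {eye.x + t * dir.x, eye.y + t * dir.y, eye.z + t * dir.z};
        Vec n = norm(sub(p, center));
        /* Direct (Lambert) term only: no bounced light, no color bleeding. */
        double shade = fmax(0.0, dot(n, light_dir));
        printf("hit at t = %.2f, direct shading = %.2f\n", t, shade);
    }
    return 0;
}

Everything beyond that single direct lighting term is what the "global illumination" umbrella covers.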