Topic: Quad Core gaming potential
Engels
Terracotta Army
Posts: 9029

inflicts shingles.


on: March 31, 2007, 10:03:42 PM

Just caught this on teh youtube. I wonder how many future games are really going to be designed around multiple CPUs, or if this is going to be the exception to the rule.

I should get back to nature, too.  You know, like going to a shop for groceries instead of the computer.  Maybe a condo in the woods that doesn't even have a health club or restaurant attached.  Buy a car with only two cup holders or something. -Signe

I LIKE being bounced around by Tonkors. - Lantyssa

Babies shooting themselves in the head is the state bird of West Virginia. - schild
Strazos
Greetings from the Slave Coast
Posts: 15542

The World's Worst Game: Curry or Covid


Reply #1 on: March 31, 2007, 10:19:45 PM

Well, seeing as multi-core is the future, I'd venture to guess all of them, eventually.

Fear the Backstab!
"Plato said the virtuous man is at all times ready for a grammar snake attack." - we are lesion
"Hell is other people." -Sartre
Trippy
Administrator
Posts: 23623


Reply #2 on: March 31, 2007, 10:44:03 PM

All that nice eye candy and they can't even get the run animation to look right. :rolleyes:

But to answer your question, multi-core is the present. The PS3 and Xbox 360 are both multi-core, and you'd be hard-pressed to find a new PC over $700 or so that doesn't have a dual-core CPU; in another year or so quad-core will be the standard if Intel and AMD have their way. Developers are still getting used to this sort of stuff, so it'll still be a while before PC games support multi-cores/multi-CPUs as a standard feature.
Engels
Terracotta Army
Posts: 9029

inflicts shingles.


Reply #3 on: April 01, 2007, 01:19:12 AM

I guess I should rephrase: do we think that PC game makers are going to go through the effort needed to code for multiple threads? There are various ways of doing so, but it's my understanding that the 'purer' forms of multi-threaded coding are a radical departure from conventional gaming code, and a lot harder to do. I've heard that it's even harder to do on a Windows platform (no idea about Vista).

Couldn't there be an alternate scenario of workarounds that permits coders to continue using their old methodologies, never truly taking advantage of the possibilities? It wouldn't be the first time a piece of computer technology was produced that the rest of the software and/or hardware didn't take advantage of. How many years passed before 8x AGP was even close to being used, only to be ditched by PCI Express before it really came into its own? The hardware industry seems a fickle beast, and the legions of coders a slow force to mobilize.

Trippy
Administrator
Posts: 23623


Reply #4 on: April 01, 2007, 02:16:52 AM

Quote from: Engels
I guess I should rephrase: do we think that PC game makers are going to go through the effort needed to code for multiple threads?
Yes we do.

1) It's in Microsoft's best interest for people to keep gaming on their PCs. They currently either lose or make a small amount of money in their console division (they don't break out the console stuff separately from the entire Entertainment group). In the OS division, on the other hand, they are making so much money they literally have no idea what to do with it all. Until they figure out how to turn their consoles into money-printing machines like their OS and Office businesses, they want PC gamers to continue buying/upgrading their machines, which means buying/upgrading their OSes. This means that the stuff they do on the software side to help developers program the triple-core Xbox 360 is going to trickle down to PC developers in the form of development tools and libraries that make it easier to write PC games that take advantage of multiple cores.

2) Many of the larger development studios do both PC and console development. As people move from project to project the knowledge the console developers gain from programming on the multi-core Xbox 360s and PS3s will transfer over to the PC game developers.

3) It's a competitive advantage to be able to support multi-cores/multi-CPUs on the PC. E.g. with CoX you get a huge frame rate increase on a multi-processor machine, which means people with lower-end video cards can still enjoy the game if they have a dual-core CPU. It also allows developers to put in more advanced features such as better physics or AI, which will help them compete in the marketplace as multi-cores become standard in the installed base of gaming PCs. Conversely, those development studios that can't write multi-threaded games are going to have trouble selling their game ideas to publishers in the future.



Big Gulp
Terracotta Army
Posts: 3275


Reply #5 on: April 01, 2007, 06:36:41 AM

I have some doubts. The C2Ds are all 64-bit, most of the AMD processors are, and what's the state of x64 gaming? Pretty piss-poor. There aren't too many games designed to take advantage of it; most of them just revert to running the same way they would on a 32-bit processor. I'm running an x86 version of Windows right now even though I could run x64, just because it's more convenient for me.

Seems to me they've been fairly lax in truly taking advantage of even fairly prominent technology.
Trippy
Administrator
Posts: 23623


Reply #6 on: April 01, 2007, 07:15:15 AM

Quote from: Big Gulp
I have some doubts. The C2Ds are all 64-bit, most of the AMD processors are, and what's the state of x64 gaming? Pretty piss-poor. There aren't too many games designed to take advantage of it; most of them just revert to running the same way they would on a 32-bit processor. I'm running an x86 version of Windows right now even though I could run x64, just because it's more convenient for me.

Seems to me they've been fairly lax in truly taking advantage of even fairly prominent technology.
You are comparing apples and oranges. How many people are running the 64-bit version of Windows XP? Just about nobody. How many people here are playing games on Vista other than Xerapis?

64-bit gives you a (much) larger memory space and slightly faster operations in some situations. That's nothing like the performance advantage multiple CPUs give you. The modest performance benefits, coupled with the lack of people using 64-bit XP, are why there's only a handful of games that have 64-bit versions available right now. Of course, as the installed base slowly moves over to Vista that'll change. But again, the performance benefits are minimal unless you are playing a monstrosity such as Vanguard, which could probably use 8 to 16 GB of RAM all by itself if it had that much available to it.

Edit: missing words
Murgos
Terracotta Army
Posts: 7474


Reply #7 on: April 01, 2007, 08:34:42 AM

Quote from: Big Gulp
I have some doubts. The C2Ds are all 64-bit, most of the AMD processors are, and what's the state of x64 gaming? Pretty piss-poor. There aren't too many games designed to take advantage of it; most of them just revert to running the same way they would on a 32-bit processor. I'm running an x86 version of Windows right now even though I could run x64, just because it's more convenient for me.

Seems to me they've been fairly lax in truly taking advantage of even fairly prominent technology.

You probably won't start to see a large benefit from 64-bit computing until developers start using vast amounts of RAM to supply data to multiple processors...

The real way multi-processor computing for games will probably end up being done is that the engine creators (Unreal, Quake, etc.) will adopt all the best practices, and development will probably be mostly transparent to the game developers.

"You have all recieved youre last warning. I am in the process of currently tracking all of youre ips and pinging your home adressess. you should not have commencemed a war with me" - Aaron Rayburn
Big Gulp
Terracotta Army
Posts: 3275


Reply #8 on: April 01, 2007, 01:29:49 PM

Quote from: Trippy
How many people here are playing games on Vista other than Xerapis?


Me.  Except, I've pretty much abandoned PC gaming.  The occasional game of Medieval II probably doesn't count as being a "PC gamer" any more.
Samwise
Moderator
Posts: 19226

sentient yeast infection


Reply #9 on: April 01, 2007, 02:19:03 PM

Quote from: Murgos
The real way multi-processor computing for games will probably end up being done is that the engine creators (Unreal, Quake, etc.) will adopt all the best practices, and development will probably be mostly transparent to the game developers.

Bingo.

Another possibility, even lower-level, is graphics, physics, or AI libraries that are built to take advantage of CPUs not in use by the rest of the game.  Much like how graphics libraries currently farm work out to a dedicated GPU, and how we're starting to see rumblings of dedicated physics and AI cards as well.  As multi-CPU setups get more common that may all become irrelevant.
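
Roughly the shape of that idea, as a minimal sketch (C++11 threads used here for brevity; PhysicsWorld, step, and renderFrame are made-up stand-ins rather than any real library): the physics tick runs on a spare core while the main thread renders last tick's state, with a sync point at the end of the frame.

Code:
#include <future>
#include <vector>

struct PhysicsWorld {
    std::vector<float> positions;
    void step(float dt) {                       // one simulation tick
        for (auto& p : positions) p += dt;      // stand-in for real integration
    }
};

void renderFrame(const PhysicsWorld& world) { /* draw using last tick's state */ }

int main() {
    PhysicsWorld world{std::vector<float>(10000, 0.0f)};
    for (int frame = 0; frame < 100; ++frame) {
        PhysicsWorld snapshot = world;            // renderer works from a copy
        // Farm the next physics tick out to another core while this core renders.
        auto physicsJob = std::async(std::launch::async,
                                     [&world] { world.step(1.0f / 60.0f); });
        renderFrame(snapshot);                    // no shared mutable state with step()
        physicsJob.wait();                        // sync before starting the next frame
    }
}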

"I have not actually recommended many games, and I'll go on the record here saying my track record is probably best in the industry." - schild
stray
Terracotta Army
Posts: 16818

has an iMac.


Reply #10 on: April 01, 2007, 02:34:06 PM

I bet physics and AI cards end up disappearing somewhat, with more cores/CPUs doing the job like the consoles do. I do agree with the above about the big engine developers, though. Also, as Trippy somewhat pointed out, the PS3 and 360 will be catalysts as well. Or at the very least, they force developers to get their feet wet.
Krakrok
Terracotta Army
Posts: 2189


Reply #11 on: April 01, 2007, 06:43:34 PM


I saw an article (probably on Slashdot) about a C or C++ compiler that compiles for multicore. It does all the multicore stuff on the back end so you don't have to worry about it.

Supreme Commander uses multicores.

I thought I saw something where Intel was working on 8-core CPUs where each core could be customized (in hardware) for a specific application (or something to that effect). The only article I've found so far that somewhat mentions it is here. It claims CPU and GPU on the same chip but not the same die.

On the previous page of that article it also talks about auto-detecting single-threaded applications that are using the first core heavily, speeding that core up while slowing the unused core, and staying within the thermal limits of the complete CPU.

Here's an Intel press release which lists all the crap.
Trippy
Administrator
Posts: 23623


Reply #12 on: April 01, 2007, 07:42:27 PM

Quote from: Murgos
The real way multi-processor computing for games will probably end up being done is that the engine creators (Unreal, Quake, etc.) will adopt all the best practices, and development will probably be mostly transparent to the game developers.
Quote from: Samwise
Bingo.
That's already been happening for a while now. All the id engines from Quake 3 Arena on have been multi-threaded. Unreal Engine 3 is multi-threaded as well.


Quote from: Krakrok
I saw an article (probably on Slashdot) about a C or C++ compiler that compiles for multicore. It does all the multicore stuff on the back end so you don't have to worry about it.
If only it were that simple.
stray
Terracotta Army
Posts: 16818

has an iMac.


Reply #13 on: April 01, 2007, 09:12:20 PM

There is the Octopiler that IBM is cooking up, but that's Cell-specific.

But I'll admit, I don't know shit about the actual process. I'm guessing the way it helps split routines between cores is superficial, and much of the optimization would still need to be done by hand.
Krakrok
Terracotta Army
Posts: 2189


Reply #14 on: April 02, 2007, 10:11:47 AM

Sairon
Terracotta Army
Posts: 866


Reply #15 on: April 02, 2007, 10:25:49 AM

There's been some talk for quite some time about how to harness all the cores optimally. People are talking about moving over to functional languages instead, since they organize the logic better for multiple processing units, which I think is also the reason for F#. Since I don't have any experience with functional programming languages I don't know for certain, but I don't think they will mix well with standard C++.
Samwise
Moderator
Posts: 19226

sentient yeast infection


Reply #16 on: April 02, 2007, 11:36:39 AM

Most of the problems encountered when doing multithreaded development have to do with state that's being modified by multiple threads concurrently. Functional languages just tend to have less shared state. You could get the same effect in C/C++ by not using any objects or global variables. :P
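
A minimal sketch of that shared-state problem (C++11 threads used for brevity; the counters and iteration counts are arbitrary): the first version has two threads mutating one variable and loses updates, the second gives each thread its own data and only merges at the end, which is essentially what the functional style buys you.

Code:
#include <iostream>
#include <thread>

int main() {
    // Broken: two threads mutate the same variable with no synchronization.
    long shared = 0;
    std::thread a([&shared] { for (int i = 0; i < 1000000; ++i) ++shared; });
    std::thread b([&shared] { for (int i = 0; i < 1000000; ++i) ++shared; });
    a.join(); b.join();
    std::cout << "shared (racy): " << shared << "\n";         // usually < 2000000

    // No shared mutable state: each thread owns its data, merged once at the end.
    long partA = 0, partB = 0;
    std::thread c([&partA] { for (int i = 0; i < 1000000; ++i) ++partA; });
    std::thread d([&partB] { for (int i = 0; i < 1000000; ++i) ++partB; });
    c.join(); d.join();
    std::cout << "merged (safe): " << partA + partB << "\n";  // always 2000000
}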

"I have not actually recommended many games, and I'll go on the record here saying my track record is probably best in the industry." - schild
Yegolev
Moderator
Posts: 24440

2/10 WOULD NOT INGEST


Reply #17 on: April 02, 2007, 01:16:53 PM

IBM has a C compiler that does 64-bit and SMT and whatnot for AIX already. As Trippy alluded, this does not mean that the programmer can just code away like normal and magically have a multithreaded program. Something funny to consider is that we did not bother to have the AIX kernel run in 64-bit mode for a long time even though we were running 64-bit Oracle. It's all about address space.

I will also agree with the prediction of the comeback of the general-purpose processor as SMT trickles down to the masses. Four cheap general-purpose processors could beat a special-purpose processor at much lower cost, in theory, depending on Intel masterplans and whatnot. Also: processor virtualization. Got a while before that drops down to the PC, I expect.

Why am I homeless?  Why do all you motherfuckers need homes is the real question.
They called it The Prayer, its answer was law
Mommy come back 'cause the water's all gone
Trippy
Administrator
Posts: 23623


Reply #18 on: April 02, 2007, 07:32:49 PM

As I said above, if only it were that simple. Here's what you have to do to use CodePlay's Sieve compiler, according to their white paper:

Quote
The process a programmer would use to develop software using the sieve system is:
1. Take an existing non-parallel piece of C++ software.
2. Identify a section of the software that is suitable for parallelization.
3. Mark the section with the sieve marker.
4. Mark any functions called by the section with sieve function specifiers.
5. Compile the code and fix any errors reported due to mis-matched sieve levels.
6. Run and test on a single-core processor.
7. Make any adaptations required to keep the program working using sieve semantics.
8. Run the compiler in a special mode that advises the user of situations where dependencies are blocking parallelism.
9. Use the split/merge system to fix any dependencies reported (see below).
10. Compile, run and test on a single-core processor.
11. Once working on a single core processor, compile, run, test and performance analyse on a multi-core processor.
12. Go back to step 2 if there are any more sections suitable for parallelization.

That hardly seems like it "does all the multicore stuff on the back end so you don't have to worry about it."

Here's what Tim Sweeney from Epic had to say about OpenMP, a "competitor" to Sieve (though of course CodePlay claims they are better):
Quote
AnandTech: Did you make use of auto-parallelisation compiler technology (like the auto parallelisation found in Intel C++ compiler) to make the engine multithreaded?

Tim Sweeney: Auto-parallelization of C++ code is not a serious notion. This falls in the same category as the Intel compiler's strip-mining optimizations and other such tricks, which are designed to speed up one particular loop in one particular SpecFP benchmark. These techniques applied to C/C++ programs are completely infeasible on the scale of real applications.

AnandTech: What about OpenMP?

Tim Sweeney: There are two parts to implementing multithreading in an application. The first part is launching the threads and handing data to them; the second part is making the appropriate portions of your 500,000-line codebase thread-safe. OpenMP solves only the first problem. But that's the easy part - any idiot can launch lots of threads and hand data to them. Writing thread-safe code is the far harder engineering problem and OpenMP doesn't help with that.

In other words you can't just take any arbitrary piece of C/C++ code, run it through one of these "parallelizing" compilers and expect to get a multi-threaded application out the other end -- C/C++ coding just doesn't work that way.
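
To make that concrete, here's a tiny hypothetical OpenMP sketch (compile with something like g++ -fopenmp): the pragma launches the threads and hands them loop iterations for free, but it does nothing to make the loop body thread-safe -- knowing that the first loop is a data race, and that the second needs a reduction, is still entirely on the programmer.

Code:
#include <cstdio>

int main() {
    const int N = 1000000;
    long racy = 0;
    long safe = 0;

    // The easy part: OpenMP spins up a thread pool and splits the iterations.
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
        racy += 1;                // shared variable, unsynchronized: wrong total

    // The hard part is still yours: you have to know this needs a reduction.
    #pragma omp parallel for reduction(+:safe)
    for (int i = 0; i < N; ++i)
        safe += 1;

    std::printf("racy=%ld safe=%ld expected=%d\n", racy, safe, N);
    return 0;
}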

Edit: Tim Sweeney not Mark Rein
bhodi
Moderator
Posts: 6817

No lie.


Reply #19 on: April 02, 2007, 07:56:51 PM

Don't the DirectX API and MS MFC do auto-parallelization for you? It may not work on your main code, but I know that sound processing can be offloaded to a secondary processor, and if some video pre-processing can be too, it's a step in the right direction. I'd think you could offload things like AI code, networking code, etc.

I'm not a serious programmer, of course, more of a scripter... so excuse the stupid questions.
Quinton
Terracotta Army
Posts: 3332

is saving up his raid points for a fancy board title


Reply #20 on: April 02, 2007, 09:25:34 PM

Quote from: stray
I bet physics and AI cards end up disappearing somewhat, with more cores/CPUs doing the job like the consoles do. I do agree with the above about the big engine developers, though. Also, as Trippy somewhat pointed out, the PS3 and 360 will be catalysts as well. Or at the very least, they force developers to get their feet wet.

I'm very skeptical of dedicated physics and AI cards ever taking off -- too special-purpose and not worth the effort to support. Multi-core is where we are and where we're going (at least for the time being) -- the number of gates you can put on a die keeps going up, and multiple cores are a great way to take advantage of that.

Game programmers sure are a whiny bunch though! Everything is always too hard or too expensive to program ^^

I look at the PS3 and see some really nifty architecture ("wow, that looks like a blast to figure out how to make really fly"), and gamedev shops seem to just whine about it being different. I assume there are some decent ones out there spending their time figuring out how to do super cool stuff with it instead of bitching about it.

-Q
Krakrok
Terracotta Army
Posts: 2189


Reply #21 on: April 02, 2007, 10:27:57 PM

Quote from: Tim Sweeney
any idiot can launch lots of threads and hand data to them

Woo, that's me. I've been cheating on threads with Delphi for 10 years.
Sairon
Terracotta Army
Posts: 866


Reply #22 on: April 07, 2007, 04:01:36 AM

Quote from: Quinton
I look at the PS3 and see some really nifty architecture ("wow, that looks like a blast to figure out how to make really fly"), and gamedev shops seem to just whine about it being different. I assume there are some decent ones out there spending their time figuring out how to do super cool stuff with it instead of bitching about it.

I think it's mostly the execs who are bitching about the increased costs. Most techies I've heard of find it intriguing to work on a new architecture. After all, this sort of problem solving is the fun part of programming. Routine work ftl.
Stephen Zepp
Developers
Posts: 1635

InstantAction


Reply #23 on: April 07, 2007, 04:47:13 PM

Quote from: Murgos
The real way multi-processor computing for games will probably end up being done is that the engine creators (Unreal, Quake, etc.) will adopt all the best practices, and development will probably be mostly transparent to the game developers.
Quote from: Samwise
Bingo.

Another possibility, even lower-level, is graphics, physics, or AI libraries that are built to take advantage of CPUs not in use by the rest of the game. Much like how graphics libraries currently farm work out to a dedicated GPU, and how we're starting to see rumblings of dedicated physics and AI cards as well. As multi-CPU setups get more common that may all become irrelevant.

Jumping into the middle of a thread here so this may already have been summarized, but some points:

--Ageia's physics cards are by definition multi-CPU, and their API is written for it. The card has 8 physics CPUs on board.
--Even without totally re-writing game engines, pushing off AI, physics, or even resource loading (moving stuff from CD to disk, and from disk to memory) onto other threads is relatively easy, and it makes for great end-user appreciation (no more disk loading blocking your gameplay, etc.); there's a rough sketch of the loading case after this list.
--Many game engines are working out the best practices now. Hell, Torque Game Engine-Advanced's Atlas terrain is already fully multi-threaded and, even without a second CPU, gives a 15% overall performance increase with huge capability gains -- and we're definitely not the leading-edge game engine company when it comes to R&D.
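
For the resource-loading point above, a rough sketch of the shape (C++11 std::async used for brevity; loadTextureFromDisk and drawLoadingSpinner are hypothetical names, and a real engine would use its own job/thread system rather than spawning a thread per asset): the load runs on another thread while the game loop keeps drawing frames, and the result is picked up once it's ready.

Code:
#include <chrono>
#include <future>
#include <string>
#include <thread>
#include <vector>

std::vector<char> loadTextureFromDisk(const std::string& path) {
    std::this_thread::sleep_for(std::chrono::milliseconds(500));  // pretend disk I/O
    return std::vector<char>(1024, 0);
}

void drawLoadingSpinner() { /* keep rendering frames while the disk churns */ }

int main() {
    // Fire off the load on another thread; the game loop never blocks on it.
    auto pending = std::async(std::launch::async, loadTextureFromDisk, "level1.tex");

    while (pending.wait_for(std::chrono::milliseconds(0)) != std::future_status::ready)
        drawLoadingSpinner();                   // frames keep running during the load

    std::vector<char> texture = pending.get();  // hand the finished asset to the game
    (void)texture;
    return 0;
}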

Rumors of War
Samwise
Moderator
Posts: 19226

sentient yeast infection


Reply #24 on: April 08, 2007, 10:49:16 AM

Quote from: Quinton
I look at the PS3 and see some really nifty architecture ("wow, that looks like a blast to figure out how to make really fly"), and gamedev shops seem to just whine about it being different. I assume there are some decent ones out there spending their time figuring out how to do super cool stuff with it instead of bitching about it.

Quote from: Sairon
I think it's mostly the execs who are bitching about the increased costs. Most techies I've heard of find it intriguing to work on a new architecture. After all, this sort of problem solving is the fun part of programming. Routine work ftl.

Porting isn't really all that fun.  Mostly it's just solving the same problem you already solved but having to do it in a slightly different way.  I suspect the people who are bitching are the ones who have already written their game for X architecture and have been told that it's supposed to be available on Y architecture within the month, and now they find out that rather than just flipping a few compiler settings and rebuilding, to get things running well on Y architecture they'll need to rewrite the engine.

"I have not actually recommended many games, and I'll go on the record here saying my track record is probably best in the industry." - schild