f13.net  |  f13.net General Forums  |  The Gaming Graveyard  |  MMOG Discussion  |  Eve Online  |  Topic: Live devblog - tons'o'changes 0 Members and 1 Guest are viewing this topic.
Author Topic: Live devblog - tons'o'changes  (Read 21214 times)
Morat20
Terracotta Army
Posts: 18529


Reply #70 on: March 26, 2008, 04:26:31 PM

So their servers are running python  ACK!

Well, I guess CPU isn't the bottleneck of any MMO server (it might be on an MMOFPS or something that actually needs to do complex collision detection). Networks and databases are still slow.
Bottleneck is grid-loading, which is DB+network+usual lag.

DB, from their devblog on it, is insane. While I'm sure their SQL could use optimization (both in the actual calls and in how it's used in code), since that's true of everyone, the last SQL bottleneck they were facing was disk reads -- they were queuing up so many DB calls that while the DB engine was fine, the disks couldn't keep up. They swapped in solid-state storage (and made a number of other changes) and things were really smooth until people started tripling the number of pilots in fleet ops.

I think Bhodi was saying that they're working on drastically reducing the amount of data they need to shove down the pipe for grid-loading.
nurtsi
Terracotta Army
Posts: 291


Reply #71 on: March 27, 2008, 02:03:10 AM

I found an old presentation by CCP on stackless Python. One of the slides contains a diagram of the server as it was in Oct 2004. Anyway, even the client seems to be written in stackless Python (except for the parts that need to be fast like graphics etc).

Quote
(Edit to add:  I might be stupid, but I can't really find a big difference between microthread and multithreaded.)

You can compare microthreads and threads. Using multiple threads is multi-threading, and in that sense using multiple microthreads is multi-threading too. Traditional multi-threading means one process running multiple threads; with 'real' threads, the OS can schedule each of those threads on a separate core. In stackless Python, all of the microthreads run inside one thread (and thus on one core).

Why would you want to use microthreads then if they don't take advantage of the new cool multi-core hardware?

Microthreads in stackless Python are called tasklets. The reason you want to use them is that they are fast: typically at least 10x faster than normal threads. Of course it would be cool if you could get those 10x faster tasklets to run on multiple cores as well. From what I have read, CCP has hired experts in the past to try to figure out ways around this, but the problem is that to make stackless Python run on multiple cores, you lose all the benefits that made you want to use it in the first place.
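To make the microthread idea concrete, here's a minimal sketch of cooperative scheduling with plain generators -- an approximation only, since Stackless tasklets are a separate Python build with their own API, and all the names here are made up:

```python
# Cooperative "microthreads" sketched with plain generators.
# Everything below runs in a single OS thread: switching is just a
# next() call, no kernel involvement, which is where the speed comes from.
from collections import deque

def tasklet(name, steps):
    """A microthread: yields control back to the scheduler at each step."""
    for i in range(steps):
        yield f"{name} step {i}"

def run(tasklets):
    """Round-robin scheduler: switch between tasklets, no OS threads."""
    queue = deque(tasklets)
    trace = []
    while queue:
        t = queue.popleft()
        try:
            trace.append(next(t))
            queue.append(t)          # not finished: back of the queue
        except StopIteration:
            pass                     # tasklet done
    return trace

trace = run([tasklet("a", 2), tasklet("b", 2)])
# Interleaved: ['a step 0', 'b step 0', 'a step 1', 'b step 1']
```

Note the scheduler only ever switches where a tasklet yields -- that's the cooperative part.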

There are some ways around this limitation of course. In order to use multiple cores with Python you have to use processes. You can launch multiple Python interpreters and each of them can run on a different core. But then you need inter-process communication which is a pain compared to just using multiple threads inside a single process (or multiple microthreads inside a single thread).

AFAIK: The EVE server has many CPUs for simulating the solar systems. Each CPU can simulate one solar system or several, but a single solar system can't be simulated by multiple CPUs. Also, the server does not support dynamic load-balancing, i.e. the CPUs are allocated at downtime. So each day at downtime, they check which systems are empty or have very few people in them and put many of those systems on a single CPU. Then they see that Jita has a crapload of people, so they allocate one CPU just to run Jita and nothing else, etc.
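The downtime allocation described above could be sketched as a greedy packing of systems onto nodes, busiest first, so the heaviest system effectively gets a node to itself. System names, populations, and node count are all made up here; CCP's actual scheme isn't public:

```python
# Static load-balancing sketch: assign solar systems to CPU nodes once,
# at downtime, by yesterday's population. Busiest systems are placed
# first, each onto the least-loaded node so far.
def allocate(systems, nodes):
    """systems: {name: population}; returns {node_index: [system names]}."""
    load = {n: 0 for n in range(nodes)}
    placement = {n: [] for n in range(nodes)}
    for name, pop in sorted(systems.items(), key=lambda kv: -kv[1]):
        n = min(load, key=load.get)      # least-loaded node so far
        placement[n].append(name)
        load[n] += pop
    return placement

systems = {"Jita": 900, "Amarr": 300, "Rens": 200, "Deep-null": 5}
print(allocate(systems, nodes=2))
# {0: ['Jita'], 1: ['Amarr', 'Rens', 'Deep-null']}
```

The weak spot is exactly what the post describes: the populations are yesterday's, so a fleet jumping into "Deep-null" today overloads a node that was packed assuming it would stay empty.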

This is why (I think) you can sometimes lag an entire region by jumping large numbers of people into a system that didn't have many people in it at downtime. As the CPU suddenly has to do a lot more work (or wait for the network/databases), every solar system simulated on that CPU feels it.
ajax34i
Terracotta Army
Posts: 2527


Reply #72 on: March 27, 2008, 07:57:31 AM

So, basically, there's not much that they can do to improve lag.  Biggest thing would be to somehow convince players to:

a.  not blob so much
b.  spread out instead of concentrating in Jita/Empire

Not sure if it's possible.
tazelbain
Terracotta Army
Posts: 6603

tazelbain


Reply #73 on: March 27, 2008, 08:00:17 AM

tax overcrowded regions.

"Me am play gods"
Simond
Terracotta Army
Posts: 6742


Reply #74 on: March 27, 2008, 11:07:01 AM

Revert the HP boosts on capitals, supercaps and POSes.
Knock 10% off all of their resistances, while they're at it.

"You're really a good person, aren't you? So, there's no path for you to take here. Go home. This isn't a place for someone like you."
Quinton
Terracotta Army
Posts: 3332

is saving up his raid points for a fancy board title


Reply #75 on: March 27, 2008, 11:54:46 AM

Quote
Microthreads in stackless Python are called tasklets. The reason you want to use them is because they are fast. Typically they are at least 10x faster than normal threads. Of course it would be cool if you could get those 10x faster tasklets to run on multiple cores as well. From what I have read, CCP has hired experts in the past to try to figure out ways around this, but the problem is that to get stackless Python run on multiple cores, you will lose all the benefits it brings that were the reason you wanted to use it in the first place.

The savings come from not having to make a full system call for context switches.  For example, on an ARM9 CPU (near and dear to my heart), you can switch contexts (save/restore registers, swap stacks) to another thread in 180 cycles.  Doing this at the kernel/syscall level (on Linux) costs 10-20x as much.

The downside is that you get no kernel support so you have to jump through hoops on syscalls, do *all* IO asynch because any blocking operation blocks all threads, and often you tend to do cooperative threading because doing preemptive micro/userspace threads with timers is 1. gross 2. hard and 3. brings you more overhead.
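A minimal sketch of the "all IO async" point using Python's selectors module: one selector in one OS thread multiplexes every socket, and a blocking recv() anywhere would stall every microthread at once. The socketpair here is just a stand-in for real network connections:

```python
# Why user-space threading forces non-blocking IO: the one real thread
# must never block, so every socket is set non-blocking and a selector
# tells us which ones are actually ready before we touch them.
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()
for s in (a, b):
    s.setblocking(False)               # never block the one real thread
    sel.register(s, selectors.EVENT_READ)

a.send(b"ping")
received = None
while received is None:
    for key, _ in sel.select(timeout=1):
        received = key.fileobj.recv(16)  # readiness confirmed: won't block

print(received)                          # b'ping'
sel.close(); a.close(); b.close()
```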

Python is not inherently multi-core/multi-thread friendly.  It has one big "interpreter lock" that must be held while executing the bytecode which pretty much kills performance right there.  Moving from big global locking around the interpreter to lightweight synchronization is a really large and difficult change.  One of the things I will say about Java (which has plenty of horrible horrible issues) is that they built support for multithreaded runtimes into the VM design, so it Just Works -- I wish some more of the nice little interpretive languages did that.
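A quick way to see that interpreter lock in action (a sketch; standard CPython is assumed and exact timings vary by machine): CPU-bound work split across two threads runs no faster than doing it serially, because only one thread may execute bytecode at a time:

```python
# The GIL in practice: two threads of pure-Python CPU work take about as
# long as running the same work back to back, since the interpreter lock
# serializes bytecode execution.
import threading
import time

def burn():
    n = 0
    for i in range(2_000_000):
        n += i
    return n

def timed(fn):
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def two_threads():
    ts = [threading.Thread(target=burn) for _ in range(2)]
    for t in ts: t.start()
    for t in ts: t.join()

serial = timed(lambda: (burn(), burn()))
threaded = timed(two_threads)
print(f"serial: {serial:.2f}s  threaded: {threaded:.2f}s")
```

The same experiment with IO-bound work (e.g. sleeping or waiting on sockets) does speed up with threads, because the lock is released while blocked -- which is exactly why the GIL hurts simulation servers more than, say, web scrapers.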

- Q

Powered by SMF 1.1.10 | SMF © 2006-2009, Simple Machines LLC