September 16th, 2008, 07:27 PM
How to limit the running time of an algorithm?
Ok let's throw this one out there.
If I'm trying to implement an algorithm for a game, but its running time is not deterministic, e.g. A* or some other pathfinding algorithm, how should I code it so that it can execute across a couple of game ticks, rather than waiting for it to be solved?
September 16th, 2008, 10:45 PM
You could program it such that a call to run the algorithm invokes its execution as a child process (or background thread) of the parent program. That way you don't have to 'wait' for the algorithm to finish before continuing with the rest of the program. The implementation would obviously depend on the programming language you are using.
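To illustrate the idea above, here is a minimal sketch in Python using a background thread and a queue: the main loop starts the search, then polls for a result each tick instead of blocking. `find_path` is a hypothetical stand-in for the real pathfinding routine, not an actual library function.

```python
import threading
import queue

def find_path(start, goal):
    # Hypothetical long-running pathfinding routine; stands in for A* etc.
    return [start, goal]

result_queue = queue.Queue()

def worker(start, goal):
    # Run the search off the main thread and hand the result back.
    result_queue.put(find_path(start, goal))

# Kick off the search without blocking the game loop.
t = threading.Thread(target=worker, args=((0, 0), (5, 5)))
t.start()

# Each game tick, poll for a finished result instead of waiting on it.
path = None
while path is None:
    try:
        path = result_queue.get_nowait()
    except queue.Empty:
        pass  # no result yet; a real game loop would render/update here
t.join()
print(path)
```

The queue is the important part: the worker never touches game state directly, it only hands a finished result back, which the main loop picks up at a point of its own choosing.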
The final approach really depends on how you want it to run.
Last edited by Muzza; September 16th, 2008 at 10:48 PM.
September 17th, 2008, 12:18 AM
Let's say I don't want to fork anything to handle this; are there any techniques to accomplish it? How do large games handle this? Or do they just allow it to impact the gameplay?
September 17th, 2008, 12:27 AM
I'm not really up on how 'large' games handle it. It really depends on what the algorithm is trying to achieve, i.e. how does it impact the game? Is it randomly invoked, or is it invoked at a particular time? Is it invoked by a particular action from a user, or is it user-independent?
September 18th, 2008, 12:37 AM
I guess in an event-driven model, where the user can raise events such as keystrokes and mouse clicks that the program has to handle, it's not a good idea to handle them asynchronously: if they affect updates to game data, you could get inconsistent results, e.g. a game unit moving away from a spot after it's already been decided that a collision occurred there.
I was thinking that interrupting whatever algorithm is running, and storing its state for the next game tick, would be ideal. Just wondering if there are any other solutions out there.
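The interrupt-and-resume idea can be sketched with a coroutine-style search: write the search so it pauses after a fixed budget of node expansions, and have the game loop resume it once per tick. This is a minimal sketch, not a full A*; it uses a plain breadth-first search on a hypothetical grid, with Python's generators supplying the suspend/resume mechanics.

```python
from collections import deque

def incremental_bfs(grid, start, goal, budget=10):
    """Breadth-first search that yields control every `budget` expansions.

    `grid` is a set of walkable (x, y) cells. The generator yields while
    still searching; when done, its return value (the path, or None if
    unreachable) arrives via StopIteration.value.
    """
    frontier = deque([start])
    came_from = {start: None}
    expansions = 0
    while frontier:
        node = frontier.popleft()
        if node == goal:
            # Reconstruct the path by walking parent links back to start.
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and nxt not in came_from:
                came_from[nxt] = node
                frontier.append(nxt)
        expansions += 1
        if expansions % budget == 0:
            yield  # pause here; the game loop resumes us next tick
    return None

# Simulated game loop: advance the search by one slice per tick.
grid = {(x, y) for x in range(6) for y in range(6)}
search = incremental_bfs(grid, (0, 0), (5, 5), budget=5)
path = None
ticks = 0
while True:
    ticks += 1
    try:
        next(search)        # run one time slice of the search this tick
    except StopIteration as done:
        path = done.value   # search finished; collect the result
        break
print(ticks, path)
```

The same pattern applies to A*: the state that survives between ticks is just the frontier and the parent map, so "interrupting" the algorithm is nothing more than returning control to the loop and picking those structures back up next tick. Because each slice runs on the main thread between event handling and updates, it also avoids the asynchronous-update problem described above.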