Thesis idea: offline collision detection
Aug 17 2014

Hi, I'm currently working on my second degree in computer science, and I have a thesis idea - but neither I nor my advisor has enough experience in the field of collision detection to know whether it is relevant or interesting today.
The idea is, basically, to compute the point of collision of a specific pair of complex objects offline, in various situations - angles, velocities, etc. (yes, MANY combinations and a lot of storage). When the bounding spheres of these objects are predicted to collide in real time, the point of collision can be extracted from the pre-computed database, and no real-time detection is required. The goal, as you might have gathered, is to increase FPS, allow collisions between more complex objects, and enable collisions in environments with limited resources.
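Roughly, the runtime side I have in mind would look something like this (a minimal C++ sketch; all the names and the exact key layout here are made up for illustration, not taken from any existing engine):

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>

// Hypothetical quantized description of an imminent collision between one
// known pair of rigid bodies: relative orientation and speed, snapped to
// the same discrete grid used during offline precomputation.
struct CollisionKey {
    std::int16_t yaw, pitch, roll;   // relative orientation, quantized
    std::int16_t speed;              // relative speed, quantized
    bool operator==(const CollisionKey& o) const {
        return yaw == o.yaw && pitch == o.pitch &&
               roll == o.roll && speed == o.speed;
    }
};

struct KeyHash {
    std::size_t operator()(const CollisionKey& k) const {
        // Pack the four small fields into one 64-bit value and hash that.
        auto u = [](std::int16_t v) { return std::uint64_t(std::uint16_t(v)); };
        return std::hash<std::uint64_t>{}(
            (u(k.yaw) << 48) | (u(k.pitch) << 32) | (u(k.roll) << 16) | u(k.speed));
    }
};

// Precomputed result: contact point and normal in the first body's frame.
struct Contact { float point[3]; float normal[3]; };

using CollisionTable = std::unordered_map<CollisionKey, Contact, KeyHash>;

// Runtime side: when the bounding spheres are predicted to touch, quantize
// the predicted relative state and look the contact up - no narrow phase.
const Contact* lookupContact(const CollisionTable& table, const CollisionKey& key) {
    auto it = table.find(key);
    return it == table.end() ? nullptr : &it->second;
}
```

The offline stage would fill such a table by simulating the pair at every quantized combination; at runtime, a hit in the table replaces the whole narrow-phase computation.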
Assume that I've thought about the implementation, and have a rough idea of how to do this in a feasible way.
This technique obviously imposes various limitations. The main ones are that the objects must be available beforehand, must not deform in real time, and that it's only feasible to compute the collisions of a few pairs of bodies.
My question is: is collision detection between highly complex meshes still considered a problematic computational bottleneck today? Is it problematic enough that a very limited method, which nevertheless allows very fast extraction of the point of collision, is at the very least interesting?
Do you know whether this has been done before? What are the top-notch methods today for efficiently computing collisions between complex bodies?
Thanks so very much in advance, I'd be grateful for any thoughts on the matter.

Aug 17 2014

The way I see it (and this is really just my personal opinion), artificial intelligence and physics are the two aspects of gaming that have progressed the least over the last few years.

I mean, look at the graphics. We're getting closer and closer to photo-realistic/cinematic graphics in real time. A lot of ground has been covered on that front over the course of the last (console) generation.

Let's take a look at AI. I feel like you could take an NPC from a modern game, swap the model for an old one, plug it into an old game, and virtually nothing would change.

Physics? What was the last game that really revolutionized in-game physics? Half-Life 2, ten years ago. Since then there have been the occasional games that support GPU-accelerated PhysX to make worlds come to life (at quite a cost, since graphics cards are usually at their limits with the visuals already). Other than that? I don't recall any.
Fluid simulation is a pretty big deal though. And so is cloth.

Now to your idea:
The problem I see is that video games are (what audio people call) non-linear. In the physics world this means that everything (dynamic) could collide with everything (dynamic and static) in every possible way at any given time.
Do you have a solution for this that really pays off in the end?

Say your solution results in significant performance improvements, but requires ridiculous amounts of space. -> Not worth it.
Say the non-linearity restricts the use of your technology so that it rarely helps. -> Not worth it.
Say it works in a lot of cases, results in significant improvements and the space requirements are humane too. -> Totally worth it.

If you were able to save a significant number of CPU cycles, they could be used somewhere else.
More sophisticated AI? Sure.
More complex audio systems? (Which is something I'd love to see more of in the future.) No problem.
Why not just increase the number of physics objects?

Bottom line:
Significant performance improvements are always worth it. The CPU cycles you save in one place could be used pretty much anywhere else.

However...
Modern games do not use complex meshes for collision detection due to performance problems. There is a fine line between allowing game developers to use them and solving a problem you created yourself.
The question is how exactly you define "complex meshes". Do you mean small, but highly detailed objects or could this refer to objects that have few details, but are large?
The latter case is interesting, as you could "bake" physics for your entire level (or at least the immobile parts). Considering the size of an entire level, though, we're back at the non-linearity part. Can you calculate and store all this data efficiently? If every object could touch every square centimeter of your map at any angle, that makes for a lot of data.

Again, this is just MY opinion, and maybe you have thought it through already. It's an interesting idea nonetheless. :)

Aug 23 2014

Thanks a lot for your input,
and indeed my goal is to free that computation for use someplace else.
As I said, I lack knowledge in this field, and I did not know, for example, that "complex objects" (whatever that means) are generally still being avoided - that's good news for me.
I liked your comment: "There is a fine line between allowing game developers to use them and solving a problem you created yourself".
It's a good point, and I partly agree. Partly - because some of the problems you create voluntarily are later met involuntarily by others. Will the problem of run-time collision detection between more complex meshes be encountered in the next few years, or even now, in a way where simplifying the meshes is too much of a compromise? Sounds likely enough, wouldn't you agree?
By "complex meshes" I mean either of the two :)
And in either case, a lot of memory is required - no doubt about it. I plan to have that database saved on a server though. This might mean a bottleneck in networking, but again - if we divide the fields to ones that are constantly worked on and improved and ones that don't - networking is clearly in the first category. In any case I don't mean the client to either have to hold this database on disk or cache it in memory.

And now for your first point - does it work in a lot of cases? You obviously have a good point, unfortunately, and what you call "non-linearity" is clearly the reason this method hasn't been applied so far. I'm not sure yet just how non-linear it is. It will naturally become a bit less restricted as disk sizes and network bandwidth improve, but not very significantly in the next few years. It's also restricted in other ways - my bodies must be rigid, and I might have to slightly correct their trajectories so that they match pre-calculated ones. It's clearly not going to revolutionize physics simulation by itself. It MIGHT be a tool in the toolbox, applicable and beneficial under certain conditions - where we're especially interested in precise collisions of a very few rigid, complex meshes (perhaps with more meshes around that collide in standard ways). Does that sound good enough to you?

BTW, I'd be very grateful if you could direct me to some of the currently used algorithms for precise collision detection (on either the CPU or the GPU)!
If I go down this road, I'm going to need 1) an algorithm to apply offline (which doesn't have to be real-time), and 2) a benchmark.

Aug 26 2014

Well, there's the QuickHull algorithm that Valve (and I think Havok) uses. (Valve is building a new physics engine for Source 2, btw.) The Wikipedia article seems to be a bit short though.
Other than that, I remember Havok doing presentations about the future of physics engines at this year's GDC (there is one article about it). Maybe the slides can be found somewhere. You should also ask on gamedev.net.

Also: I don't think that physics over the network will work well enough, due to the latency.
The problem there is that (as the Havok guy said) physics affects everything. Everything from character movement through AI to rendering needs the physics calculations to be completed before it can be done.

(Good) games try to work with a frame time of less than 16 ms. That equals 60+ fps. 16 ms is a damn low latency budget. For reference: pinging my local router already takes 2-4 ms.
You need to send data over the network, wait, get data back, and only then can you start doing the game logic.

Sep 2 2014

The sniper, I agree with everything you said, brother - great posts - just one factual error: the last game to revolutionize physics was MAX PAYNE 2, using the HAVOK engine. It was the first game to use that REAL type of physics for all in-game items (I remember playing for two hours JUST shooting physics-driven objects around the room; it was such a departure from the usual game physics). Half-Life 2 came out a year later.

To the guy who started this thread... I may be missing something here, I'm no genius by a long shot, I barely breathe without having to concentrate hard, but it does seem to me that you would run into a couple of problems. The two biggies being these:

1. FULL DATABASE!! - Say you do manage to work out a way to record the STAGGERING amount of computations needed to build this database. How would you get the game to run through EVERY scenario that COULD happen? You could run the game a billion times, record every run, and still not cover everything. Even thinking on a small scale, a single ROOM will have billions of different outcomes; multiply that by an entire level and it's beyond my comprehension. Perhaps the whole thing is :) and I'm not making sense, but I'll continue nonetheless.

2. SIZE MATTERS!! :) - Even if you do SOMEHOW manage this gothmogian task of recording the database, I have a feeling it will require a substantial amount of space to store. That then brings up the matter of SEARCHING through it, which, depending on the size, may add SO much overhead that it's simply faster to calculate in real time.

Anyway, that's my 2 pence. I'm not sure if I've made myself look an idiot - nothing new if I have :) It's a nice idea, and I hope I'm wrong. But after all is said and done, sniper is correct: the state of AI in games is still the same as 15 years ago, which is a tragedy, considering that with great AI, games would be 1000% more fun to play.

Sep 3 2014

It seems unfeasible to calculate every possible scenario for collision detection. Just think how many ways a single object can be rotated! Even if a database with all these calculations could be constructed, it would likely take longer to find the calculation in the database than to do the calculation itself. Most database searches take O(log n) time; if n (the number of stored scenarios) is sufficiently large, a lookup could take longer than even an O(m^2) collision detection algorithm run directly on meshes with a small element count m. Though I'm not sure of the precise relationship between the number of nodes in a mesh and the number of ways in which it can collide - it may be viable for very complex meshes.

Storing precalculated values for physics could still be useful though. It may not be necessary to store every way in which a mesh can collide with another mesh; instead, one could store some number of equidistant samples whose values can be easily interpolated for fast approximations. Other calculations that are done frequently in physics could also be stored - performing rotations on an object, for example. To avoid network delay, a small pool of these calculations could be kept client-side, representing the most common calculations for that client at the time; if a calculation isn't used frequently enough, it can be dropped from the pool and some other calculation brought in from the server.
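For the one-parameter case, the interpolation I have in mind would look roughly like this (a made-up sketch; it assumes the samples were taken at equidistant angles, which is plausible for convex shapes and much shakier for concave ones):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Precomputed contact points sampled at equidistant angles in [0, 2*pi):
// samples[i] holds the result for angle i * (2*pi / samples.size()).
struct Contact { float point[3]; };

// Approximate the contact for an arbitrary angle by blending the two
// nearest samples (assumes samples is non-empty).
Contact interpolateContact(const std::vector<Contact>& samples, float angle) {
    const float twoPi = 6.2831853f;
    float a = std::fmod(angle, twoPi);
    if (a < 0.0f) a += twoPi;                        // normalize to [0, 2*pi)
    float pos = a / (twoPi / float(samples.size())); // position in sample space
    std::size_t i = std::size_t(pos) % samples.size();
    std::size_t j = (i + 1) % samples.size();        // wrap around the circle
    float t = pos - std::floor(pos);                 // blend factor in [0,1)
    Contact out;
    for (int k = 0; k < 3; ++k)
        out.point[k] = (1.0f - t) * samples[i].point[k] + t * samples[j].point[k];
    return out;
}
```

The same trick generalizes to more parameters (multilinear interpolation), at the cost of exponentially more samples.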

Sep 3 2014

@TheCreator - Try reading all the posts; all you did was echo exactly what I wrote. And secondly, the word "approximately" shouldn't be used when talking about collision detection, considering that good/bad detection can mean the difference between a smooth, flowing feel and a buggy POS.

Sep 3 2014

1) The sniper - a very valid point about latency. My ping to Google, for example, is 100 ms round trip. Such a server would only be helpful for collisions that can be anticipated 5 frames ahead of time, which may or may not be very common, depending on the game and situation. If most objects are under predictable forces (i.e. gravity), I thought I'd check the trajectories of every pair of objects (n choose 2) for predicted collisions between their bounding spheres every once in a while, and ask the server for detailed collision information about the first few. Once two objects have collided and changed their trajectories, I only have to check their new trajectories for collisions against the rest (2n checks) and again ask the server for info about the oncoming ones.
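The bounding-sphere prediction itself has a standard closed-form answer - something like this sketch (plain C++, names my own): treat both centers as moving linearly and solve for the time at which their distance equals the sum of the radii.

```cpp
#include <cmath>
#include <optional>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Predict the time of impact of two spheres moving at constant velocity:
// solve |(p1 + v1*t) - (p2 + v2*t)| = r1 + r2, a quadratic in t. Returns
// the earliest non-negative root, or nothing if the spheres never touch.
std::optional<float> sphereSweep(Vec3 p1, Vec3 v1, float r1,
                                 Vec3 p2, Vec3 v2, float r2) {
    Vec3 dp = p1 - p2;                       // relative position
    Vec3 dv = v1 - v2;                       // relative velocity
    float r = r1 + r2;
    float c = dp.dot(dp) - r * r;
    if (c <= 0.0f) return 0.0f;              // already overlapping
    float a = dv.dot(dv);
    if (a == 0.0f) return std::nullopt;      // no relative motion
    float b = 2.0f * dp.dot(dv);
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return std::nullopt;    // paths never close enough
    float t = (-b - std::sqrt(disc)) / (2.0f * a);
    if (t < 0.0f) return std::nullopt;       // closest approach was in the past
    return t;
}
```

Running this over the n-choose-2 pairs gives the list of predicted collisions; any pair with a small positive time of impact becomes a server query.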
A local server, on the other hand, is much more versatile - I don't even see why it would be a problem to use it for collisions in the current or next frame. Obviously, while I wait for the server to look up its table and return an answer to a collision query, I don't have to be idle - the game goes on with AI, rendering, timed updates, etc., and plugs the collision information in when it returns. Over the next few years, remote servers outside the LAN will become more relevant to this kind of run-time assistance. I'll be glad to hear your thoughts on this.
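In code, that "don't be idle while waiting" part could be as simple as polling a future (a toy sketch; queryCollisionServer is a hypothetical stand-in for the real lookup call):

```cpp
#include <chrono>
#include <future>
#include <thread>

struct Contact { float point[3]; };

// Stand-in for the server round trip: in reality this would be a network
// (or IPC) call that looks the quantized collision key up in the table.
Contact queryCollisionServer() {
    std::this_thread::sleep_for(std::chrono::milliseconds(3)); // fake latency
    return Contact{{0.0f, 1.0f, 0.0f}};
}

int main() {
    // Fire the lookup off asynchronously; the frame loop keeps running.
    auto pending = std::async(std::launch::async, queryCollisionServer);
    while (pending.wait_for(std::chrono::seconds(0)) != std::future_status::ready) {
        // ... this frame's AI, rendering, timed updates go here ...
    }
    Contact c = pending.get();  // plug the contact in once it arrives
    (void)c;                    // (resolve the collision with c)
}
```

A real engine would keep a whole set of pending queries rather than one, but the principle is the same.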

2) Thanks for the reference to QuickHull - that about puts things in perspective regarding current approaches. Do you have anything for non-convex objects, or are these rarely dealt with for computational reasons?

3) Now back to the size of the DB. Obviously I don't mean to save collisions for what you call "every" combination of the collision parameters. What's "every"? Every possible floating-point value stored in 4 bytes? No, I mean to calculate it for discrete values, and very few of them actually - more like 20 possible values per angle, for example. That doesn't mean I can interpolate the collision data, however, unless perhaps we're dealing with convex objects, where such an interpolation might look okay (correct me if you have an idea for somehow using nearest-neighbour collision information for any given collision).
For non-convex objects, the client will have to make sure that the bounding spheres reach collision in one of the precalculated combinations. If the collision is predicted well ahead, that's easy to do - adjust the velocities and angular velocities ahead of collision time.
If the bodies are non-convex, and there is also not enough time to adjust their trajectories in a way that would go unnoticed, well, I'd have to either interpolate (which might or might not look imprecise) or resolve the collision locally.
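The snapping step I mean would look something like this (just a sketch; the bin count of 20 is the example figure from above):

```cpp
#include <cmath>

// Snap a continuous angle (radians) to the nearest of N precalculated bins.
// The snapped value is both the database key and the value the client steers
// the body toward before impact, so the lookup and the simulation agree.
float snapAngle(float angle, int bins) {
    const float twoPi = 6.2831853f;
    float a = std::fmod(angle, twoPi);
    if (a < 0.0f) a += twoPi;            // normalize to [0, 2*pi)
    float step = twoPi / float(bins);
    float snapped = std::round(a / step) * step;
    return std::fmod(snapped, twoPi);    // bin N wraps back to bin 0
}
```

With 20 bins, the snapped angle is at most 9 degrees away from the true approach angle, so the question is exactly whether an adjustment of that size goes unnoticed.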

4) Regarding the search time in the DB. The creator, I'm curious - why do you say databases search in log(n) time? I was planning to use a hashmap, whose average lookup time is constant. I'll be glad to hear your reasoning.
But even log(n) is practically constant for any thinkable DB size: even for a DB with 10^10 entries (which is way, way more than I need), log_2(n) is thirty-something. I'd expect more trouble with caching (paging chunks of the DB in and out of the server's memory) than with the on-paper number of steps to my desired entry. Hopefully, though, I can squeeze the DB for two objects into a few gigabytes, so loading it into memory when the server comes up and having it stay (mostly) there might be fine.

Many thanks for your ongoing input, it helps a LOT.

Sep 3 2014

Your idea contradicts itself: you say all possible collisions will be available. You then say that when a bounding area is hit, the computer calculates exactly HOW the objects will collide, then loads up a table of predicted results. To do that, the computer must figure out the result of all the variables in the collision, and then it plays back the data. Since it already has to know the result in order to pick the right table entry, it's figuring out the result twice: once in real time to know which collision to play, and once ahead of time to produce the pre-calculated animation for all the objects.

That has been done - it's been done for decades. Those are just pre-rendered animations. E.g. hit a guy in the arm, play animation arm_ouch; hit him in the head, play head_ouch. Except you're going to need many more animations, so instead of having 25 different pain animations, you want thousands.

With almost any engine since Quake that can be done: you just need to pre-render the animations and have the system auto-choose which ones to use. Many 3D modeling programs (some going back to the late '90s at least) could even be scripted to pre-render all possible animations.

Because of the time it takes to do all this and check it, that work has been handed over to CPU/GPU processing. It's a lot faster and uses a lot less memory than doing it all by hand.

Sep 3 2014

@rugrat - your idea sounds like a flight of fancy given the scope of modern games today. Having said that, every great idea or invention in the history of man was also (at one time or another) called a flight of fancy, so don't let mere conjecture put you off in any way. You seem like a smart guy; I sincerely hope you get something resembling a demo to show us all the thesis in motion. Either way, please keep us all updated on your progress. Cheers.

Sep 3 2014

I quote log(n) time for databases because I think it is the most common, and it is the running time for most implementations of SQL. However, hash maps can produce constant-time searches (usually at the cost of higher memory usage, though not in the best case). Either way, search time is only a real problem if the database is absurdly large (i.e. on the order of the number of possible values of a 16-byte quaternion). I also agree that the margin of error for interpolation would be larger for non-convex objects, though I don't know how big that margin is - it would vary based on the number of samples. It would be interesting to see how many samples are needed to produce a margin of error below human perception.

I'm curious as to what keys you would use to search the database. Something like (position, rotation, model) for each object, or something that can be used to predict collisions, like (velocity, position, angular velocity, angle, model)?
