Originally Posted By: jumpman
Hi everyone

I'm looking for any advice, pseudocode, theory, or macro/micro ideas when it comes to building an AI. I've made AI before in my game Swordlord, but the AI I am working on now is much more complex. In Swordlord, the AI really only had two states: randomly moving around and attacking. But in my new game, the AI will be doing this as well as pathfinding, attacking, making decisions, animating, etc.

I'm building an AI that can pathfind around the level using A*, and when it gets into its attack range, it will attack its target. This sounds easy, but it really isn't, especially because I have so many things going on in an AI. If the AI sees an enemy/player, it can pathfind to the player. But what if the player is not there at the end of the path? Then the AI has to do something else, a search or whatever. What if the AI sees a player that's not close to any nodes? Then the AI must fall back to manual movement outside of the pathfinding. I can already see the branching of the AI being very big in scope.

When you make your AI, do you do everything in one loop/while? Currently I have a separate pathing bot that just handles pathfinding. Do I integrate decision making into this bot? Or do I attach a secondary entity to this bot that handles decision making, and let the pathfinding bot just handle movement?

Where should I fit in animation, animation blending and attacking? Should I have pathfinding/animation/decisions/state machines in one entity?

I like the idea of having two entities, one handling pathfinding while the other handles animation and AI. But this also means they have to constantly communicate with each other. I've done this on a smaller scale, but I can see that if I want to expand the AI, I will be doing a lot of hacks. On the other hand, doing everything in one loop, or working in state machines, is easy, but that may require a lot of duplicate code, e.g. state_1 = pathfinding while attacking, state_2 = pathfinding to retreat, etc.



First of all, you'll want to split animation off from the AI.
Animation should either react to movement or, in cases where animation drives movement (via root motion or the like), be driven by the AI, but it should never be entangled with it.
So your animation systems should be completely separate and accept inputs that control the animation states, blending, etc. of your entities. You can then drive those systems with movement controlled by the AI, or drive them from the AI directly.

For the AI itself, state machines get messy fast with more complex AIs, so you'll want to look into alternatives. Behaviour trees are popular in AAA at the moment:

http://www.gamasutra.com/blogs/ChrisSimpson/20140717/221339/Behavior_trees_for_AI_How_they_work.php

Before behaviour trees, goal oriented planners were the thing:

http://alumni.media.mit.edu/~jorkin/gdc2006_orkin_jeff_fear.pdf

Now these are abstract concepts, but there are numerous ways you could implement them in lite-c. Behaviour trees are often done in a very OOP-like fashion, with nodes corresponding to virtual functions being called on the AI agents and so on. You could implement that in lite-c using dispatch tables, I think (I'm not actually sure whether function pointers work in lite-c).

In general though for AI, you want to separate the concerns as much as possible. So splitting pathfinding, animation and decision making is a good line of thinking.