I really need to get some gameplay going. I don’t particularly want to spend time on temporary enemy logic, so I will just go ahead and write the “full” AI from the start. I expect different enemies to have different logic, so I will keep my AI modular and define key stuff in assets. I’ll use a top-down design for this, starting with the broadest logic and descending into details.
First order of business, a brain definition:
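Roughly sketched, it might look like this (the names here, like `BrainDef` and `initialState`, are my illustration, not final):

```cpp
#include <string>

// Illustrative sketch of a brain definition asset: it names the brain and
// says which state the state machine starts in.
struct BrainDef {
    std::string name;         // asset id, e.g. "brain_grunt"
    std::string initialState; // starting state, e.g. "idle"
};
```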
Which is referenced by the enemy mob (definition):
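Again as a rough sketch, the mob definition might just carry the brain's asset id (a `MobDef` with a `brainDef` field is my guess at the shape):

```cpp
#include <string>

// Illustrative mob definition; it references a brain definition by id, so
// different enemy types can share or swap brains.
struct MobDef {
    std::string name;     // e.g. "mob_grunt"
    std::string brainDef; // e.g. "brain_grunt"
    int maxHealth = 10;   // plus whatever other stats a mob needs
};
```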
An enemy can then have a brain class, which uses the definition to decide what to do.
There are many approaches I could take, but I think I will just make a simple state machine. You could think of it as a primitive behaviour tree, though I will likely not bother with conditional logic. Most enemies would die in seconds, so what’s the point. Maybe if I do bosses or something later. Anyway, at its simplest, the brain is just a bunch of concrete states:
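For illustration, the states could be as simple as an enum (the exact set here is assumed):

```cpp
// Illustrative set of concrete states; plain values, no logic attached.
enum class AiState {
    Idle,       // stand around or wander
    Seeking,    // move towards the last known player position
    Aggressive, // engage the player
};
```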
And I can add transitions to these states with triggers:
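A sketch of how such a transition table might look, reusing the states above (the trigger names are assumptions):

```cpp
#include <vector>

enum class AiState { Idle, Seeking, Aggressive }; // as sketched above

// Illustrative triggers the brain can test.
enum class AiTrigger { SeesPlayer, LostPlayer, SearchTimedOut };

// A transition: when `trigger` fires while in `from`, switch to `to`.
struct AiTransition {
    AiState from;
    AiTrigger trigger;
    AiState to;
};

const std::vector<AiTransition> kTransitions = {
    { AiState::Idle,       AiTrigger::SeesPlayer,     AiState::Aggressive },
    { AiState::Aggressive, AiTrigger::LostPlayer,     AiState::Seeking    },
    { AiState::Seeking,    AiTrigger::SeesPlayer,     AiState::Aggressive },
    { AiState::Seeking,    AiTrigger::SearchTimedOut, AiState::Idle       },
};
```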
I am not going to bother with any sort of fancy node graph editor for this. It’s just not worth it for me to spend that long on an editor tool. Even if the above logic did have two errors in it, they took me a whole two minutes to debug and fix.
Now the question is how do I “encode” each state. I want modularity and reusability, but at the same time each state is very implementation-specific. As I always say: when in doubt, make an asset:
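Sketched minimally, a lobe asset might be no more than this (the `LobeDef` shape is my assumption):

```cpp
#include <string>

// Illustrative "lobe" asset: the reusable behaviour behind one state.
// It describes what the enemy should do, never when to do it.
struct LobeDef {
    std::string name;   // e.g. "lobe_idle_wander"
    std::string action; // e.g. "wander", "seek", "fire_weapon"
};
```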
I don’t actually say that. The tortured “lobe” metaphor here is the part of the brain responsible for a certain state. The idea here is that different mobs/brains could act differently, but the actual states share the same functionality. So I can just link the reusable lobe to the state:
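As a sketch, the link could be a simple state-to-lobe mapping inside the brain definition (again, names are illustrative):

```cpp
#include <map>
#include <string>

enum class AiState { Idle, Seeking, Aggressive }; // as before

// Illustrative mapping: each state is backed by a reusable lobe asset, so
// different brains can mix and match the same behaviours.
const std::map<AiState, std::string> kStateLobes = {
    { AiState::Idle,       "lobe_idle_wander" },
    { AiState::Seeking,    "lobe_seek_target" },
    { AiState::Aggressive, "lobe_fire_weapon" },
};
```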
And the lobe itself will provide the information of how the enemy should behave.
At a high level, I don’t care about exact values, like timers, or what it means to find or lose an “enemy”. (In fact, I am only using enums instead of strings for my own convenience. Technically, it’s just “State #1”, “State #2”, etc.) Enemies themselves will provide the details when the brain queries them. In other words, the brain will use its owner’s “sensors”, which may range from simple checks to complex algorithms. The enemies can have their own internal values, thresholds, durations, etc. as appropriate for that enemy type. For example, the timeout timers:
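A sketch of such a mob-side sensor, assuming a per-type timeout value baked into the mob:

```cpp
// Illustrative mob-side "sensor": the brain asks, the mob answers.
// The threshold lives in the mob (per enemy type), not in the brain.
struct Mob {
    float seekTimer = 0.0f;   // time spent seeking so far
    float seekTimeout = 5.0f; // per-type threshold, set from the mob asset

    void update(float dt) { seekTimer += dt; }

    // Called by the brain; the mob just reports, it decides nothing.
    bool hasSeekTimedOut() const { return seekTimer >= seekTimeout; }
};
```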
So when the brain needs to check if it should switch from seeking back to idle, it will ask the mob whether its timer has run out. The key idea here is to make sure the enemies themselves have no “if” statements in their logic. That is, they make no decisions; the brain handles all of the state transitions.
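Put together, the brain’s update might look something like this sketch, where the only decision-making “if” lives in the brain:

```cpp
#include <vector>

enum class AiState { Idle, Seeking, Aggressive };
enum class AiTrigger { SeesPlayer, LostPlayer, SearchTimedOut };
struct AiTransition { AiState from; AiTrigger trigger; AiState to; };

// Illustrative sensor bundle the mob exposes; how these get filled in is
// up to the mob (simple checks or complex algorithms).
struct Sensors {
    bool seesPlayer = false;
    bool seekTimedOut = false;

    bool check(AiTrigger t) const {
        switch (t) {
            case AiTrigger::SeesPlayer:     return seesPlayer;
            case AiTrigger::LostPlayer:     return !seesPlayer;
            case AiTrigger::SearchTimedOut: return seekTimedOut;
        }
        return false;
    }
};

// The brain walks its transition table and fires the first transition whose
// trigger the sensors confirm; the mob itself contains no decisions.
struct Brain {
    AiState state = AiState::Idle;
    std::vector<AiTransition> transitions;

    void update(const Sensors& sensors) {
        for (const AiTransition& t : transitions) {
            if (t.from == state && sensors.check(t.trigger)) {
                state = t.to;
                break;
            }
        }
    }
};
```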
Anyway, here are AIs switching states when they can or cannot “see” the player:
Now to add some (Bresenham’s) line of sight to enemies:
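Bresenham itself is standard; here is a sketch of a tile-based sight check built on it (the `blocksSight` callback into the map data is assumed):

```cpp
#include <cstdlib>
#include <functional>

// Walk the Bresenham line from (x0,y0) to (x1,y1) and fail at the first
// tile that blocks vision. The starting tile (the enemy itself) is skipped,
// and the target tile is never tested, so a mob can always "see" its target.
bool hasLineOfSight(int x0, int y0, int x1, int y1,
                    const std::function<bool(int, int)>& blocksSight) {
    int dx = std::abs(x1 - x0), sx = (x0 < x1) ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = (y0 < y1) ? 1 : -1;
    int err = dx + dy;
    int x = x0, y = y0;
    while (true) {
        if (x == x1 && y == y1) return true; // reached the target tile
        if ((x != x0 || y != y0) && blocksSight(x, y)) return false;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x += sx; }
        if (e2 <= dx) { err += dx; y += sy; }
    }
}
```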
With this I have enough AI decision making, and it is time to make my enemies attack the player properly. I will start easy and simply make them shoot at the player (since this doesn’t involve any direct movement). First, I’ll give enemies weapons:
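A sketch of what a weapon might carry (all fields assumed):

```cpp
#include <string>

// Illustrative weapon definition attached to an enemy mob.
struct WeaponDef {
    std::string projectile = "bolt"; // what gets spawned when firing
    float damage = 1.0f;
    float cooldown = 0.5f;           // seconds between shots
};
```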
Then have the aggressive lobe specify that the action enemies should take is to fire:
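Which, sticking with the earlier lobe sketch, could be as small as this:

```cpp
#include <string>

struct LobeDef { std::string name; std::string action; }; // as before

// Illustrative: the aggressive lobe simply names the action to perform.
const LobeDef kAggressiveLobe = { "lobe_aggressive", "fire_weapon" };
```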
And then have the enemies fire their weapon towards the player (same technical logic as the player already uses):
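For illustration, the firing itself is just aiming a direction vector at the player and spawning a projectile (the `spawnProjectile` engine call is assumed):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Illustrative firing routine: normalize the direction to the target and
// spawn a projectile along it, same as the player's own firing code.
void fireWeaponAt(const Vec2& from, const Vec2& target) {
    Vec2 dir = { target.x - from.x, target.y - from.y };
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y);
    if (len <= 0.0f) return; // standing on the target, nothing to aim at
    dir.x /= len;
    dir.y /= len;
    // spawnProjectile(from, dir); // assumed engine call
}
```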
And there we go, pretty straightforward. Of course, there is no “higher” AI logic here, and the “AI” is basically a ruleset. But I am not simulating deep AI here; I am writing a video game, and this is good enough for the framework. I can already add more states, triggers, actions, etc.
So now I can re-enable enemy path-finding to the player, but only when the current state wants it. For that, I can tell the seeking state (or rather, its lobe) to request that the enemy go to the last seen enemy location (with the caveat that the enemy records this location properly):
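Sketched out, the seeking lobe just forwards a path request, assuming the mob records the last seen position whenever its sight sensor succeeds:

```cpp
struct Vec2 { float x, y; };

// Illustrative mob state: updated by the mob whenever it sees the player.
struct Mob {
    Vec2 lastSeenPlayerPos{};
    bool hasLastSeenPos = false;

    void requestPathTo(const Vec2& /*pos*/) { /* assumed path-finding call */ }
};

// Illustrative seeking behaviour: the lobe only issues a path request; any
// decision about entering or leaving this state stays with the brain.
void seekLobeUpdate(Mob& mob) {
    if (mob.hasLastSeenPos) {
        mob.requestPathTo(mob.lastSeenPlayerPos);
    }
}
```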
And the state exit now happens when the trail is lost, which currently means not finding the player where they previously were:
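As a sketch, the corresponding trigger check might be:

```cpp
// Illustrative "trail lost" check: the trail counts as lost once the enemy
// has arrived at the last seen position and still cannot see the player.
bool hasLostTrail(bool arrivedAtLastSeenPos, bool seesPlayer) {
    return arrivedAtLastSeenPos && !seesPlayer;
}
```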
The final tweak is that my “line of sight” is actually horrible, and I need to do the same algorithm as my path-finding’s path shortening to determine where viable lines of sight actually are. Otherwise, I end up with enemies shooting walls, thinking they can see the player (green – can see, red – cannot):
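One way to sketch that stricter check, as a dense-sampling stand-in for the actual path-shortening walk (which I am assuming visits every cell the segment touches):

```cpp
#include <algorithm>
#include <cmath>
#include <functional>

// Illustrative stricter visibility test: instead of Bresenham's thin line,
// sample the segment densely enough that every touched cell gets checked,
// so wall corners properly block the ray. A proper supercover grid walk
// would be exact; this is the simple stand-in.
bool hasClearLine(float x0, float y0, float x1, float y1,
                  const std::function<bool(int, int)>& blocksSight) {
    float dx = x1 - x0, dy = y1 - y0;
    int steps =
        2 * static_cast<int>(std::ceil(std::max(std::abs(dx), std::abs(dy)))) + 1;
    for (int i = 1; i < steps; ++i) { // sample strictly between the endpoints
        float t = static_cast<float>(i) / static_cast<float>(steps);
        int cx = static_cast<int>(std::floor(x0 + dx * t));
        int cy = static_cast<int>(std::floor(y0 + dy * t));
        if (blocksSight(cx, cy)) return false;
    }
    return true;
}
```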
So with a better line of sight, I can have enemies seek the player properly (instead of walking a little and then shooting a wall):
This, however, is not enough for the AI, because enemies happily shoot each other:
The problem is that “seeing” and “shooting” are different things that I am treating as the same. So I need to make enemies aware of the difference by having separate vision and firing parameters for the line of sight checking (yellow – can see, but not shoot through):
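In sketch form, the split could be a flag on the line check, with other mobs blocking shots but not sight:

```cpp
// Illustrative split: vision and firing lines use different blockers.
// A wall blocks both; another mob blocks a shot but not sight, so
// "can see" no longer implies "can shoot".
enum class LineKind { Vision, Firing };

bool blocksLine(LineKind kind, bool isWall, bool isOtherMob) {
    switch (kind) {
        case LineKind::Vision: return isWall;
        case LineKind::Firing: return isWall || isOtherMob;
    }
    return false;
}
```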
That’s easy though. Getting the AIs to obey this is trickier, and I’ll need more states and actions here. I need to split my aggressive state into sighting (can see the player, but cannot shoot) and attacking (can both see and shoot the player) states:
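The updated state set and transition table might then look like this sketch (trigger names and exact wiring assumed, as before):

```cpp
#include <vector>

enum class AiState { Idle, Seeking, Sighting, Attacking };
enum class AiTrigger { SeesPlayer, CanShootPlayer, CannotShootPlayer,
                       LostPlayer, SearchTimedOut };
struct AiTransition { AiState from; AiTrigger trigger; AiState to; };

// Illustrative wiring after the split: sighting = can see but not shoot,
// attacking = can see and shoot.
const std::vector<AiTransition> kTransitions = {
    { AiState::Idle,      AiTrigger::SeesPlayer,        AiState::Sighting  },
    { AiState::Sighting,  AiTrigger::CanShootPlayer,    AiState::Attacking },
    { AiState::Attacking, AiTrigger::CannotShootPlayer, AiState::Sighting  },
    { AiState::Sighting,  AiTrigger::LostPlayer,        AiState::Seeking   },
    { AiState::Attacking, AiTrigger::LostPlayer,        AiState::Seeking   },
    { AiState::Seeking,   AiTrigger::SeesPlayer,        AiState::Sighting  },
    { AiState::Seeking,   AiTrigger::SearchTimedOut,    AiState::Idle      },
};
```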
Here’s the result (the far enemy keeps losing sight, so they go to the player’s last seen coordinates and sight him again soon after):
To reflect a little, I really like this approach, because I only have to worry about adding individual bits to the logic, like a trigger check here or a state change there. I never have to write or modify large parts of the code without being able to test in-between, so I can catch and fix any issues quickly. And it makes debugging standardized and simpler. For example, enemies that were seeking the player could arrive at where they last saw the player and still not be able to shoot them. So they would just sit in the sighting state forever. The fix was to use a trigger I already had to make them seek the player instead:
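Assuming the reused trigger is the “cannot shoot” check, the fix is one extra row in the transition table sketched earlier:

```cpp
enum class AiState { Idle, Seeking, Sighting, Attacking };    // as before
enum class AiTrigger { SeesPlayer, CanShootPlayer,
                       CannotShootPlayer, LostPlayer,
                       SearchTimedOut };                      // as before
struct AiTransition { AiState from; AiTrigger trigger; AiState to; };

// Illustrative fix: a sighting enemy that still cannot shoot drops back
// into seeking, which re-records the player's position and paths to it.
const AiTransition kSightingFallback = {
    AiState::Sighting, AiTrigger::CannotShootPlayer, AiState::Seeking
};
```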
Careful examination would reveal that the seeking state immediately sees the player and goes back into the sighting state. But this time, the seeking recorded a new player location, so the AI ends up pathing towards the player. This is not that elegant if you consider all the redundant checks and “stupid” in-between decisions. But it is elegant in the sense that it resolves all errors by itself, and in the end, only the final decision matters. A behaviour tree may have produced the “final” result in one step, while my state machine may take two or three.
Anyhow, this concludes the big framework of the AI. I might work further on the individual lobes and externalize a bunch of common properties. But, for now, my AIs are already quite vicious and relentless. In fact, my next efforts will likely go into dumbing them down.