Now that I have a bunch of enemies being spawned in the dungeon, I need to make them smarter. Technically, I need to extend their capabilities, but, more importantly, make their behaviour fun-er. I want to have high standards for my AIs, but at the end of the day they are cannon fodder. They should be dumb and easily killed. Their job is to appear to be doing something and making some sort of decision. Mostly, they need to respond to the player, because the game is really about the player.
Currently, a single enemy behaves poorly, and a bunch of them together are just embarrassing (besides spiders holding guns, which is impressive):
They aren’t even shooting here, just standing stuck in path-finding and state switches. For reference, here is a single AI doing its active attacking thing (forever):
One of my problems is enemies being able to see the player but not shoot them. This is annoyingly hard to implement for AIs as I need line of sight checks everywhere, including states for moving around, algorithms to find a spot to shoot from, timings for everything, etc. In short, I’ll sacrifice my bullet “realism” and have the projectiles not hit their own faction:
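The filter itself is tiny. A minimal sketch of the idea in Python stand-in code (the `Mob`/`Projectile` classes and `on_hit` hook are hypothetical, not the game's actual API):

```python
class Mob:
    """Minimal stand-in for anything that can be hit."""
    def __init__(self, faction, health=100):
        self.faction = faction
        self.health = health

class Projectile:
    def __init__(self, shooter, damage=10):
        # The projectile inherits the shooter's faction at spawn time.
        self.faction = shooter.faction
        self.damage = damage

    def on_hit(self, target):
        # Friendly-fire filter: same-faction targets are ignored entirely,
        # so the projectile just flies through allies.
        if target.faction == self.faction:
            return False  # no hit registered, projectile keeps going
        target.health -= self.damage
        return True
```

With this one check, the AI never needs to plan around blocking allies, because an ally can never actually block a shot.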
This makes the whole AI vision logic so much easier. (Maybe at some point I will re-add some behaviour to avoid firing through other mobs, but that will be a “nice to have” feature, not a core restriction.)
I was going to continue work on the mob brains, but I couldn’t quite figure out how to do it. I broke down my states into smaller ones with more switches and fallbacks, as that seems to be less convoluted than one state that does a lot of things at once. For example, the “sighting” state just decides how to react to seeing a player:
From there it could go to the actual states that deal with the specific behaviour. But the problem is adding random stuff to this or somehow switching between the states. The brain FSM has the unfortunate consequence of having its concrete states and switches too rigid and pre-determined. I really need to do more random enemy behaviours and I realize I cannot get away with just the brain FSM I have. So my current idea is to use a very simple belief-desire-intention implementation.
My AIs believe that certain things happen and this furthers their desires to do certain actions — that is, choose an intention. (Technically, my “beliefs” are actually “events” in BDI terms. And my intentions pretty much match my desires.) So instead of my current brain logic, I am making an AI framework with (1) belief sensor, (2) desire selector, (3) intention selector and (4) intention actuator. I have properties for each of the sub-systems and enemies can reference the ones they want:
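As a rough sketch, the whole thing can be a single per-frame tick that runs the four sub-systems in order (all class and parameter names here are illustrative, not the real framework):

```python
class Brain:
    """One tick of the BDI pipeline: sense -> desire -> intention -> act.
    The four sub-systems are plugged in per enemy, mirroring the
    property-per-sub-system setup where each mob references the ones
    it wants."""

    def __init__(self, sensor, desire_selector, intention_selector, actuator):
        self.sensor = sensor
        self.desire_selector = desire_selector
        self.intention_selector = intention_selector
        self.actuator = actuator

    def tick(self, world, mob, dt):
        beliefs = self.sensor(world, mob)             # (1) belief sensor
        desire = self.desire_selector(beliefs, dt)    # (2) desire selector
        intention = self.intention_selector(desire)   # (3) intention selector
        self.actuator(intention, world, mob, dt)      # (4) intention actuator
        return intention
```

A nice side effect of this shape is that each stage can be stubbed out independently, which is what makes testing a single sub-system in isolation possible.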
Let’s start with the (1) belief sensor. This processes external signals for each potential desire. My asset should hint at what I’m trying to do here:
Of course, I can’t really select anything reasonably without some sort of weights. So I define values for each belief as to “how much” they affect the desire. Then for each desire, I keep a score of 0-100. I can define these in my asset:
Beliefs can be continuous (something that happens all the time, like being able to see the player) or discrete (something that happens once, like the weapon running out of bullets). There is a “default-always” belief, which is processed if no other belief occurs. And in principle the values work:
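A minimal sketch of how such weighted, clamped desire scores might accumulate (belief names, weights, and the `DesireScores` class are all made-up examples, standing in for the asset-defined values):

```python
def clamp(v, lo=0.0, hi=100.0):
    return max(lo, min(hi, v))

class DesireScores:
    """Keeps a 0-100 score per desire, driven by weighted beliefs.
    Continuous beliefs add weight * dt every tick; discrete beliefs
    add their weight once, on the tick they occur."""

    def __init__(self, weights):
        # weights: belief name -> {desire name: weight}
        self.weights = weights
        self.scores = {}

    def apply(self, belief, dt=1.0):
        for desire, weight in self.weights.get(belief, {}).items():
            self.scores[desire] = clamp(self.scores.get(desire, 0.0) + weight * dt)

    def tick(self, active_beliefs, dt):
        # The "default-always" belief fires only when nothing else did.
        for belief in (active_beliefs or ["default-always"]):
            self.apply(belief, dt)
```

Usage-wise, the sensor would call `tick` every frame with whatever beliefs fired, and `apply` directly for one-shot events.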
By the time I got to the AIs in the first hub, they were all really desiring to move around. (I also love the fact that I can test a single sub-system without having written anything for the others.) Before I can make the enemy act on a desire, I need to choose which one. This will be slightly more complex than just choosing the highest score, so I am making a separate (2) desire selector. But for now I can go with the highest values and, again, in principle the selection works:
Next I can make the AI convert the most-wanted desire to the intention. For now my desires and intentions are very simple, and they basically correspond 1:1. The only caveat is that there is no “idle” desire and so no default desire that can be performed. So I need to have a separate (3) intention selector that knows which desires require which intentions:
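The mapping itself can be little more than a lookup table with an Idle fallback; a sketch with hypothetical desire and intention names:

```python
from enum import Enum, auto

class Intention(Enum):
    IDLE = auto()
    FIRE_AT_PLAYER = auto()
    MOVE_AROUND = auto()

# Illustrative 1:1 mapping; anything unmapped (including "no desire",
# i.e. None) falls back to IDLE, since there is no "idle" desire.
DESIRE_TO_INTENTION = {
    "attack": Intention.FIRE_AT_PLAYER,
    "move_around": Intention.MOVE_AROUND,
}

def select_intention(desire):
    return DESIRE_TO_INTENTION.get(desire, Intention.IDLE)
```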
And this also works, in principle:
And finally, I can make the enemy do what the current intention is. Firing at the player is easy, just shoot — all AIs are already doing this constantly. Moving around is a bit more complex. First, I have to choose a position to go to, then find a path to it and then follow the path. And in the end, this works in practice now (sort of):
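The move intention can be sketched as exactly those three steps, with the destination picker and path-finder left as stand-ins for the game's own systems (nothing here is the actual implementation):

```python
class MoveIntentionActuator:
    """Executes the 'move around' intention in three steps:
    pick a destination, ask for a path to it, then follow the path
    node by node."""

    def __init__(self, pick_destination, find_path):
        self.pick_destination = pick_destination  # game's position chooser
        self.find_path = find_path                # game's path-finder
        self.path = []

    def act(self, mob):
        if not self.path:
            target = self.pick_destination(mob)
            self.path = self.find_path(mob.position, target)
        if self.path:
            # Teleport-step stand-in for real steering along the path.
            mob.position = self.path.pop(0)
        return mob.position
```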
The enemy is spazzing out because all my values are bogus approximations and I have no thresholds or margins to anything. So now I can start tweaking the values, experimenting and adding new stuff. First thing is to add some thresholds:
Desires can only be selected if they have at least some minimum score, which avoids bad initial selections from a score of 0. This also means there can be no desire at all, in which case the intention defaults to Idle. And the current desire can only be replaced if the new one beats it by a sufficient margin, which avoids rapid switching between desires.
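Both thresholds fit in a small selection function; a sketch with placeholder values for the minimum score and the switch margin:

```python
MIN_SCORE = 25.0       # below this, a desire is never picked -> Idle
SWITCH_MARGIN = 15.0   # a challenger must beat the current desire by this

def select_desire(scores, current=None):
    """Pick the highest-scoring desire, but only if it clears MIN_SCORE,
    and only dethrone the current desire if the challenger wins by
    SWITCH_MARGIN. Returns None for 'no desire'."""
    if not scores:
        return None
    best = max(scores, key=scores.get)
    if scores[best] < MIN_SCORE:
        return None
    if current is not None and best != current:
        if scores[best] - scores.get(current, 0.0) < SWITCH_MARGIN:
            return current  # not convincing enough, keep doing what we do
    return best
```

The margin check is basically hysteresis: two desires hovering around the same score no longer cause the AI to flip-flop every frame.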
I also need to remove (for now) any triggers/beliefs that are constantly increasing any score when the player isn’t around:
And with some value tweaks, this actually works much better than my brain did:
Their movement is pretty erratic though. I already added some basic validation rules to select a random free nearby location in sight of the player. But I need better logic and more parameters I can tweak live, for example:
This is one of those features that doesn’t have a real design. I just keep adding rules and checks, but at the end of the day I’m just fiddling with numbers until it looks good. In fact, everything interacts with everything, so any value can have a cascading effect. For example, change the enemy move speed or the enemy weapon fire rate and all the values go weird. Thankfully, it’s quick to iterate.
One thing I need to avoid is intentions expiring quickly even when AIs really “want” to switch:
In other words, I don’t want the enemies to hop between actions too quickly once they decide to do something. I am also adding an overall randomness value that applies to all values and timings that de-synchronizes any AI actions that might coincide between enemies:
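A sketch of the commitment-plus-jitter idea, with illustrative values: once an intention is chosen, hold it for a minimum time, and randomize that time per mob so enemies don't act in lockstep.

```python
import random

class IntentionCommitment:
    """Once an intention starts, hold it for a minimum time so the AI
    doesn't hop between actions. The hold time gets a random jitter,
    which also de-synchronizes enemies that would otherwise coincide."""

    def __init__(self, min_hold=2.0, jitter=0.3, rng=random.random):
        self.min_hold = min_hold
        self.jitter = jitter  # +/- fraction applied to the timing
        self.rng = rng
        self.current = None
        self.elapsed = 0.0
        self.hold = 0.0

    def _jittered(self, value):
        # Scale by a random factor in [1 - jitter, 1 + jitter].
        return value * (1.0 + self.jitter * (2.0 * self.rng() - 1.0))

    def update(self, wanted, dt):
        self.elapsed += dt
        if self.current is None or self.elapsed >= self.hold:
            if wanted != self.current:
                self.current = wanted
                self.elapsed = 0.0
                self.hold = self._jittered(self.min_hold)
        return self.current
```

The same `_jittered` helper could wrap any other timing or value in the framework to spread out otherwise-synchronized behaviour.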
All of this already produces fairly decent results:
At this point I’m at a “good enough” point for the immediate logic. There are a lot of tweaks I can think of by just looking at the enemies. For example, they tend to bunch up randomly. I can prevent this by disallowing coordinates with too many nearby enemies:
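The crowding rule can be a simple count within a radius; a sketch with placeholder constants:

```python
import math

MAX_NEARBY_ENEMIES = 2  # placeholder tuning value
CROWD_RADIUS = 3.0      # placeholder tuning value

def is_too_crowded(pos, enemy_positions):
    """Reject candidate destinations that already have too many enemies
    nearby, so mobs spread out instead of bunching up."""
    nearby = sum(1 for e in enemy_positions
                 if math.dist(pos, e) <= CROWD_RADIUS)
    return nearby > MAX_NEARBY_ENEMIES
```

This would just become one more check in the destination-validation chain.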
Anyway, now that I have my new AI framework, I can work on expanding it. There is still a lot to cover here, and I will be adding new desires soon and covering them in the next posts.