Learning From a Naive Approach to AI



I've had this idea for a Pokemon-like game, but with more focus on developing a relationship with your monster friend. To implement this, the game would require some pretty complex AI. So I decided to dip my toe in and develop a proof-of-concept, so I could better understand the hard parts and determine whether the idea was achievable.

Requirements

I established a minimal set of requirements - just enough to demonstrate the basic game loop I wanted:

  • A creature (a wombat in this case) who is initially afraid of the player and hides when the player approaches
  • The player earns the trust of the wombat with an offering (food)
  • Once the player has earned the trust of the wombat, the wombat follows the player
  • Now that the wombat trusts the player, the wombat collects items that the player cannot reach and brings them to the player

My Naive Approach

I purposefully refrained from researching AI models that would fit my requirements because:

  1. At the outset, I didn't understand my AI requirements well enough to know how to meet the game requirements.
  2. I believe that naively approaching a problem is a good way to better understand the problem, so you can learn enough to ask good questions & conduct better research.

I had prior knowledge of state machines and knew they're a common framework for implementing game AI, so I started there.

My source code (a Godot project) can be found here. At a high-level, the pseudocode for the wombat AI looks something like this:

var current_state

# Main function, called every frame
func _process(delta):
    var next_state := _determine_next_state()
    _transition_to_state(current_state, next_state)
    current_state = next_state
    _decide_actions_for_state()
    _actuate()

  1. _determine_next_state() does some calculations about the World and uses the calculations in tandem with other mutable & immutable variables on the wombat to determine the state for this frame.
  2. _transition_to_state(current_state, next_state) doesn't have any AI logic - it simply checks the State transition that's occurring and does some extra setup accordingly (e.g. change the layer of the wombat when it burrows).
  3. current_state = next_state finalizes the State transition.
  4. _decide_actions_for_state() sets variables that inform how the wombat should act (e.g. velocity).
  5. _actuate() checks these variables and does the lifting to make the behavior happen in the game (e.g. move the wombat according to its velocity).
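
To make steps 2, 4, and 5 a bit more concrete, here's a rough sketch of what those functions could look like. This is my own simplified illustration rather than the actual project code - it assumes the wombat is a physics body with a collision_layer, and move_speed and the layer value are placeholders. The state names match the pseudocode further down.

# Simplified illustration - not the actual project code
enum State { IDLE, DIGGING_DOWN, SEEKING_FOOD, SEEKING_GOLD, FOLLOWING_PLAYER }

var velocity := Vector2.ZERO
var move_speed := 100.0  # placeholder value

func _transition_to_state(from_state, to_state) -> void:
    # No AI logic here - just one-off setup for specific transitions,
    # e.g. move the wombat to a "burrowed" physics layer when it digs down
    if to_state == State.DIGGING_DOWN:
        collision_layer = 2  # assumed "burrowed" layer

func _decide_actions_for_state() -> void:
    match current_state:
        State.FOLLOWING_PLAYER:
            velocity = (player.global_position - global_position).normalized() * move_speed
        State.IDLE:
            velocity = Vector2.ZERO

func _actuate() -> void:
    # Apply the decided variables to the game world,
    # e.g. move the wombat according to its velocity
    position += velocity * get_process_delta_time()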

The bulk of the AI logic is in the _determine_next_state() function, so let's dive into its pseudocode:

# Immutable - this is innate to the wombat
var fear_distance_multiplier
# Mutable - these change depending on the wombat's relationship with the player
var fear
var love
var player_distance_preference

func _determine_next_state() -> State:
    var distance_from_player := player.global_position.distance_to(self.global_position)
    var closest_food := find_closest_food()
    var closest_gold := find_closest_gold()
    match current_state:
        State.IDLE:
            if fear > 0.0 and distance_from_player < fear * fear_distance_multiplier:
                return State.DIGGING_DOWN
            if closest_food:
                return State.SEEKING_FOOD
            if closest_gold:
                return State.SEEKING_GOLD
            if love > 0.0 and distance_from_player > player_distance_preference:
                return State.FOLLOWING_PLAYER
            return State.IDLE
        # ... similar logic for every state

    return State.IDLE

First, the _determine_next_state() function does some calculations about the "World" so the wombat can decide what the next state should be (e.g. distance_from_player, closest_food, closest_gold). Mostly it's checking how far away the player, food, and buried gold are.
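
As an illustration of those World queries, here's roughly how a helper like find_closest_food() could be written using Godot node groups. The "food" group name and the exact shape of the function are my assumptions, not necessarily what the project does:

# Assumed helper - food items are assumed to be in a "food" node group
func find_closest_food():
    var closest = null
    var closest_distance := INF
    for food in get_tree().get_nodes_in_group("food"):
        var distance: float = global_position.distance_to(food.global_position)
        if distance < closest_distance:
            closest_distance = distance
            closest = food
    return closest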

Then, the wombat AI goes through a boolean decision tree (dependent on the current_state) to decide the next state. The decision tree uses the World calculations (e.g. distance_from_player, closest_food, closest_gold) in combination with some variables that reflect the wombat's level of trust in the player and innate tendencies (e.g. fear, love, player_distance_preference, fear_distance_multiplier).
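
To give a sense of what the elided branches look like, here's a guess at the logic for State.SEEKING_FOOD, pulled out into its own function. This branch is my illustration, not copied from the project:

# Illustrative guess at one of the elided branches, extracted as its own function
func _next_state_when_seeking_food(distance_from_player: float, closest_food) -> State:
    # Fear still wins: if the player gets too close, hide
    if fear > 0.0 and distance_from_player < fear * fear_distance_multiplier:
        return State.DIGGING_DOWN
    # The food is gone (eaten or removed) - nothing left to seek
    if not closest_food:
        return State.IDLE
    return State.SEEKING_FOOD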

Limitations of My Naive Approach

On the surface, the code was able to meet the game's requirements. However, that wasn't without some hardcoding that I don't think would scale well. I've identified two major gaps in this AI model for my game requirements:

  1. Lack of a framework for Learning - You might've noticed that none of the pseudocode above covers how the wombat learns to trust the player. That's because I hacked it into different parts of the code - for instance, the piece of code responsible for making the food disappear when the wombat eats it is also the code that increments the wombat's love variable and decrements its fear variable (a simplified sketch of that hack follows this list). This model doesn't provide a consistent way of learning - the variables used for decision-making are adjusted via hardcoding, which would become difficult to manage as I add more features. I don't see the current approach as a scalable one.
  2. Decisions only consider the current state of the World - The wombat only looks at the current state of the World when making decisions. That limits the decision-making; the AI could be more flexible if it could also consider other factors, including "actions" taken by the player and wombat, as well as the historical state of the World.
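
For context, here's roughly what that hardcoded learning looks like - a simplified sketch, with the increments and clamping as placeholder values of mine:

# Simplified sketch of the hardcoded "learning" - values are placeholders
func _on_food_eaten(food: Node) -> void:
    food.queue_free()            # make the food disappear...
    love = min(love + 0.1, 1.0)  # ...and adjust the relationship variables right here
    fear = max(fear - 0.1, 0.0)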

Conclusion

To solve these limitations, I'm thinking about logging and tracking a stream of Actions and Decisions, then basing Decisions on that stream in tandem with the World state. The AI can also use that stream for Learning: when there's an Outcome (a "positive" or "negative" event such as eating food), the AI uses the stream of Actions & Decisions to Learn and adjust the variables in the Decision algorithm. However, first I'll do my homework and research what the experts have to say about approaches to "learning" in game AI.
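
To make that direction concrete, here's a very rough sketch of what an Action/Decision stream and Outcome-driven adjustment could look like. This is just my first guess at the shape of it, written before doing that research - every name and number in it is a placeholder:

# Very rough sketch of the planned Action/Decision stream - all placeholders
var event_log: Array = []

func log_event(kind: String, data: Dictionary) -> void:
    event_log.append({ "kind": kind, "data": data })

func on_outcome(positive: bool) -> void:
    # Walk the logged Decisions and nudge the variables that produced them
    # (a placeholder credit-assignment rule, not a researched one)
    for event in event_log:
        if event["kind"] == "decision":
            if positive:
                love = min(love + 0.05, 1.0)
                fear = max(fear - 0.05, 0.0)
            else:
                fear = min(fear + 0.05, 1.0)
    event_log.clear()

The Decision step would call log_event("decision", ...) each time it picks a state, and an Outcome such as eating food would call on_outcome(true).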

If you're interested in trying out the POC, check it out here.
