
   JLJac on July 24, 2014, 01:43:17 AM:

Yup, thanks! I hope we get some more material to flesh out the page with later~

Do you feel like sharing any of the ideas you had buzzing around your head?

uh guys, i think he meant "ideas about how to implement abstracted creature behaviour" hahah
http://gamasutra.com/blogs/ChrisSimpson/20140717/221339/Behavior_trees_for_AI_How_they_work.php

edit:
Rewinding the thread, I'm not so sure it's the answer to that question Who, Me? However, still worth looking at Who, Me?
Not the current problem, but a super interesting read. As always, thank you so much for your input! I'll let this one brew for a bit and see how I can utilize it, I'm sure it will be very usable.

Update 276
These last days I've basically spent in angst about how to implement the creature abstraction. The problem is that this is such a fundamental part of the infrastructure - I can't really throw something together and then get back to it.

Yesterday I had a talk with James, and I think I've been able to take a step back from the problem. This morning I had an idea. In the end, it seems unnecessary to do a full-scale AI for creatures that are offloaded. It kind of defeats the purpose of offloading them, right? With that in mind, what is the only thing an offloaded creature can actually do that will affect you in any way? There is only one action a creature in another room can perform that's going to influence you: exiting the room - an action that has the potential to take it to your room.

So the idea would be this: Separate all the actions that include room traversal into their own category. Only these will be able to occur in abstract mode.

At this point, the list is pretty short:
  * Random scavenging/idle milling about
  * Returning to den as rain is approaching

To this come a few events that could possibly be added in the future.
  * Returning to den because of injury
  * Returning to den to store food
  * Going into adjacent room to investigate input from inter-room senses, such as inter-room hearing (to what extent this will be implemented is a game balance question)

So, the solution would be to strip off-screen creatures of any behaviors except these few.

But wait, doesn't this mean that the whole "living world" "terrarium" "creature autonomy" thing is thrown out the window? Doesn't this make the entire world become an illusion that's created in a radius around the player?

Not exactly. The idea is that in the same way as position etc. is abstracted out there in the big world you're not looking at, AI is too. Creature interactions can still happen, but they (including the AI part) are abstracted to simple statistics. So say that a pup and a lizard are occupying the same node in abstracted space. Instead of a huge process of interaction between two AIs with path finders, tiles, physics, etc., the whole thing is reduced to "4% risk of eating per tick while they occupy the same node". By simulating stuff with statistics like this, things will still be able to happen off-screen, just not simulated at the same granularity.
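The "4% risk per tick" idea boils down to a single dice roll per update instead of a full physics/AI simulation. A minimal sketch (function and parameter names are mine, not from the actual Rain World code):

```python
import random

EAT_CHANCE_PER_TICK = 0.04  # the "4% risk of eating" from the post

def tick_abstract_node(predator_alive, prey_alive, rng=random.random):
    """One abstract-space update for a node holding a predator and a prey.

    Instead of running pathfinding and physics, the whole encounter is
    reduced to a single roll per tick. The rng parameter is injectable
    so the outcome can be tested deterministically.
    """
    if predator_alive and prey_alive and rng() < EAT_CHANCE_PER_TICK:
        prey_alive = False  # the lizard caught the pup off-screen
    return predator_alive, prey_alive
```

Run once per tick for every node that holds both a predator and a prey; nothing else about the two creatures needs to exist while they're abstracted.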

Going to try an implementation this afternoon, and hopefully things will get moving again!





   JLJac on July 24, 2014, 07:36:12 AM:

Similarly to how you wouldn't have a baseball cap in a fantasy game because it doesn't fit the setting, neither caps nor viking helmets are really true to the RW setting. That said, we do plan on having a variety of playable characters available, but we don't yet know if that's going to be a question of custom combinations of features or just a selection of characters to choose from. It's all much later stuff though; for now the general framework needs to get done!





   jamesprimate on July 24, 2014, 09:12:46 AM (Last Edit: July 24, 2014, 09:22:38 AM):

well also this is sort of long-term thinking, since we should be able to have plenty of non-abstracted creatures hanging around off-screen in the general area too (possibly even for entire regions.) it's more about finding some scalable solution for when the world starts to get big and complicated.

figuring out at what point/distance/etc they become abstracted will depend on a bunch of factors down the line, but for now we need a structure that will work either way.





   JLJac on July 24, 2014, 11:31:25 AM:

@whiteclaws, that first post is a pretty flawless description of what's going on, actually Smiley

@JIMBERT, yeah, there's a bit of that too. For example, I intend to make it so that abstracted rooms can receive update calls at uneven intervals (perhaps depending on how far away they are) - in either case they might very well not update every frame. The longer ago a creature was updated, the more "quantum" it gets, in a way - when it receives its update it will have correspondingly higher chances of doing things, and its "time since my last update" counter will be reset, pinning it down to a "less quantum" state again.
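One way to make a single late update stand in for all the ticks it missed is to compound the per-tick chance: the probability that something happened at least once in N ticks is 1 - (1 - p)^N. A sketch of that idea (my own formulation, not necessarily how it ends up implemented):

```python
def act_probability(per_tick_chance, ticks_since_update):
    """Chance that an event has happened at least once since the last update.

    A creature that has waited N ticks for its update rolls as if it had
    rolled every tick: 1 - (1 - p)^N. The longer the wait, the more
    "quantum" the creature - a single roll covers the whole elapsed span.
    """
    return 1.0 - (1.0 - per_tick_chance) ** ticks_since_update
```

After the roll, the "time since my last update" counter resets to zero, so the next roll covers only the new interval.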

The pretty good news is that I seem to have escaped the stuck place I've been hanging out all week. I threw together a little visualizer for the abstracted world so I could actually see the creatures moving about, and it naturally helped a lot. Now I think I've got an angle of approach, and this ride is moving again!  Hand Shake Left  Grin Hand Shake Right





   jamesprimate on July 24, 2014, 11:53:10 AM:

YAAAAAAAAAAY  Hand Clap





   JLJac on July 25, 2014, 07:47:18 AM:

Update 277
Moving forward at a steady pace again! Today I managed to implement quite a few things, albeit small. Rooms can for the first time be unloaded to abstract space again. I have managed to get the object permanence of creatures pretty much down - I'm unsure what happens when a room is offloaded while the creature is currently in a shortcut, but I have the following scenarios covered:

* The creature comes from abstracted space, entering realized space - and becomes realized
* The creature exists in abstracted space, which is then realized - and so is the creature
* The creature leaves realized space, entering abstracted space - and becomes abstracted
* The creature is in realized space and is abstracted along with it

This is pretty cool. I can have a lizard move about at random (random movement in abstracted space is halfway done, not very fine-tuned yet) and at times it will enter a room I'm in, at other times I might enter a room where it is. Cool stuff!

Another cool thing I've managed is to summon a lizard that's out in abstract space to the current room - it actually does show up in the doorway!

A loooot of things still need polishing. The abstract path finding stuff isn't at all hooked up with the in-level pathing yet, so as soon as a creature switches between the two modes it forgets where it was going. Sometimes there are errors and weird glitching around. Creatures entering previously abstract space are not spawned in the middle of rooms as if they were actually doing something, but rather at exits with a confused look on their faces. The list goes on. But so do I! It feels wonderful to have solid progress again.

Happy weekend everyone!





   JLJac on July 28, 2014, 05:57:58 AM:

Update 278
My little test creature has picked up the good habit of continuous existence throughout rooms and abstracted node space. It is getting closer and closer to achieving an AI capable of handling these strange situations as well.

When realized, the creature has this big cool path finder that can do all sorts of stuff. When abstracted, the creature just has a path - nothing more than an array of coordinates in node space - which it will mindlessly follow if it can. Today I made it so that on abstraction, the real cool path finder will try to create a super simple path and feed that to the super simple abstract AI. If it's unable to, the abstract AI can fall back on the abstract space path finder, but that's not preferable, as I don't see why a path should be calculated twice.

This seems to work just fine. I've had the realized creature find a path in realized space where it needs to pass through abstracted space in order to get there. Upon abstraction, the path is passed down just as intended. This means that now creatures can become abstracted without losing their focus; if a lizard sees you, it will be able to calculate a huge detour through abstract space in order to get to you, and follow it all the way. On arrival, it will even be able to remember what it was doing, as a creature's proper AI is saved in a hibernation mode if the creature has a destination within realized space.
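The abstract-mode AI described above really is just "an array of coordinates in node space, mindlessly followed". A minimal sketch of that handoff target (class and method names are illustrative):

```python
class AbstractAI:
    """Minimal abstract-mode AI: follow a precomputed list of node coordinates.

    The realized path finder hands down a simplified path on abstraction;
    this class just walks it, one node per (possibly uneven) update.
    """
    def __init__(self, path):
        self.path = list(path)  # e.g. [(room_index, node_index), ...]

    def step(self):
        if self.path:
            return self.path.pop(0)  # advance to the next node
        return None  # arrived, or nothing to do
```

On realization, whatever remains of `self.path` can be handed back to the full path finder so the creature keeps its focus.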

Next up - if an abstract room is realized as a creature is halfway between two cells, the path should be calculated, and the creature should appear in the middle of it.





   JLJac on July 29, 2014, 08:52:39 AM:

Abstract interactions would definitely be way easier to implement than actual realized interactions, so I'm probably going to implement quite a lot of them just for the hell of it. I'd post pictures, but you change the abstraction by measuring it. Actually you don't, here's my visualization of it:



That's the map of the world, each column is a room. Each square is a node in that room. The rooms that have a little pink minus above them are realized. The one with the grey rectangle behind it is the one currently viewed. You see how the nodes have different colors? That's how I've visualized the connections between rooms - the nodes in different rooms with the same colors are connected. Little orange dots are creatures, the slugcat and a lizard prototype hanging out in the other room.

Slugcat is in node 2, because that's the door it came in through when entering the room. Checking which door a creature is actually closest to in a realized room is a pretty expensive connectivity check, so I'd rather not do that every frame, and for now I haven't really come up with any scenario where assuming a creature is still hanging out in its entrance node doesn't work. If such a scenario did arise, it would be easy to just have a method call that tells every creature in a room to check against the exits and update the node map accordingly.

As you can see there would have to be hundreds of rooms before the node graph became bigger than the grid of one single room, so I feel pretty confident that the abstraction is going to be a cool feature. It will allow a vast world to exist, and to gain granularity as you get closer, making it feel like the entire thing is alive simultaneously.





   jamesprimate on July 29, 2014, 03:02:04 PM:

this looks like a really elegant solution! easy to visualize and seems like it has deep application.





   JLJac on July 30, 2014, 11:43:52 AM:

Update 279
Big progress today! Finally a lot of loose ends came together. Now we have abstracted creatures working like I intended them to. The best way to describe this is probably a list.

* When an abstract room is realized and there are creatures in it, the creatures will appear in the room in positions suitable to where they are headed. So say that a creature is moving from one exit to another in the room, and has been doing so for 40 ticks when you get in there and see it. A path will be calculated between the exits, and the creature placed at a corresponding distance along the path. This path will also be fed to the path finder, selecting all the cells so that the creature will follow the path immediately from frame 1 as the room is realized.

* If a creature is abstracted in a room, and doesn't leave it, its actual tile position will be saved, and used as a starting point for a path as described above. This means that you can abstractize a creature half-way in following a path, and then realize it a little while later, to see it having traversed some of that distance.

* If a creature is not following a path when it is realized in a room, it will have been walked randomly between tiles accessible to that creature for a corresponding number of repeats, using the creature's last known position as a starting point. This is meant to place the creature somewhere in an area connected to where it was last seen, further away the longer ago you saw it.

* If a creature is abstracted while outside of walkable terrain, it will be marked as stuck and unable to move in abstracted mode.

* On abstraction, creatures save a simple Manhattan distance to the closest exit, the one which will be its node in node space. The creature won't be able to move away from that node until it has been in it for a time corresponding to that distance. This is so that a creature that was half a screen away from an exit won't be able to pop through it the very next frame after abstraction just because the engine thought it was "in that node".

* Creatures can be assigned destinations in other rooms, and will move towards them. Every combination of destination in abstract/realized space and creature in abstract/realized space will compute, and the creature is able to pursue a path that goes through both realized and abstracted space. Theoretically it should even be able to traverse a room that's flickering between realized and abstract, though that's an extreme case that will most likely never occur in the game.
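The first bullet above - placing a creature a distance along a fresh path that matches how long it has been moving - can be sketched like this (a hypothetical helper; the real code works on pathfinder cells, not a plain list):

```python
def place_on_path(path, ticks_elapsed, total_ticks):
    """Pick the tile along `path` matching how long the creature has moved.

    When a room is realized mid-traversal, the creature should appear a
    corresponding distance along a freshly computed path, e.g. 40 ticks
    into an 80-tick crossing puts it halfway along.
    """
    if total_ticks <= 0:
        return path[0]
    frac = min(ticks_elapsed / total_ticks, 1.0)  # clamp overshoot
    index = round(frac * (len(path) - 1))
    return path[index]
```

The remainder of the path is then fed to the path finder so the creature keeps walking from frame 1.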

I have also been able to do just a tiny bit of performance testing, and it looks very promising. This is, for example, ~20 creatures passing through the room, and the framerate seems to handle it well.


(The gif though is captured in 10fps, so you'll have to take my word for it  Tongue )

The whole solution is still throwing a lot of nullreference exceptions and the like, but the general structure seems to be working and I'm excited to squash the last bugs and pack up and move on. Maybe soon I can finally get to actual AI!





   JLJac on July 31, 2014, 01:29:23 AM:

Update 280
Hehe wow - when you've paved the road with robust solutions it's amazing how quickly everything comes together in the end. Now I have the path finding and creature abstraction done, it seems! It isn't throwing errors at me any more! And way too soon, because I haven't had any time to think about which item to move on to  Shocked

What I should do is to adapt this whole path finding/abstract space system to accommodate for creature dens as well, so that creatures will be able to go hibernate and the like. But frankly I'm a bit bored by this stuff now, and the dens can wait a little while.

The next thing that seems reasonable to get to is some kind of generic AI system. That would be the very basics of an AI, the core that's shared between all creatures. One thing I know this system should incorporate is the "brain ghost" system, where creatures upon seeing other creatures will create a symbol for that creature that can move around with slight autonomy even after the creature is no longer seen.

This is the core mechanic that makes it possible to trick the enemies in Rain World. If a lizard sees you, it will create a ghost of you, and as long as you're in its field of vision that ghost will always be fixed at your position. But the moment you get behind a corner, the ghost will start to move based on the lizard's assumptions of how you'd move (based on things such as movement direction on last visual), and this is your chance. The lizard AI is only ever able to ask for the ghost's position, never the actual player's. This means that if you're diverging from the path the lizard assumes you'd take, you have a chance to trick it.

This system is going to be common for all creatures, so that's something I could get at right away. The problem is that I have a slightly more complex environment this time compared to the lingo build - for example it's not possible to know how many creatures are going to be in the room and needing ghosts this time around, as creatures are allowed to move freely. Nothing that can't be helped by an hour or two in the thinking chair.



All of this is just "data collection" though, the system that will provide the AI with the information on which it'll base its decisions. The decision making process itself is an entirely different matter.

A Tree solution was posted a page or two back, and I really liked it. But I've also been thinking about a Pushdown Automata Finite State Machine kind of solution, where different modes are defined as behavior modules that can be stacked on top of each other. This would basically be a list of tasks, where the topmost item would be handled until it reached a success or failure, in which case the next one would take over. The difference from an ordinary to-do list is that the modules would be self-sufficient classes actually executing the behavior themselves, and also saving their state in case they got postponed, allowing them to pick up where they left off.

In a way, you could say that the Pushdown Automaton is a subsystem of the Tree, as it could be created within the tree if you used a Sequence node and allowed it to dynamically change its "playlist" during play...

While I'm rambling, one thing that I don't quite like with the Tree is that it has the AI character actively try every solution before moving on to the next one. If there's a lot of walking involved, the character could spend several minutes executing some chain of actions where it's very obvious that the last one won't carry through.


(Images nicked from here)

Say that the "open door" node returns a fail. The NPC will still have walked to the door, which might not be a big problem in a small world with fast-paced gameplay. But in a large world Frodo and Sam might set out on their epic pilgrimage for three movies just to arrive at Mt. Doom and face the fact that they left their mountain climbing shoes back home, and that's pretty stupid. Potentially you could circumvent this problem by first running through the tree with an "assumption" cursor, which makes an educated guess on whether or not each action in the sequence will succeed, and only doing the actual run-through if that's the case. This sort of defeats the purpose of the architecture though...

Another thing to take into consideration is that the tasks the RW AI will handle look very little like this:



Rain World contains extremely few puzzle-like situations where items are needed to traverse obstacles and the like. This layout is perfect for complex puzzles where several interactions are each unlocking the other until a final goal is reached.

The problems in Rain World are much more... fluid than this. Each creature is pretty much always free to move wherever it wants without having to collect any keys or hit any switches. The only things that can restrain movement are being speared to a wall or held by another creature, and there isn't really anything to do about either except wiggle and squirm.

The Tree solution seems ideal for overcoming geographical obstacles in order to obtain items that can help overcome further geographical obstacles. RW has very little of both these elements, and for NPCs, almost none.

So what does a Rain World creature need to think about? It needs to weigh many options, none of which are simply "possible" or "impossible". It needs to do this with limited information as well, being able to account for uncertainty. I imagine a typical Rain World lizard problem something like this:

"I currently see no slugcats. I have seen two on this level though (Ghosts are still in memory). One I saw 240 ticks ago in a position 30 tiles removed from me. The other one I saw 20 ticks ago just 10 tiles away, but it was holding a spear. Which one do I go after?"

Another might be:

"Room A has three edible creatures in it, but also a creature that considers me edible. Room B has just one edible creature in it, but my own skin would be safe. Which one do I go into?"

My immediate idea is to somehow construct "plans" for what to do, and for each plan calculate a "good idea" value based on the known information.

Quote
Plan: Follow slugcat A! [Slugcat deliciousness: +40pts] [Distance to target: 20 tiles -> -20pts] [Time since I saw this slugcat: Currently looking at it! -> +30pts] [Time till rain: Starting to feel a little uneasy about the rain -> -10pts] Total points: 40pts

Plan: Follow slugcat B! [Slugcat deliciousness: +40pts] [Distance to target: 10 tiles -> -10pts] [Time since I saw this slugcat: 100 ticks -> -10pts] [Time till rain: Starting to feel a little uneasy about the rain -> -10pts]  Total points: 10pts

Plan: Go home to den! [Time till rain: Starting to feel a little uneasy about the rain -> +10pts]  Total points: 10pts

Decision: Follow slugcat A!

As the rain gets closer, the "go home" option would appear more and more attractive until it became the winner of the evaluation. The main problem I can see with a system like this is that it could lead to flickering back and forth between behaviors without actually carrying any of them through. This could perhaps be circumvented by delaying re-evaluation for a little while after a decision has been made, but that in turn would make the creature look slow to react in some situations. Maybe the evaluation could be tied to some event...
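The quoted evaluation is essentially utility scoring: each plan is a bag of labelled point contributions, and the highest total wins. A sketch using the exact numbers from the quote (structure and names are mine):

```python
def score_plan(factors):
    """Sum a plan's 'good idea' points. `factors` maps a label to points."""
    return sum(factors.values())

def choose_plan(plans):
    """Pick the plan with the highest total score."""
    return max(plans, key=lambda name: score_plan(plans[name]))

plans = {
    "Follow slugcat A": {"deliciousness": 40, "distance": -20,
                         "time since seen": 30, "rain": -10},  # 40 pts
    "Follow slugcat B": {"deliciousness": 40, "distance": -10,
                         "time since seen": -10, "rain": -10},  # 10 pts
    "Go home to den":   {"rain": 10},                           # 10 pts
}
```

The flickering problem mentioned above is usually handled by giving the currently active plan a small bonus, so a challenger has to beat it by a margin before the creature switches.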

What do you think? If you guys want to give me some reading on AI that would be much appreciated! (Looking at you, Gimym JIMBERT)





   jamesprimate on July 31, 2014, 09:15:09 AM:

Quote
Plan: Follow slugcat A! [Slugcat deliciousness: +40pts] [Distance to target: 20 tiles -> -20pts] [Time since I saw this slugcat: Currently looking at it! -> +30pts] [Time till rain: Starting to feel a little uneasy about the rain -> -10pts] Total points: 40pts

Plan: Follow slugcat B! [Slugcat deliciousness: +40pts] [Distance to target: 10 tiles -> -10pts] [Time since I saw this slugcat: 100 ticks -> -10pts] [Time till rain: Starting to feel a little uneasy about the rain -> -10pts]  Total points: 10pts

Plan: Go home to den! [Time till rain: Starting to feel a little uneasy about the rain -> +10pts]  Total points: 10pts

Decision: Follow slugcat A!

^^^^ I love this so much. THAT is how AI should work, imho.

From an outsider's perspective, it's hard not to think of bumbling FPS enemies when talking about Behavior Tree stuff, where you can almost hear the CLUNK as it changes gears, mechanically going through its list of activities. Not terribly life-like, and super easy to predict. But perhaps, as dancing_dead says, that could be the fault of the designer, not the framework.





   JLJac on August 01, 2014, 02:52:21 AM:

Thanks guys, awesome input! All of your takes on the issue are really valuable to me, and it's good to see that you have some different angles on the problem. I'm taking it all in.

Update 281
I took a little step back and looked at the big picture. After spending some time with my notebook and a pen rather than staring at these pixels I managed to pin a few key points down.

First of all, the core question and its (as of now) three answers:

What is the purpose of the AI in Rain World?
 * Challenge
 * Flavor
 * Trickability


Challenge - The AI should add to the challenge to keep you at the edge of your seat while playing. However, I'm going to dismiss this point entirely, because I have a million other parameters that affect challenge and are much easier to tune. The movement speed of the enemy, the range of its senses, and the number of enemies are all such parameters.

Flavor - The whole purpose of the project is to simulate a world that feels exotic and alive. Creatures' behaviors are a huge part of that. I'm not too concerned about this one either though - I trust my gut feeling when it comes to this aspect, and I think most of it is in the details (animation etc) rather than the overarching architecture. In short, I'm not worried about this aspect, it's going to be cool no matter what system I use.

Trickability - This is the thing - the problem that needs to be solved. The idea is that you want the AI to be smart enough so that the player can trick it and get satisfaction out of having outsmarted it. When it comes to Rain World AI, this is the holy grail I'm pursuing. Every amount of complexity on the AI's part should generally fall back on this; this is why the AI is complex. An NPC that just moves towards a target on visual contact isn't smart enough to be tricked. RW AI needs to be smart enough to come up with a simple plan and carry it through, so that you can have anticipated that simple plan and act accordingly.



I've identified a few ways to achieve trickability:

Communication/Clarity - It needs to be clear what the intent and purpose of the AI character is. You need to know what it's currently up to, and if it changes its behavior, that needs to be clear too. Some of this can be communicated by animation and sound, and is thus not really an AI concern. The simplest, most solid solution for this problem would be to visualize the state of the AI with HUD somehow. For example, I've seen some Splinter Cell game where a ghost of the player lingers where the enemy believes you to be. You could also have eye beams representing visual contact, etc. I don't like these solutions, as I try to minimize HUD in general, and because it feels like cheating to see what's going on inside the enemy's head. What you look at on the screen should just be the physical reality of the game world - you are in fact already cheating by being able to see through walls, which no other creature can. So this one is going to be an issue. One thing I've been thinking about is that flickering between behaviors should really be avoided in order to achieve this. The creature needs to commit to an action in order for the player to be able to see what it's up to.

Predictability - It's crucial that the behavior is predictable in order for your cunning plans to follow through. You should be able to know what each enemy is going to do in each situation once you've gotten to know them. If they are too predictable they might appear as soulless machines, but I think I know of a way to fix that. If their idle behaviors, when they are not pursuing or being pursued, are more random, that will bring a little randomization into the mix without hurting the predictability of the actual gameplay-relevant behaviors. In their idle states they will also move slower, with less urgency, giving the player some time to react to an unpredictable move.

Sufficient complexity - As stated, a creature that is too dumb can't be tricked. A box that just moves towards you can be made to fall down a pit, but you don't really feel like you've outsmarted anything. If the box tries to move around the pit, but you're one step ahead and have already prepared an ambush or something along that route, you suddenly have a war of minds (albeit a very uneven one) and that's much more fun. However, if the AI is too smart, you will just spot it as it's traversing your room on its two hour quest to collect parts for building a rocket or something, and since you don't have the ability to ask it what its ultimate goal is, it's just going to appear as if it's moving about at random. So, more complex isn't by definition better - especially if the complexity gets way ahead of item #1, the communication aspect.

Interactions - There need to be tools at hand for tricking the enemy. In the old build, a creature could pretty much only move about and eat or get eaten. There need to be a few more interactions to achieve trickability. Something to lure creatures to a specific location. Something to stun them. Some ability, such as armor or the like, that they can be robbed of through a cunning plan. These are not strictly AI issues, as they encompass larger game design choices, but they will have corresponding AI behaviors and might deserve to be mentioned here.



So that's how far I've gotten on my thoughts of why Trickability is my main goal, and how to achieve it. The next page in my notebook has a simple little observation about the kind of environment this AI will live in.

Contrary to many games, Rain World creatures will encounter extremely few of what I call "Key/Door Puzzles". That is, actions that are locked behind other actions, such as "You can't get behind the door until you've unlocked the door. You can't unlock the door until you have the key."

The Rain World NPCs are generally animal-type creatures that don't use many tools or items. The Rain World maps are generally open - they don't have locked doors that can be opened with items or switches. This means that there are extremely few situations where something is possible to do only if something else is done first. Categorically, every action is either possible or not possible, always. Either I can get up on that ledge, or I can't. Either I can hunt that prey, or I'm speared to a wall. Among the possible actions, however, there is a wild variation in desirability.

This means that Rain World AI is about

Decision Making, not Problem Solving

This notion has me currently leaning in this direction. But the behavior trees seem pretty awesome too haha! I seriously don't know if anyone made it all the way through this monster of a post, but either way it was good to get my mind sorted out. As always, all your input is very welcome!  Smiley





   jamesprimate on August 05, 2014, 04:40:11 AM:

the "Joar is too busy hanging out on an island to bother with an update" mini-update 281.5

So island or no, we took some time and talked about AI archetypes and what to do. The next step of that is developing the AI's "tracker", which plots the expected position of a creature or object for the AI to follow when that creature or object is out of view. Basically, it creates a mental "ghost image" estimation of where it thinks that target will be via some criteria. For example: a lizard chases you around a corner and can no longer see you. Which direction does it take to continue to follow you? THAT'S the tracker. As you can imagine, this has big significance for hiding, sneaking and plotting, AKA a bunch of Rain World.

Joar had 2 options for this: a tile-based method and a physics-based method, each with certain advantages:

* Tile-based seems more robust in general but suffers from a lack of granular resolution (1/20th as accurate as the physics method) and difficulty predicting more "physics-y" shapes, like parabolas (such as the arc of a jumping slugcat!), that might occur.

* The physics-based method models parabolas and gravity well, and also has the advantage of being able to easily and more accurately model the velocity / vector of an object to "ghost image", but could potentially suffer from "in-between state" jitter and other weirdness.

So the decision was of course to do BOTH: Tile-based tracking for objects on the ground and then physics-based tracking kicks in for jumps (and perhaps flying behaviors.) And that's what he's up to right this moment!   Hand Metal Right

I'm getting assorted things prepared for PAX Prime, where we'll be showing the Rain World alpha again at the Adult Swim booth. Also, I don't talk about this too much in here, but the Rain World soundtrack is getting deeeeep: Well over 60 tracks written so far. Not all of those specific ones will be used of course, but the way things are mapped out now, 70-100 tracks might be where it ends up. Kickstarter backers could wind up with a nice 3-4 disc set. Quite a bargain! XD





   JLJac on August 05, 2014, 08:27:53 AM:

And now, straight from the island in the Swedish countryside!

Update 282
After talking a little with James, we have started to pin down the AI stuff some. Still I feel it might need some digestion, and have instead started on the next (and last) module that will be shared by almost all AI creatures - the Tracker.

The Tracker module is basically a piece of code that makes creatures be not omniscient. It creates ghosts for each other creature that the creature knows of, and acts as a combination of visual processing unit, memory and anticipation/prognosis intelligence.

As James described, the Tracker runs a ghost for each other creature. When the other creature is clearly visible, the ghost will be updated to its position each frame. When obscured, the ghost will fulfill two functions - in the short term anticipating where the creature might be (it went in through that passage, it ought to pop out on the other side) and in the long term remembering where the creature was last seen.

This is the creature's "internal reflection of the world", it can never ask the game engine directly for info on where creatures are, but always have to go through this sometimes incorrect filter.
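The "sometimes incorrect filter" idea could look something like this minimal sketch - all class and method names here are mine, not from the actual codebase:

```python
# Illustrative Tracker: the AI reads positions only through ghosts, never
# directly from the engine. When a creature is visible its ghost snaps to
# the true position; when obscured, the stale estimate is all the AI gets.
class Ghost:
    def __init__(self, pos):
        self.pos = pos        # current best-guess coordinate
        self.visible = True   # was the creature seen this frame?

class Tracker:
    def __init__(self):
        self.ghosts = {}      # creature id -> Ghost

    def update(self, creature_id, true_pos, can_see):
        g = self.ghosts.setdefault(creature_id, Ghost(true_pos))
        if can_see:
            g.pos = true_pos  # visible: ghost tracks the real position
            g.visible = True
        else:
            g.visible = False # obscured: ghost keeps the old estimate
        return g.pos          # the AI only ever reads this filtered value
```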

This is the Tracker as it stands after one day of work. The debug visualization is a little wonky because the laser is only displayed when the lizard sees the slugcat's upper body, but the ghost is set to visible mode on seeing the lower body as well.



As you can see the "anticipation of movement" stuff is not really done yet. In simple cases like this, when there is pretty much only one way to go, the ghost should be able to follow a corridor through many twists and turns. This is where I'm planning on having the tile-based movement kick in - it'll move smoothly until it inevitably gets stuck in a corner, then it will start to evaluate its further movement options on a tile-by-tile basis.

Also the ghost should respect what tiles are considered accessible and inaccessible to the creature being emulated.

The basic framework for creating, maintaining and deleting ghosts seems to be working fine though. This system also potentially allows for multiple ghosts tied to the same creature, which could create interesting options where an AI creature could, for example, evaluate both options at a road fork or the like.





   jamesprimate on August 05, 2014, 12:11:09 PM:


the gif isn't working for me for some reason. anyone else having this prob? i'd LOOOOOOVE to see it





   jamesprimate on August 05, 2014, 04:05:23 PM (Last Edit: August 05, 2014, 07:14:27 PM):

The image is broken because the link points to the hosting site's page rather than to the image file itself.

Here is the gif


yooo! thanks this is really cool

people are going to see this gif out of context and be like "COOOOL RAIN WORLD HAS LAZZZERS NOW!!!!"

they'll be so disappointed





   JLJac on August 06, 2014, 07:18:20 AM:

My usual upload service wouldn't work for some reason, so I had to use another. Here it is again, anyhow:




Update 283
Inspired by the occupancy grid article, I've started developing a solution with another level of hierarchy. Instead of just the tracker owning a bunch of ghosts, each of which knows which creature it's supposed to be emulating, the tracker now has a list of "creatureRepresentatives", and each of those owns a bunch of ghosts. When the creature is actually seen, the list is trimmed away from the second position onward, and the one remaining ghost is put at the creature's position. When it's not seen, the ghosts are allowed to mill about.

I now have some basic ghost movement as well. It takes the allowed movements of the emulated creature into consideration, and moves to nearby tiles. It prefers to go in the same general direction as the creature was last seen moving in. When reaching a road fork it will sometimes split into two.
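A single step of that ghost movement might be sketched like this. The neighbor order, the dot-product preference, and the ghost cap are illustrative stand-ins for whatever the real code does:

```python
# Illustrative ghost step: each ghost moves to an accessible neighboring
# tile, preferring the direction the creature was last seen moving in,
# and clones itself down the second-best path at a fork.
def step_ghosts(ghosts, accessible, last_dir, max_ghosts):
    """ghosts: list of (x, y) tiles; accessible: set of walkable tiles."""
    out = []
    for (x, y) in ghosts:
        options = [(x + dx, y + dy)
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if (x + dx, y + dy) in accessible]
        if not options:
            out.append((x, y))   # stuck in a corner: stay put
            continue
        # prefer tiles continuing in the last seen direction
        options.sort(key=lambda t: -((t[0] - x) * last_dir[0] +
                                     (t[1] - y) * last_dir[1]))
        out.append(options[0])
        if len(options) > 1 and len(out) < max_ghosts:
            out.append(options[1])   # road fork: split into two ghosts
    return out
```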

I'd post a gif of a million ghosts splitting over and over until it looked like mayhem, but I'm in the middle of transitioning to the representative system, so the game won't run ATM. Gifs tomorrow!





   JLJac on August 07, 2014, 06:32:22 AM:

Update 284
When reaching a road fork, where one option is considered ideal and the other slightly less so, a ghost is now able to clone itself. The clone gets a slightly lower score, which is accounted for in the eventual ghost ranking process.

Ghosts travel slowly, as we want creatures to assume that their prey is at least somewhere close to where they lost it. Over a large amount of time, however, the ghosts will travel to pretty much all corners of the world, which means that a creature that has been long lost will be looked for in much more diverse locations than one that disappeared around a corner just two seconds ago. Each time a ghost travels, it will check if it is in the line of sight of the creature in whose head it's living. If so, it'll freeze in place - it wouldn't make sense for the creature to assume that the prey could have traversed any distance that is actually visible to it.
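The clone penalty and the freeze-in-line-of-sight rule could be sketched like so - the 0.9 penalty factor and the tile-set line-of-sight test are illustrative assumptions, not the actual values:

```python
# Illustrative helpers for the two rules above.

def clone_score(parent_score, penalty=0.9):
    """A fork clone inherits a slightly lower score than its parent."""
    return parent_score * penalty

def advance_ghost(ghost_tile, next_tile, visible_tiles):
    """Move a ghost one step, unless the hunter can currently see its tile."""
    if ghost_tile in visible_tiles:
        return ghost_tile   # within line of sight: freeze in place
    return next_tile        # hidden: keep spreading through the level
```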

In this gif (trying yet another hosting service, hope this one will work for everyone!) you can see a (stuck) lizard lose track of a slugcat. Notice how after time has been sped up, the possible locations are scattered all over the lower left area, but also contained within the area which is not visible to the lizard. It knows the slugcat has to be in there somewhere, but becomes less and less sure of exactly where as time goes by. Here the ghost count is capped at 15, I think the actual game would generally use fewer.



When the slugcat is actually seen, all ghosts except one are removed. The ghost that's slightly larger and pulsating is the one that's considered the best guess for where the slugcat might be. This is where the ranking comes into play. The ghost score has several things feeding into it. Traveling in the same general direction as the tracked creature was last seen moving in is one, proximity to the last seen coordinate and the tracking creature are others, intended to make search behaviors look more reasonable. The search behavior will probably be mainly about going after this "best bet" ghost until all of them are gone.
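Picking the "best bet" ghost from those ingredients might look like this. The weights and the exact score terms are my assumptions - the post only names the inputs: agreement with the last seen heading, and proximity to the last seen coordinate and to the tracking creature:

```python
import math

# Illustrative ghost ranking: higher score = better guess. The 0.5 and
# 0.25 weights are made up for the example.
def rank_ghosts(ghosts, last_seen, last_dir, hunter):
    def score(g):
        dx, dy = g[0] - last_seen[0], g[1] - last_seen[1]
        heading = dx * last_dir[0] + dy * last_dir[1]        # moved "forward"?
        near_seen = -math.hypot(dx, dy)                      # near last seen spot
        near_hunter = -math.hypot(g[0] - hunter[0], g[1] - hunter[1])
        return heading + 0.5 * near_seen + 0.25 * near_hunter
    return max(ghosts, key=score)                            # the "best bet" ghost
```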

When the lizard is moved close to the ghosts, they disappear. Note that they don't disappear as soon as the lizard is able to see them, but the lizard has to be somewhat close as well. This is for a few different reasons. One is that when the ghost cap is reached, and a ghost reaches a road fork, it won't pick one direction, but instead just freeze in place (seemed more reasonable to me to stop at the crossroads rather than picking an option at random that would take you even further away from where the creature was last spotted). Another is that the ghosts are not in every tile in the possible area, like an actual Occupancy Probability Grid. Both of these are generally the same concept; that these ghost coordinates are not strictly "places where I believe the creature to be", but rather "places that would be a good idea to go when searching for this creature" if you get the distinction. The latter can't just be dismissed at a glance, but takes some examination before its hypothesis can be falsified.

Many of the parameters for this whole thing can be customized, such as the max amount of ghosts per tracked creature, the speed of the ghosts (as a fraction of the tracked creature's average speed - I imagine I'll generally want the ghosts to move slower than the actual creatures as a conservative guess that's closer to the last seen coordinate is better than a wild guess) etc.

Ideally this system should also be inter-room, so that a ghost could pass through a shortcut into a neighboring room - this I haven't implemented yet. But generally I'm happy with the system. I will however add some functionality to dumb it down to work better for the slightly thicker creatures. An example would be that a dumb green lizard probably shouldn't be too clever about the fact that you can climb where it can't. Instead it should assume that you have the same limitations it has, and emulate your ghosts with its own terrain preferences. That way, when you climb up and away from the area it's searching, the proper reaction should be confusion.
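That "dumb it down" idea boils down to a one-line swap: a less clever creature filters the prey's movement options through its *own* accessibility map instead of the prey's. A hypothetical sketch, with made-up example maps:

```python
# Illustrative: ghosts of a clever creature move through tiles the PREY
# can reach; ghosts of a dumb creature are limited to tiles the TRACKER
# itself can reach, so they can't follow a slugcat up an unclimbable wall.
def ghost_options(tile, prey_accessible, own_accessible, clever):
    allowed = prey_accessible if clever else own_accessible
    x, y = tile
    return [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (x + dx, y + dy) in allowed]
```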





   JLJac on August 08, 2014, 02:16:25 AM:

Yep, it's assumed that the creatures have full knowledge of the map layouts. There are a couple of reasons for this. The player has access to the mythical third dimension, allowing them to view the entire room simultaneously, and I think seeing other creatures through walls is quite enough of a head start. Also it kind of makes sense - as the environments are static - for the creatures to know their premises. And, making the AI, path finder and tracker account for the probability of unknown tiles is a huuuuge pain programming-wise, and would most likely have no payoff gameplay-wise except making the creatures behave stupidly and unpredictably. Saving which tiles are discovered would consume massive amounts of memory as well. So, I've decided that the creatures are always aware of the terrain layout, though they are not omniscient as to where other creatures currently are on the maps.