Artificial intelligence in games (Dan's series of articles)

Sieges in an RPG computer game usually work this way: first, there is a cinematic showing how the castle walls were overcome, then a playable sequence where the player runs around the castle and kills a few enemies, and finally a boss fight with the king. Enemies appear in waves and rush at the player. They never retreat. If they do retreat or surrender, it is again a scripted cutscene, triggered when the player kills enough enemies and advances far enough.

From what we’ve heard so far, I expect KC:D to follow this pattern, maybe with a few deviations. We may, for example, be allowed to fire a trebuchet a couple of times before the “get over the castle walls” cinematic starts.

I think that if the player is not unrealistically powerful, the principle should work well in battles. We will see …

so you think there’s going to be something like a boss fight with the king in this game. please read the faqs again.

There is surely going to be a boss fight; I don’t really care with whom, that’s just a plot detail.

if the player isn’t unrealistically powerful then most bigger battles consist mostly of reloading. :wink:

1 Like

Hey Guys.
Nice to see that some people like to discuss these topics…

I wanted to share some views on this with you, just a little thing here and there.

(There are some talks we gave about these things at cons, conferences etc., and there is a PhD thesis coming along (slooowly… :blush:).)

What makes an NPC tick/think
One key component of any NPC is the decision-making mechanism (DMM). You can call it its brain. Basically, what it does is select an action to do. If the NPC is more than an automaton, it may take various inputs into account (perception, hearing, commands, internal state etc.), either external or internal. The NPC decides (“If it is rainy, go home / if it is sunny, go to the beach”). A rigid script is only a list of commands (actually the most simplistic approach).
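To make that concrete: such a decision rule, stripped to its bones, is nothing more than this little Python sketch. The inputs `weather` and `hunger` and the action names are made up here for illustration, not anything from our engine:

```python
# A minimal sketch of a reactive decision-making mechanism (DMM).
# All inputs and action names are hypothetical, for illustration only.

def decide(weather: str, hunger: float) -> str:
    """Select an action from the current external and internal inputs."""
    if hunger > 0.8:          # internal state can override perception
        return "eat"
    if weather == "rainy":    # external input
        return "go_home"
    return "go_to_beach"

print(decide("rainy", hunger=0.2))  # -> go_home
```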

So basically the NPC is doing the same thing as you are doing - looking at the world and thinking about what it can do next. (This is why we don’t call our NPCs actors, since they are not reading a script :slight_smile: )

If there are no inputs, it’s the same: the NPC is just a stupid automaton without reactions (a movie script). The decision-making mechanisms vary - e.g. finite state automata, event-based scripts, decision trees, behavioral trees, classical planning etc.

There are two big groups of such DMMs - reactive and deliberative. Reactive mechanisms only take the current situation into account, while deliberative ones try to look ahead (i.e. plan something). There are various hybrid architectures, but those are rather complicated to get working.
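A toy illustration of the split, assuming a hypothetical little world graph (again, just a sketch, not engine code): the reactive NPC maps its state straight to an action, while the deliberative one searches ahead for a path to its goal.

```python
# Reactive vs. deliberative DMMs over an invented toy world.

WORLD = {  # hypothetical places and which places are reachable from them
    "home": ["street"], "street": ["home", "tavern", "well"],
    "tavern": ["street"], "well": ["street"],
}

def reactive(state: str) -> str:
    """Reactive: map the current situation directly to an action."""
    return {"thirsty": "drink", "tired": "sleep"}.get(state, "idle")

def deliberative(start: str, goal: str) -> list[str]:
    """Deliberative: look ahead and plan a path to the goal (BFS)."""
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for nxt in WORLD[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return []

print(reactive("thirsty"))             # -> drink
print(deliberative("home", "tavern"))  # -> ['home', 'street', 'tavern']
```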

Actually, even game AI has moved beyond the “rigid” concepts into much more fluid and autonomous ones. It has long not been true that the AI cannot think on its own; it depends on how you view this trait. The only questions are how complex the thought process has to be, how “reasonable” it has to be, and how much computational power we will get to do it right :smile: In most cases, games are developed in the spirit of “let’s not spend that much time on AI, let’s make it look more shiny :slight_smile:”

Here is where the “illusion of intelligence” comes into play - we humans actually perceive only an illusion of intelligence through its demonstration, which we try to map to our own intelligence as a reference. Thus comes the believable illusion of intelligence. If you look up stuff like the Chinese Room or the Turing Test you will get the picture of what I’m talking about.

About rigid scripts
The “movie scripts” I described above, or the “player-centric design” Dan described, are one way to approach it when there is no intention to build actual artificial simulated life. Life is by definition dynamic; it changes, it adapts. And that is the key thing in a game AI - the capability to adapt.

Others
There are various ways to do it: The Sims (affordances of needs) did it very well, Black & White did it in a really cool way (actual learning), No One Lives Forever (STRIPS-like planning) was superb. We call ours the “Injected Intelligence” ™

How is it done at KC:D
It is rather complicated :wink: We even publish academic papers about it :blush:

http://artemis.ms.mff.cuni.cz/main/tiki-publications.php

Not to go into any boring technical detail: we don’t do scripts in the traditional way. Our core AI technology is built around the notion of Behavioral Trees that we modified very heavily to serve our dark needs. And by heavily I mean truly heavily. (It is based on the If-Then trees from Joanna Bryson, heavily reworked; if someone can dig up my master’s thesis, you can read up on the deep technical details of a lot of our stuff.)
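For those who have never met behavior trees: the textbook core is just two node types, selector and sequence, composed into a tree. A minimal generic sketch follows - this is the vanilla concept, not our heavily modified version, and all the leaves are invented:

```python
# Generic behavior-tree sketch: selector/sequence composition.
# Leaves (is_thirsty, drink, wander) are made-up examples.

SUCCESS, FAILURE = "success", "failure"

def sequence(*children):
    """Run children in order; fail as soon as one child fails."""
    def run(npc):
        for child in children:
            if child(npc) == FAILURE:
                return FAILURE
        return SUCCESS
    return run

def selector(*children):
    """Try children in order; succeed as soon as one child succeeds."""
    def run(npc):
        for child in children:
            if child(npc) == SUCCESS:
                return SUCCESS
        return FAILURE
    return run

def is_thirsty(npc):
    return SUCCESS if npc["thirst"] > 0.5 else FAILURE

def drink(npc):
    npc["thirst"] = 0.0
    return SUCCESS

def wander(npc):
    return SUCCESS

behave = selector(sequence(is_thirsty, drink), wander)
npc = {"thirst": 0.9}
print(behave(npc), npc)  # -> success {'thirst': 0.0}
```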

Simply put, there are several components to our AI - high-level, executional, and low-level. The low level mostly manages stuff like navigation and pathfinding, NPC-to-NPC communication, animation, data scopes etc. The execution level is a bit like a script, but not quite - it is much more complex due to its parallel nature (coroutines is the keyword if you like technical details). On top, there are planning mechanisms, either rigid or adaptive, where an NPC actually decides what to do based on its goals, tasks and needs. Finally, there is a connectivist level, which provides the NPC with relations to everything in the virtual world.
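To give a feeling for why coroutines are the keyword: with generator-style coroutines, many behaviors can be stepped “in parallel” by one scheduler, without threads. A toy Python sketch - the scheduler and the two behaviors are invented for this post:

```python
# Round-robin stepping of coroutine-style behaviors (toy example).

def walk_to(name, place, steps):
    for i in range(steps):
        yield f"{name}: walking to {place} ({i + 1}/{steps})"

def chat(name, partner, lines):
    for i in range(lines):
        yield f"{name}: chatting with {partner} ({i + 1}/{lines})"

# Step every active coroutine once per tick until all are finished:
active = [walk_to("Hans", "tavern", 3), chat("Theresa", "Hans", 2)]
while active:
    for co in list(active):
        try:
            print(next(co))
        except StopIteration:
            active.remove(co)
```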

Where is the NPC’s working place? What is its relation to the player? How much does it like to go to the tavern in Samposh? All these questions can be answered rather simply (math magic :innocent: )
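If you want to imagine the math magic: think of weighted scores over the NPC’s relations. A made-up sketch; the weights, features and numbers here are purely illustrative, not our actual formulas:

```python
# Scoring places for an NPC from a few weighted relation terms.
# Weights and features are invented for illustration.

def affinity(npc: dict, place: dict) -> float:
    """Higher score = the NPC likes going there more."""
    return (0.5 * npc["need_fun"] * place["fun"]
            + 0.3 * place["familiarity"]
            - 0.2 * place["distance"])

hans   = {"need_fun": 0.9}
tavern = {"fun": 1.0, "familiarity": 0.7, "distance": 0.3}
well   = {"fun": 0.1, "familiarity": 0.9, "distance": 0.1}
print(affinity(hans, tavern), affinity(hans, well))  # ~0.60 vs ~0.30 -> tavern wins
```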

And finally we have the Injected Intelligence - imagine it like this: your head is completely hollow and you walk around the world with the intention to do something. You are drawn to stuff (or you just plan it at a high level), like you want to work, go to a party at the tavern, etc.
Since you have the connections/relations to places and stuff, you just request something that is to your liking (a place of “fun”, for example).

But what to do at that place? … Since you are hollow, let’s fill that void.

Just ask the place “How to do Fun”. The place can tell you the executional detail (“Look for a cup, fill it up, drink it”). But how to drink? You don’t know… but you want to drink, so just ask the cup “how to pick it up and party”. Do you catch the drift? The intelligence is spread across places and things, it is hidden in various places, and the NPC just needs to ask. We call it intelligence decomposition and intelligence injection :sunny:
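In code, the injection principle could be sketched roughly like this (all the class and method names are invented for this post; the real thing is much richer):

```python
# "Injected intelligence" sketch: places and things carry the know-how,
# the hollow NPC only asks. Everything here is illustrative.

class Cup:
    def how_to(self, intent):
        if intent == "drink":
            return ["pick me up", "raise me", "drink from me"]
        return []

class Tavern:
    def __init__(self):
        self.things = {"cup": Cup()}

    def how_to(self, intent):
        if intent == "fun":
            # the place knows the recipe; a detail is delegated to a thing
            return ["look for a cup", "fill it up", ("ask", "cup", "drink")]
        return []

class HollowNPC:
    def do(self, place, intent):
        for step in place.how_to(intent):
            if isinstance(step, tuple):            # delegate to a thing
                _, thing, sub = step
                for substep in place.things[thing].how_to(sub):
                    print("  ", substep)
            else:
                print(step)

HollowNPC().do(Tavern(), "fun")
```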

And just to spark your mind - imagine that the connectivist level is adaptive, that it may change based on actions (either the player’s or the NPCs’) and events. That you can even create new connections, or get new connections by talking to other NPCs (e.g. two NPCs meet at the tavern and one tells the other about the nice old rich herbalist - aaah, let’s make a connection there, let’s head there some day to talk)
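A sketch of that adaptivity, with relations as a weighted graph that gossip can copy edges into (names and values are illustrative only):

```python
# Relations as weighted edges; a chat copies a (weaker) edge to the listener.

relations = {
    "Hans":     {"tavern": 0.8, "herbalist": 0.9},
    "Matthias": {"tavern": 0.6},
}

def gossip(teller: str, listener: str, topic: str) -> None:
    """After a chat, the listener gains a weaker connection to the topic."""
    if topic in relations[teller]:
        known = relations[listener].get(topic, 0.0)
        relations[listener][topic] = max(known, 0.5 * relations[teller][topic])

gossip("Hans", "Matthias", "herbalist")
print(relations["Matthias"])  # now also knows about the herbalist
```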

But that is only what can be told :slight_smile: there is much more dark magic beneath the hood. And by dark, I mean truly dark magic.

And just a small AI goof - the AI world sees the Player as a very simplistic NPC; the Player actually has an internal AI that integrates him into the AI world the same way the NPCs integrate… When you do something in the world as a player, you actually do it as an NPC :imp:

Respects
We have drawn most of our inspiration from The Sims, which solves a similar issue, but a liiitle bit differently.

9 Likes

Artificial intelligence in games #4: Communication problem

#Artificial Communication

Video games are often criticized for being too violent. Perhaps this is true to some extent; the question, however, is whether anything can be done about it. Can we replace the conflict, which is the main focus of most games, with something else? What about communication? People do like chit-chat, gossip and romantic TV series, so why has nobody yet turned the TV show “Surgeries from the Rose Garden” [popular Czech telenovela at the time - transl.] into a game? The answer is simple: because that would require an AI capable of keeping up an interesting conversation, and that is a problem. It probably could be done, but it is not easy.

On the internet you can get (click here, it is free) an interesting experimental game, Façade, which tries to do something like that. Old-timers may remember the title Seaman for the Dreamcast, where you could grow a strange creature in a virtual aquarium and then communicate with it via a microphone. The Internet is littered with all kinds of chat bots, various attempts at passing the Turing test, which can be quite tolerable discussion partners on various topics and are even able to learn.

Creating an NPC that could engage in small talk, so the player could get information some other way than by clicking one of two offered conversation options, seems possible. But how would such a game be played? Would the player be required to type, or to speak into a microphone? Typing is not fun for the masses, but speaking into a microphone is not without issues either. Speech recognition has made significant advances, yet it is still not an ideal way to communicate with a game.

Screenshot from the interactive communication game Façade

First of all, in a large family, a player occupying the living room and shouting at the TV would probably not be received with great enthusiasm. Communication in the opposite direction is a problem as well. Nowadays we are used to NPCs speaking with the voices of professional actors; an AI that creates its utterances on its own would have to rely on speech synthesis, and in the better case it would sound like a GPS navigator. Speech synthesis is not something new, after all. Even the Amiga 500 had it in its operating system, and in the game Valhalla (the creators continue making new episodes in the original spirit, and as news, they are now available for PC - more here) all the characters talked purely through speech synthesis as early as 1994.

Finally, we have to expect issues with the content itself. If the AI can get stuck moving from one place to the next, what would it be like if it got stuck in a conversation? Using an independent AI in, say, the detective game L.A. Noire would be a step back in almost all respects. Instead of realistically performing actors whose moods we read from their facial expressions, we could freely chat with a GPS navigator, a voice synthesizer without emotions. It would sometimes say complete nonsense, or in the middle of a discussion ask: “Could you please repeat the question? I did not catch what you’ve just said.”

Screenshots from playing the original Amiga version of the first Valhalla game: Valhalla & The Lord of Infinity

#Uncanny valley or The problem with robotic people

What is a game, anyway? The Dutch historian Johan Huizinga offers this definition: “A game is a free action or occupation which is governed by voluntarily accepted but absolutely binding rules, has its goal in itself, and brings a feeling of tension and joy, along with the awareness of being different from everyday life.”

The last part is extremely important. We like to play games precisely because they are games: they are easier than real life, they give us opportunities to become a hero for a while that we would not get in real life, and to make that possible, they help us and shield us from the parts that are not fun. People who play LARPs know that even the human players of the characters have to behave predictably and simply, so that the game develops as everybody wants it to and remains fun.

Improvements in AI bring up another phenomenon, called the “uncanny valley.” The expression comes from robotics, coined by Masahiro Mori, and it describes the moment when a robot’s resemblance to a human becomes nearly perfect. Instead of being excited, people become repulsed, because they stop judging the replica by robot standards and start expecting it to look and behave like the original, a human. Only after the uncanny valley is traversed, and the copy is truly indistinguishable from the original, will people accept the robot.

#Turing test performed by AI

This holds not only for robotics but for most human activities that aim to create a replica, including computer graphics and artificial intelligence. A stylized drawing by Josef Lada is more pleasing than a daub trying to resemble a photograph. Game graphics are approaching this phase, and AI will get there too.

As long as we keep in mind that the NPC is JUST an NPC made of polygons, which cannot think and is there just to show us something, everything is fine. The moment it starts to look too real and starts to think on its own, we will start finding mistakes. It has strange teeth, its facial muscles do not move right, it says complete nonsense, we don’t like its articulation… it is a rubber idiot.

One of David Cronenberg’s older films, eXistenZ, is a beautiful projection of what games may one day become and what that might lead to. The heroes enter a flawless virtual reality and see how eerie it looks when an NPC’s AI breaks down while the NPC looks indistinguishably human, or how disgusting it is to shoot someone’s head off in reality, when it is not stylized visuals on a screen but blood splashing right onto you.

A nice demonstration of the uncanny valley is the well-known 2006 E3 demo The Casting, which the developers at Quantic Dream later extended into the fine adventure Heavy Rain. Back then we thought it was amazing, but nowadays even ordinary NPCs look better during gameplay, not just in cutscenes.

#Thought for the day

And what is Father Fura’s thought? AI is not just about performance, and the advent of new hardware is definitely not going to automatically give us games with super-realistically behaving NPCs. When those do come, we will realize that this is not what we want, that we prefer the stylized idiots who say only what they need to say and then disappear.

Progress will not stop, of course. The challenges are not so much in hardware performance; they are mostly game design problems: how to implement intelligent NPCs in a game, how to communicate with them, and how associated fields like speech synthesis develop. I dare say all of that is going to take quite a while. Next-generation games will probably give us NPCs who behave slightly more realistically in combat, and there may be more of them in the scene at one moment than before. To deliver that, you don’t need ingenious programmers, but rather good design and powerful hardware, because it is not actually AI in the true sense of the word.

The era of super mega ultra realistic AI will probably only start when we do away with screens and the computer “projects” virtual reality directly into our brains. It may sound like science fiction, but I believe we have a real chance to see it in the next twenty years. I’m sure it will be amazing, but the question is whether we will end up like the people in the Matrix.

If you are only now starting to read this series of articles about AI in games, be sure to read the first, second and third parts, and then come back to this fourth one.

2 Likes

Another reason is probably that the word “actor” in computing is already taken to mean something else.

In the comments under the original article there was much talk about this part. People mostly disliked this last section of the article. I think the idea should have gotten more space and been properly introduced.

The way I see it, the difference between books, films and games is this: moving from books to films, the mantra among creators is “show, don’t tell”; moving from films to video games, we say “don’t show it, let the player do it”. Writing Lord of the Rings the Book was fairly inexpensive. There was just one guy doing it. Well, he spent decades on it, but he was also doing other things in the meantime. Filming Lord of the Rings the Movie cost a lot more, because much more has to be delivered to the viewer: moving pictures instead of words and ideas. The ultimate goal in making good-looking and cheap video games would be to get the player’s brain to work out the graphics for us from minimal inputs. Put in a “book” and let the player see the “movie”, without the usual movie production costs.

Being on a meager laptop running Linux, I have seen the Alpha only in YouTube videos and know only as much as I have read here on the forum.

I really like the concept you are describing, but from what I’ve read around here it seems that just the few NPCs in tiny Samopše are frying contemporary CPUs quite thoroughly. This makes me a bit afraid of whether it will be possible to use the idea in the final game, with the whole map and hundreds (thousands) of NPCs, especially on consoles. Could you shed some light on the computing requirements of the AI you are producing? Is this an issue that is also on your mind, or do you believe it won’t really be a problem?

actually it is an inside joke, more or less. but it’s not related to the architectural part of the coding, more to throwing grenades

about the Uncanny Valley and AI in games - imho it is rather problematic to apply here, since the issue comes from real-life robotics, where you perceive the subject of validation in reality. in games, you have the virtuality of the game as a “different optic” to use.

I think computers as we know them will never be capable of projecting stuff into our brains, since the complexity of such projections is beyond the computable means of any near future.

yea, we know about it, and it is not an issue of how many there are, but of how scheduling and LOD are done. I cannot talk about numbers, since I don’t have any recent profilings done, but we have some tricks up our sleeve with respect to more complex scheduling and less active, more passive approaches to some parts of the AI. We cannot put everything into production at once, due to the fact that it has to be debugged thoroughly, and more crazy shit means more crazy debugging.

5 Likes

I could imagine that not every NPC needs its own complete intelligence, since in this case the “human NPCs” are all biologically similar in nature. “Male”, “female”, “child/teenager” might involve levels of age and differences in intellect. Imagine (sorry, I’m a layman) a “home library” from which the NPCs fetch just the appropriate/necessary thought processes. The “home library” would, however, assume gigantic proportions. :worried:
Especially since only the NPCs in the player’s immediate environment would need these routines. Similarly, scripted behavioral reactions might only be necessary where they can be observed by the player…? In theory, bare number-crunching would be sufficient for non-visible areas.
Very interesting topic!!! Thank you for your infos!

actually, the intelligence can be spread into the environment to mitigate the “gigantic proportions”, and the various behaviors can be context-specific - e.g. doing something in an area can differ for males/females/soldiers etc., but conceptually it is still the same thing - e.g. cooking.

a lot of the “what is done” can be tied to the data, i.e. animations, action generalisms (movement, combat etc.). thus, for example, a lot of the logic can be bound to the data and, again, can be context-dependent

an example - the animation of opening a door has events for when the door is to be opened (neglect the fact that there are multiple animations - one on the character, the other on the door). the actual opening of the door is stored in the door and is executed as a subpart of traversing the door. secondly, the door logic spawns within the NPC, where it manifests as “the NPC’s idea of opening a door”. the NPC can contextually choose a sub-subpart of the logic - e.g. for a soldier in combat, opening a door is “kicking the door in”. within the actual animations there is a part where the door opens, and again, this may have various effects - e.g. kicking the door in will unlock it because the lock breaks, while lockpicking will not break the door’s lock

so in effect, you can build parts - or atoms - of the logical world, and they combine into more complex stuff.
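if you want a toy sketch of the door example: the door stores the logic atoms, and the context picks the variant (the class and method names here are invented for this post, not engine code):

```python
# A "logic atom" stored in the door, with context-specific variants.

class Door:
    def __init__(self):
        self.locked = True
        self.open = False

    def traverse_logic(self, context: str):
        """The door hands out 'the idea of opening a door' per context."""
        return {"combat": self.kick_in, "stealth": self.lockpick}.get(
            context, self.open_normally)

    def kick_in(self):
        self.locked = False      # the lock breaks
        self.open = True

    def lockpick(self):
        self.locked = False      # the lock survives, it is just unlocked
        self.open = True

    def open_normally(self):
        if not self.locked:
            self.open = True

door = Door()
door.traverse_logic("combat")()   # a soldier in combat kicks the door in
print(door.open, door.locked)     # -> True False
```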

however, the downside is that it is pretty hard to make any Level of Detail logic - if the player is far away, you have to avoid complex computations (e.g. skeleton animations, complex vector logic etc.) and only apply the specific effects - e.g. kicking in the door makes the door open, the lock is broken, and the NPC ends up at a specific position behind the door.

but this can also be solved; it is just a little bit more complicated to annotate or analyze.
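a toy sketch of what such AI LOD could look like: near the player run the full logic, far away apply only the end effects, so the world state stays consistent either way (the cutoff value and helpers are invented):

```python
# AI level-of-detail sketch: full simulation near the player,
# effect-only application far away. Purely illustrative.
from types import SimpleNamespace

LOD_DISTANCE = 30.0  # assumed cutoff, invented for this sketch

def play_animation(npc, name):   # stand-in for a real animation system
    print(f"{npc.name} plays '{name}'")

def open_door(door, npc, distance_to_player):
    if distance_to_player < LOD_DISTANCE:
        play_animation(npc, "kick_door")   # full simulation near the player
    # far or near, the resulting world state must be the same:
    door.locked, door.open = False, True
    npc.position = door.behind_position

door = SimpleNamespace(locked=True, open=False, behind_position=(1, 0))
npc = SimpleNamespace(name="soldier", position=(0, 0))
open_door(door, npc, distance_to_player=80.0)  # far: effects only, no animation
print(door.open, npc.position)                 # -> True (1, 0)
```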

anyhow - some more numbers for the 0.5 version of the (Inception) AI Engine - the number of active NPCs went up another 25%, supportive mechanisms went up by 30%.
optimizations, logic passivation etc. made the AI code about 30-40% faster (it’s an observational guess based on the fps)

we added perception, which is very computationally intensive but nevertheless well written.

we still have a lot to optimize and schedule, plus some threading work and async stuff to do.

6 Likes