Based on the Word2Vec work by Tomas Mikolov and colleagues at Google, in my second-year project in Unreal Engine 5 I developed a novel replacement for AI decision-making systems that I call "Vector-based Decision Making", or VDM for short. The system stands on its own as a replacement for UE5's default decision-making system, behaviour trees. Its effectiveness against that standard alternative is yet to be measured, but it added much-needed nuance to the project's non-player characters and earned the project nearly full marks.
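To give a flavour of how VDM works, here's a stripped-down sketch (a simplification written for this post, not the actual project code, so every name here is illustrative): each candidate action carries a weight vector over shared context features, and whichever action's weights align best with the current context wins.

```cpp
#include <array>
#include <cstddef>
#include <string>
#include <vector>

// Illustrative only: each action holds a weight vector over the same
// context features (e.g. distance to player, own health, ammo, cover).
constexpr std::size_t kFeatureCount = 4;

struct Action {
    std::string Name;
    std::array<float, kFeatureCount> Weights; // hand-tuned in the original VDM
};

// Score an action as the dot product of its weights with the current context.
float ScoreAction(const Action& Act, const std::array<float, kFeatureCount>& Context) {
    float Score = 0.0f;
    for (std::size_t i = 0; i < kFeatureCount; ++i) {
        Score += Act.Weights[i] * Context[i];
    }
    return Score;
}

// Pick the highest-scoring action for the current context.
// Assumes Actions is non-empty.
const Action& ChooseAction(const std::vector<Action>& Actions,
                           const std::array<float, kFeatureCount>& Context) {
    const Action* Best = &Actions.front();
    for (const Action& Candidate : Actions) {
        if (ScoreAction(Candidate, Context) > ScoreAction(*Best, Context)) {
            Best = &Candidate;
        }
    }
    return *Best;
}
```

An NPC choosing between, say, attack, take-cover and flee actions would simply feed its current context through ChooseAction on each decision tick.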
Coming into my final year, it was time to decide upon a project - an easy choice, given my vested interest and already-established tool, VDM. The pre-existing tool, while limited, had great scope for expansion. As a system built on hard-coded weighting, it could be adapted for machine learning. Alternatively, I could benefit from creating a metric to measure the quality of the decision-making system itself. After a discussion with my supervisor, it was decided that I would focus on implementing machine learning methods into the system, then evaluate the performance of the old system against the new!
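As a rough illustration of what that machine learning adaptation could look like - one simple possibility among many, continuing the sketch above - the hand-tuned weights could instead be nudged by a reward signal after each outcome:

```cpp
// Continuing the hypothetical sketch above: instead of hand-tuning, nudge
// the chosen action's weights towards contexts that led to good outcomes.
// "Reward" is an assumed scalar feedback signal, e.g. derived from whether
// the NPC survived the encounter.
void UpdateWeights(Action& Chosen,
                   const std::array<float, kFeatureCount>& Context,
                   float Reward, float LearningRate = 0.01f) {
    for (std::size_t i = 0; i < kFeatureCount; ++i) {
        Chosen.Weights[i] += LearningRate * Reward * Context[i];
    }
}
```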
Evaluating the two systems will require user testing: inviting participants to play a portion of a game designed specifically for the test, one that involves interacting with the AI.
What makes an NPC's AI good? Is it the AI's competency? Is it how realistically it acts? Maybe it's how nuanced it is? These are all, in part, factors in what constitutes a good AI, but they're just the tip of the iceberg. Categorically defining 'good' in terms of AI could be its own project entirely, and I don't have that kind of time! So, we must come up with our own definition for now.
A 'good' AI is the perfect blend: immersive and capable, while not being so challenging that it tips into unfairness - ultimately, enjoyable. That sounds good. These will be the criteria by which the study's participants judge the AI systems.
Time to develop a survey! We know what we want our AI to be (enjoyable, not too hard, immersive), so we can build our survey around how participants rate our game demo against these variables. That's a lot of waffle to say: we want to know how hard they found the AI to play against, how fun it was to play against, and how realistically it acted.
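As a sketch of what we'd be collecting (the field names and the 1-5 scale here are illustrative assumptions, not the final survey), each participant's record might look something like this:

```cpp
#include <cstdint>
#include <vector>

// Illustrative only: one record per participant, per AI system tested.
struct SurveyResponse {
    std::uint8_t Difficulty; // 1-5: how hard was the AI to play against?
    std::uint8_t Enjoyment;  // 1-5: how fun was it to play against?
    std::uint8_t Realism;    // 1-5: how realistically did it act?
};

std::vector<SurveyResponse> Responses;
```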
It's worth bringing up difficulty. This is, to put it lightly, a difficult subject. We can ask a participant "how difficult did you find this?", but that doesn't account for several factors that weigh into how we perceive difficulty. I can come up with three factors we aren't even considering: the player's mechanical skill, their tactical ability, and how adaptable they are.
Thus, I am going to gather this information from each participant: how they perceive their mechanical skill, how they perceive their tactical ability, and how adaptable they think they are. This all relies on their own perceptions, so there is some bias, but as this is a discussion of 'perceived difficulty', that shouldn't be a problem. With this information, we could make an equation... but we'll get onto that later.
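Purely to give a flavour of the shape such an equation might take (this exact form is an assumption, not the final formula), reported difficulty could be normalised by the average of those three self-ratings:

```latex
D_{\text{perceived}} = \frac{d}{\tfrac{1}{3}(s_m + s_t + s_a)}
```

where d is the participant's reported difficulty, and s_m, s_t and s_a are their self-rated mechanical skill, tactical ability and adaptability.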