Creating a novel adaptive AI for a Real-Time Fighting game

dc.contributor.author Muti, Abdullah Al
dc.contributor.author Khan, Iftikhar Imrul
dc.contributor.author Zaz, Taszid I
dc.date.accessioned 2025-03-11T05:36:19Z
dc.date.available 2025-03-11T05:36:19Z
dc.date.issued 2024-06-27
dc.identifier.uri http://hdl.handle.net/123456789/2378
dc.description Supervised by Mr. Md. Nazmul Haque, Assistant Professor, Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur, Bangladesh. This thesis is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Software Engineering, 2024 en_US
dc.description.abstract AI has transformed how we play, bringing both online and offline gaming to exciting new levels. While progress has been made in many areas, research in game AI has not kept pace with other fields. In this research, we create an adaptable AI for video games that learns in real time from both the environment and its opponents. Our pipeline consists of two major modules: a Monte Carlo Tree Search (MCTS) module for decision-making and a machine learning module that manages the action list. We use reinforcement learning, specifically Q-learning, to dynamically adjust the action list, which boosts the AI's performance. Our results show that this hybrid method not only improves the AI's speed and efficacy but also helps it react in real time, mimicking human strategic behavior. Experiments under limited resources show that our approach works, achieving high win rates against a variety of AI opponents. Our approach provides insight into combining MCTS and reinforcement learning for real-time adaptive AI in gaming scenarios. en_US
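The abstract describes Q-learning being used to re-rank the action list consumed by the MCTS module. The sketch below is a minimal, hypothetical illustration of that idea, not the thesis's actual implementation: the class name, state encoding, and fighting-game actions ("punch", "kick", etc.) are all assumptions for demonstration.

```python
import random
from collections import defaultdict

# Hypothetical sketch: Q-learning maintains a value per (state, action);
# the action list handed to MCTS expansion is then re-ranked by those values.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

class ActionListLearner:
    def __init__(self, actions):
        self.actions = list(actions)
        self.q = defaultdict(float)  # (state, action) -> learned value

    def choose(self, state):
        # Epsilon-greedy selection over the current action list.
        if random.random() < EPSILON:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + GAMMA * best_next
        self.q[(state, action)] += ALPHA * (td_target - self.q[(state, action)])

    def ranked_actions(self, state, top_k=None):
        # Action list for the MCTS module, ordered by learned value.
        ranked = sorted(self.actions,
                        key=lambda a: self.q[(state, a)], reverse=True)
        return ranked[:top_k] if top_k else ranked

learner = ActionListLearner(["punch", "kick", "guard", "jump"])
learner.update("close", "punch", reward=1.0, next_state="close")
print(learner.ranked_actions("close")[0])  # "punch" now ranks first
```

In a hybrid pipeline like the one the abstract outlines, `ranked_actions` would feed the MCTS expansion step, so the search budget concentrates on moves that have recently paid off against the current opponent.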
dc.language.iso en en_US
dc.publisher Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur-1704, Bangladesh en_US
dc.title Creating a novel adaptive AI for a Real-Time Fighting game en_US
dc.type Thesis en_US

