Abstract
Agents have been applied in a wide variety of fields, including power systems and spacecraft. Belief-Desire-Intention (BDI) agents, one of the most widely used and studied agent architectures, have the advantage of being able to pursue multiple goals in parallel. Deciding what to do next at each deliberation cycle is therefore critical for BDI agents; this is known as the intention progression problem (IPP). Most existing approaches to the IPP overlook the value of historical data gathered at run time, limiting the adaptability and decision-making capabilities of agents. In this paper, we propose incorporating online learning into SA, the current state-of-the-art intention progression approach, to overcome these limitations. Doing so prevents SA from spending computational resources on ineffective and inefficient simulations and improves the execution efficiency of the agent; in large-scale problem domains, this markedly enhances the agent's planning capability. Specifically, we propose the SAQ and SAL schedulers, both of which learn, from historical simulation data at run time, how to generate reasonable rollouts during the simulation phase of Monte-Carlo tree search (MCTS). We compare the performance of our approach with that of SA in a range of scenarios of increasing difficulty. The results show that our approaches outperform SA, both in the number of goals achieved and in the computational overhead required.
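The abstract does not spell out how SAQ and SAL work internally, but the core idea it describes, biasing MCTS rollouts with value estimates learned online from earlier simulations rather than rolling out uniformly at random, can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the Q-learning-style update, the epsilon-greedy action choice, and the environment interface (`actions`, `step`) are all assumptions made for the example.

```python
# Minimal sketch (not the paper's code): an epsilon-greedy, Q-guided rollout
# policy for the simulation phase of MCTS. Historical simulation data is
# folded back into a tabular value estimate after every rollout, so later
# simulations in the same run are biased toward promising actions.
# States and actions are assumed to be hashable; the env interface is assumed.
import random
from collections import defaultdict

class QGuidedRollout:
    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.2):
        self.q = defaultdict(float)   # (state, action) -> learned value
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration rate

    def choose(self, state, actions):
        # Explore with probability epsilon (plain random rollout behaviour),
        # otherwise exploit the value estimates learned from past rollouts.
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(state, a)])

    def simulate(self, state, env, max_depth=50):
        # One rollout from `state`. `env` is assumed to expose actions(s) and
        # step(s, a) -> (next_state, reward, done); the rollout's discounted
        # return is what MCTS would back up through the tree.
        trajectory, total, discount = [], 0.0, 1.0
        for _ in range(max_depth):
            actions = env.actions(state)
            if not actions:
                break
            action = self.choose(state, actions)
            next_state, reward, done = env.step(state, action)
            trajectory.append((state, action, reward, next_state))
            total += discount * reward
            discount *= self.gamma
            state = next_state
            if done:
                break
        self._update(trajectory, env)
        return total

    def _update(self, trajectory, env):
        # One-step Q-learning backup over the finished rollout, processed in
        # reverse so rewards propagate backwards within a single update pass.
        for s, a, r, s2 in reversed(trajectory):
            best = max((self.q[(s2, b)] for b in env.actions(s2)), default=0.0)
            target = r + self.gamma * best
            self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])
```

The design point the sketch illustrates is the one the abstract claims: because the rollout policy improves as simulation data accumulates, fewer simulations are wasted on unpromising branches, which is where the reported savings in computational overhead would come from.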
Original language | English |
---|---|
Pages (from-to) | 56400-56413 |
Number of pages | 14 |
Journal | IEEE Access |
Volume | 12 |
DOIs | |
Publication status | Published - 2024 |
Keywords
- BDI agents
- Monte-Carlo tree search
- intention progression problem
- online learning
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering