I have spent a great deal of time researching state-space search, and the A* algorithm in particular. Using A* to find valid paths through simulated or virtual environments has become popular and something of a standard, and it has become part of giving artificial intelligence a better spatial understanding of its surroundings.
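To ground the discussion, here is a minimal sketch of A* over a uniform 2D grid. Everything in it, from the grid representation to the Manhattan heuristic and the function names, is my own placeholder rather than code from any particular engine.

```cpp
// Minimal A* over a uniform 2D grid (placeholder names, not taken from any engine).
#include <algorithm>
#include <cstdlib>
#include <functional>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>

struct Cell { int x, y; };

// Pack a cell into a single integer key so it can live in hash maps.
static int Key(const Cell& c, int width) { return c.y * width + c.x; }

// Admissible heuristic for a 4-connected grid: Manhattan distance.
static float Heuristic(const Cell& a, const Cell& b) {
    return float(std::abs(a.x - b.x) + std::abs(a.y - b.y));
}

// Returns the path from start to goal (inclusive), or an empty vector if none exists.
// 'blocked' marks impassable cells; every step costs 1.
std::vector<Cell> FindPath(const std::vector<bool>& blocked, int width, int height,
                           Cell start, Cell goal) {
    using Entry = std::pair<float, int>;  // (f-score, cell key), lowest f popped first
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    std::unordered_map<int, float> gScore;  // best known cost to reach each cell
    std::unordered_map<int, int> cameFrom;  // cell key -> predecessor key

    const int startKey = Key(start, width);
    gScore[startKey] = 0.0f;
    open.push({Heuristic(start, goal), startKey});

    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};

    while (!open.empty()) {
        const int currentKey = open.top().second;
        open.pop();
        const Cell current{currentKey % width, currentKey / width};

        if (current.x == goal.x && current.y == goal.y) {
            // Reconstruct the path by walking the predecessor chain backwards.
            std::vector<Cell> path{current};
            int key = currentKey;
            while (cameFrom.count(key)) {
                key = cameFrom[key];
                path.push_back({key % width, key / width});
            }
            std::reverse(path.begin(), path.end());
            return path;
        }

        for (int i = 0; i < 4; ++i) {
            const Cell next{current.x + dx[i], current.y + dy[i]};
            if (next.x < 0 || next.y < 0 || next.x >= width || next.y >= height) continue;
            const int nextKey = Key(next, width);
            if (blocked[nextKey]) continue;

            const float tentative = gScore[currentKey] + 1.0f;
            const auto it = gScore.find(nextKey);
            if (it == gScore.end() || tentative < it->second) {
                gScore[nextKey] = tentative;
                cameFrom[nextKey] = currentKey;
                open.push({tentative + Heuristic(next, goal), nextKey});
            }
        }
    }
    return {};  // the open list ran dry: no path exists
}
```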
The main challenges involve properly informing those searches and processing them efficiently and effectively, which depends on the resolution and detail of the space, how much awareness is needed, and how frequently it is needed.
Like a lot of other things in artificial intelligence, this ultimately revolves around immersion in interactive media such as simulations and games.
Looking at pathfinding solutions in simulations and games, it is obvious that many applications throttle these complex searches to accommodate the other systems that run at the same time. Searches only need to be processed on demand, or at whatever frequency is necessary to avoid breaking immersion with the AI being interacted with.
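One common shape for that kind of throttling is a request queue with a per-frame budget: systems submit path requests on demand, and a scheduler services only a few of them each frame. The sketch below is hypothetical, with the class names, callback, and budget numbers being my own, and it elides the actual search.

```cpp
// Hypothetical per-frame budgeting of path requests (names and numbers are my own).
#include <cstdio>
#include <deque>
#include <functional>
#include <utility>

struct PathRequest {
    int agentId;
    std::function<void(int)> onComplete;  // invoked with a result code when serviced
};

class PathRequestQueue {
public:
    // Systems enqueue requests whenever they need a path ("on demand").
    void Submit(PathRequest request) { pending_.push_back(std::move(request)); }

    // Called once per frame: service at most 'budget' requests so the rest of the
    // frame (rendering, physics, gameplay) keeps its share of processing time.
    void Update(int budget) {
        while (budget-- > 0 && !pending_.empty()) {
            PathRequest request = std::move(pending_.front());
            pending_.pop_front();
            // A real system would run (or resume) the A* search here.
            request.onComplete(/*resultCode=*/0);
        }
    }

private:
    std::deque<PathRequest> pending_;
};

int main() {
    PathRequestQueue queue;
    for (int agent = 0; agent < 5; ++agent) {
        queue.Submit({agent, [agent](int) { std::printf("agent %d got a path\n", agent); }});
    }
    // Two simulated frames, each willing to spend time on at most two requests;
    // the fifth request simply waits for a later frame instead of stalling this one.
    queue.Update(2);
    queue.Update(2);
    return 0;
}
```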
Not all applications require massively detailed spaces or high numbers of AI entities that each need individual path maneuvering, but building systems that scale to those cases is exactly what I am interested in.
Informing the algorithm with better spatial data is important because it can mean fewer steps during the search. Over time these solutions have converged on the now-standard term of navigation meshes: geometry that provides a geometric understanding of the spaces being searched.
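As a rough illustration, here is what a navmesh-style search node and its heuristic might look like. The types and names are my own placeholders, and I am assuming edge costs equal the straight-line distances between adjacent polygon centers so that the heuristic stays admissible.

```cpp
// Sketch of a navigation-mesh style search node (all names are my own placeholders).
// Instead of many tiny grid cells, each node is a walkable polygon, so the graph is
// far smaller and the heuristic below stays both cheap and informative.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct NavPolygon {
    Vec3 center;                 // representative point used by the heuristic
    std::vector<int> neighbors;  // indices of polygons sharing a traversable edge
};

// Straight-line distance between polygon centers. Assuming edge costs are the
// distances between adjacent polygon centers, the triangle inequality keeps this
// admissible, so A* stays optimal while expanding far fewer nodes than an
// uninformed search (h = 0) over the same graph would.
inline float NavHeuristic(const NavPolygon& node, const NavPolygon& goal) {
    const float dx = goal.center.x - node.center.x;
    const float dy = goal.center.y - node.center.y;
    const float dz = goal.center.z - node.center.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```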
Architectural systems such as message or event systems can trigger a search. When a search is running, A* has the benefit of proceeding in incremental iterations, so it can pause or suspend itself on any iteration mid-search if the total computational cost is heavy enough to impede other operations. Splitting an expensive search across many real-time interactive frames solves the problem of dispatching it at all, but it also creates a dependency on the other systems that run frame to frame. That in turn limits how large a search can be squeezed in alongside everything else while still finishing quickly enough.
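Here is a sketch of what I mean by incremental iterations: the open list lives in member state, so a Step() call can expand a bounded number of nodes and then hand control back to the frame. The class and its names are my own, not taken from any engine.

```cpp
// Hypothetical time-sliced A*: the open list lives in member state so the search can
// be advanced a few expansions at a time and suspended in between (names are my own).
#include <cstdlib>
#include <functional>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>

enum class SearchStatus { Running, Found, Failed };

class SlicedAStar {
public:
    SlicedAStar(const std::vector<bool>& blocked, int width, int height,
                int start, int goal)
        : blocked_(blocked), width_(width), height_(height), goal_(goal) {
        gScore_[start] = 0.0f;
        open_.push({Heuristic(start), start});
    }

    // Expand at most 'maxExpansions' nodes, then hand control back to the caller.
    // Call again next frame (or whenever there is spare time) until it stops Running.
    SearchStatus Step(int maxExpansions) {
        while (maxExpansions-- > 0) {
            if (open_.empty()) return SearchStatus::Failed;
            const int current = open_.top().second;
            open_.pop();
            if (current == goal_) return SearchStatus::Found;

            const int dx[4] = {1, -1, 0, 0};
            const int dy[4] = {0, 0, 1, -1};
            for (int i = 0; i < 4; ++i) {
                const int nx = current % width_ + dx[i];
                const int ny = current / width_ + dy[i];
                if (nx < 0 || ny < 0 || nx >= width_ || ny >= height_) continue;
                const int next = ny * width_ + nx;
                if (blocked_[next]) continue;
                const float tentative = gScore_[current] + 1.0f;
                const auto it = gScore_.find(next);
                if (it == gScore_.end() || tentative < it->second) {
                    gScore_[next] = tentative;
                    cameFrom_[next] = current;
                    open_.push({tentative + Heuristic(next), next});
                }
            }
        }
        return SearchStatus::Running;  // budget spent; resume on a later call
    }

    // Walk the predecessor chain once Step() has reported Found.
    std::vector<int> ExtractPath() const {
        std::vector<int> path{goal_};
        int node = goal_;
        while (cameFrom_.count(node)) {
            node = cameFrom_.at(node);
            path.push_back(node);
        }
        return std::vector<int>(path.rbegin(), path.rend());
    }

private:
    // Manhattan distance between grid cells identified by their packed keys.
    float Heuristic(int node) const {
        return float(std::abs(node % width_ - goal_ % width_) +
                     std::abs(node / width_ - goal_ / width_));
    }

    using Entry = std::pair<float, int>;  // (f-score, packed cell key)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open_;
    std::unordered_map<int, float> gScore_;
    std::unordered_map<int, int> cameFrom_;
    const std::vector<bool>& blocked_;
    int width_, height_, goal_;
};
```

A game loop could then call something like search.Step(64) once per frame, spending spare frame time on the search and only consuming the path once the status comes back Found.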
Threaded searches are something I explored to contend with the limits above. When a computationally expensive search is needed, being able to rely on a dedicated thread, especially on multi-core systems, is greatly beneficial, and the idea of throttling activity between iterations of the search can still be used so that other threads get to operate. In essence, an expensive A* search is put on its own thread with a limit on how much time it can be allocated; if the search does not finish within that limit, it can be resumed whenever desired. Another benefit is that path re-planning is sometimes necessary mid-search. In that event the dedicated A* thread can be reset immediately to accommodate the change, instead of waiting for a now-invalid search to finish. Dynamic changes to the environment are the usual reason a valid path has to be searched for again, along with interactive changes such as human user input that requires an AI response.
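A rough sketch of that dedicated-thread idea, using std::thread, std::atomic, and std::chrono. The class, the DoOneExpansion() placeholder, and the budget values are all mine, and the actual A* step is elided.

```cpp
// Hypothetical dedicated search thread with a time budget and an immediate reset.
// The class, the DoOneExpansion() placeholder, and the numbers are all my own.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

class SearchWorker {
public:
    // Launch the search on its own thread with a cap on how long it may run
    // before voluntarily suspending, so other threads keep getting CPU time.
    void Start(std::chrono::milliseconds budget) {
        cancel_ = false;
        worker_ = std::thread([this, budget] { Run(budget); });
    }

    // Called when the environment changed and the in-flight result is now invalid:
    // the worker notices the flag on its next iteration instead of finishing a
    // search nobody wants anymore.
    void Cancel() {
        cancel_ = true;
        if (worker_.joinable()) worker_.join();
    }

    ~SearchWorker() { Cancel(); }

private:
    void Run(std::chrono::milliseconds budget) {
        const auto deadline = std::chrono::steady_clock::now() + budget;
        while (!cancel_) {
            // One A* expansion (open-list pop plus neighbor pushes) would go here.
            const bool finished = DoOneExpansion();
            if (finished) break;
            if (std::chrono::steady_clock::now() >= deadline) {
                // Budget exhausted: a real system would park the open list so the
                // search can be resumed later instead of discarding the work.
                std::puts("search suspended, will resume when rescheduled");
                break;
            }
            std::this_thread::yield();  // be polite to other threads between iterations
        }
    }

    bool DoOneExpansion() {
        // Stand-in for the incremental A* step from the previous sketch.
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
        return false;
    }

    std::atomic<bool> cancel_{false};
    std::thread worker_;
};

int main() {
    SearchWorker worker;
    worker.Start(std::chrono::milliseconds(5));  // allocate 5 ms to the search
    std::this_thread::sleep_for(std::chrono::milliseconds(2));
    worker.Cancel();  // e.g. the environment changed and a re-plan is needed
    return 0;
}
```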
At the other extreme, executing many smaller searches has roughly the same problem on a single thread as one really complex search. Dedicating an A* thread to each individual search at least has the potential to spread CPU resources evenly among the searches executing simultaneously. The primary concern is the context switching the scheduler incurs across many threads. That could be combated if the complexity of each search were known ahead of time, so the work could be partitioned evenly across the maximum number of threads the remaining hardware resources support. However, it is more likely that the complexity of any given search will not be known ahead of time.
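One way I can see to soften the scheduling concern without knowing each search's cost ahead of time is a fixed-size pool where idle workers simply claim the next unclaimed search: cheap searches free their worker quickly, so the load balances itself. The sketch below uses my own names and a placeholder where the real search would run.

```cpp
// Hypothetical fixed-size pool for many small searches: capping the thread count at
// the hardware's concurrency bounds context-switch overhead even though each
// individual search's complexity is unknown up front (names are my own).
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

struct SearchJob {
    int agentId;
    // In a real system this would carry start/goal data and receive the resulting path.
};

void RunSearch(const SearchJob& job) {
    // Placeholder for an actual A* invocation of unknown cost.
    std::printf("finished search for agent %d\n", job.agentId);
}

void RunAllSearches(const std::vector<SearchJob>& jobs) {
    const unsigned hinted = std::thread::hardware_concurrency();
    const unsigned threadCount = hinted == 0 ? 2 : hinted;  // the hint can legally be 0

    std::atomic<size_t> nextJob{0};
    std::vector<std::thread> workers;
    workers.reserve(threadCount);

    for (unsigned t = 0; t < threadCount; ++t) {
        workers.emplace_back([&] {
            // Each worker pulls the next unclaimed job; cheap searches finish fast and
            // the worker immediately grabs another, so the load balances itself without
            // knowing any job's cost ahead of time.
            for (size_t i = nextJob.fetch_add(1); i < jobs.size(); i = nextJob.fetch_add(1)) {
                RunSearch(jobs[i]);
            }
        });
    }
    for (std::thread& worker : workers) worker.join();
}

int main() {
    std::vector<SearchJob> jobs;
    for (int agent = 0; agent < 16; ++agent) jobs.push_back({agent});
    RunAllSearches(jobs);
    return 0;
}
```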
I’ll publish more updates on this research and its conclusions…
To be continued