Cooperative patrol routing: Optimizing urban crime surveillance through multi-agent reinforcement learning

Publisher

Elsevier

Abstract

Patrolling can be defined as the act of visiting locations of interest at regular intervals for surveillance, control, protection, or monitoring purposes. The effective design of patrol strategies is a difficult and complex problem, especially in medium and large areas. The objective is to plan, in a coordinated manner, the optimal routes for a set of patrols in a given area, in order to achieve maximum coverage of the area while also minimizing the number of patrols. In this paper, we propose a multi-agent reinforcement learning (MARL) model, based on a decentralized partially observable Markov decision process, to plan unpredictable patrol routes within an urban environment represented as an undirected graph. The model attempts to maximize a target function that characterizes the environment within a given time frame. Our model has been tested to optimize police patrol routes in three medium-sized districts of the city of Málaga. The aim was to maximize surveillance coverage of the most crime-prone areas, based on actual crime data in the city. To address this problem, several MARL algorithms have been studied, and among these, the Value Decomposition Proximal Policy Optimization (VDPPO) algorithm exhibited the best performance. We also introduce a novel metric, the coverage index, for evaluating the coverage performance of the routes generated by our model. This metric is inspired by the predictive accuracy index (PAI), which is commonly used in criminology to detect hotspots. Using this metric, we have evaluated the model under various scenarios in which the number of agents (or patrols), their starting positions, and the level of information they can observe in the environment have been modified.
Results show that the coordinated routes generated by our model cover more than 90% of the 3% of graph nodes with the highest crime incidence, and 65% of the top 20% of these nodes; the 3% and 20% thresholds represent the coverage standards for police resource allocation. The source code of our implementation is available in this public repository (https://github.com/iacomlab/marl-patrol-routing). The data cannot be provided for confidentiality reasons, as they contain sensitive information.
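To illustrate the kind of metric the abstract describes, the following is a minimal sketch of a PAI-inspired coverage index: the fraction of the top-p% highest-crime nodes that are visited by at least one patrol within the time frame. The function name, signature, and toy data are illustrative assumptions, not the paper's actual implementation.

```python
def coverage_index(crime_counts, visited_nodes, top_fraction):
    """Hypothetical PAI-inspired coverage index.

    crime_counts : dict mapping node id -> crime incidence at that node
    visited_nodes: set of node ids visited by at least one patrol
    top_fraction : fraction of nodes to treat as hotspots (e.g. 0.03 or 0.20)

    Returns the fraction of the top `top_fraction` highest-crime nodes
    (the hotspots) that were covered by the patrol routes.
    """
    n_top = max(1, round(len(crime_counts) * top_fraction))
    # Rank nodes by crime incidence, highest first
    ranked = sorted(crime_counts, key=crime_counts.get, reverse=True)
    hotspots = set(ranked[:n_top])
    return len(hotspots & set(visited_nodes)) / n_top


# Toy example: 10 nodes, node i has crime count 10 - i
crime = {i: 10 - i for i in range(10)}
# Patrols visited nodes 0, 2, and 5; top 30% hotspots are {0, 1, 2}
print(coverage_index(crime, {0, 2, 5}, 0.3))  # -> 0.666...
```

Under this reading, the reported results would correspond to `coverage_index(..., 0.03) > 0.9` and `coverage_index(..., 0.20) >= 0.65` on the Málaga districts.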

Bibliographic citation

Juan Palma-Borda, Eduardo Guzmán, María-Victoria Belmonte, Cooperative patrol routing: Optimizing urban crime surveillance through multi-agent reinforcement learning, Engineering Applications of Artificial Intelligence, Volume 166, Part B, 2026, 113706, ISSN 0952-1976, https://doi.org/10.1016/j.engappai.2025.113706.

Creative Commons license

Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International