Cooperative patrol routing: Optimizing urban crime surveillance through multi-agent reinforcement learning
| dc.centro | E.T.S.I. Informática | |
| dc.contributor.author | Palma-Borda, Juan | |
| dc.contributor.author | Guzmán-de-los-Riscos, Eduardo Francisco | |
| dc.contributor.author | Belmonte-Martínez, María Victoria | |
| dc.date.accessioned | 2026-02-09T16:49:50Z | |
| dc.date.issued | 2026 | |
| dc.departamento | Lenguajes y Ciencias de la Computación | |
| dc.description.abstract | Patrolling can be defined as the act of visiting locations of interest at regular intervals for surveillance, control, protection, or monitoring purposes. Designing effective patrol strategies is a difficult and complex problem, especially in medium-sized and large areas. The objective is to plan, in a coordinated manner, optimal routes for a set of patrols in a given area, so as to maximize coverage of the area while minimizing the number of patrols. In this paper, we propose a multi-agent reinforcement learning (MARL) model, based on a decentralized partially observable Markov decision process, to plan unpredictable patrol routes within an urban environment represented as an undirected graph. The model attempts to maximize a target function that characterizes the environment within a given time frame. Our model has been tested on the optimization of police patrol routes in three medium-sized districts of the city of Málaga, with the aim of maximizing surveillance coverage of the most crime-prone areas, based on actual crime data from the city. To address this problem, several MARL algorithms were studied, among which the Value Decomposition Proximal Policy Optimization (VDPPO) algorithm exhibited the best performance. We also introduce a novel metric, the coverage index, to evaluate the coverage performance of the routes generated by our model. This metric is inspired by the predictive accuracy index (PAI), which is commonly used in criminology to detect hotspots. Using this metric, we evaluated the model under various scenarios in which the number of agents (patrols), their starting positions, and the level of information they can observe in the environment were varied.
Results show that the coordinated routes generated by our model cover more than 90% of the 3% of graph nodes with the highest crime incidence, and 65% of the top 20% of nodes; the 3% and 20% thresholds correspond to standard coverage levels for police resource allocation. The source code of our implementation is available in a public repository (https://github.com/iacomlab/marl-patrol-routing). The data cannot be shared for confidentiality reasons, as they contain sensitive information. | |
| dc.description.sponsorship | Funding for open access charge: Universidad de Málaga / CBUA | |
| dc.identifier.citation | Juan Palma-Borda, Eduardo Guzmán, María-Victoria Belmonte, Cooperative patrol routing: Optimizing urban crime surveillance through multi-agent reinforcement learning, Engineering Applications of Artificial Intelligence, Volume 166, Part B, 2026, 113706, ISSN 0952-1976, https://doi.org/10.1016/j.engappai.2025.113706. | |
| dc.identifier.doi | https://doi.org/10.1016/j.engappai.2025.113706 | |
| dc.identifier.uri | https://hdl.handle.net/10630/45301 | |
| dc.language.iso | eng | |
| dc.publisher | Elsevier | |
| dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | en |
| dc.rights.accessRights | open access | |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | |
| dc.subject | Machine learning (Artificial intelligence) | |
| dc.subject | Crime | |
| dc.subject.other | Multi-agent reinforcement learning | |
| dc.subject.other | Cooperative routing optimization | |
| dc.subject.other | Police patrolling | |
| dc.subject.other | Crime hotspots | |
| dc.title | Cooperative patrol routing: Optimizing urban crime surveillance through multi-agent reinforcement learning | |
| dc.type | journal article | |
| dc.type.hasVersion | AM | |
| dspace.entity.type | Publication | |
| relation.isAuthorOfPublication | 4e6e1c0f-4b04-4899-981f-e581587b0176 | |
| relation.isAuthorOfPublication | f31a2936-3203-4292-9432-f3b488877740 | |
| relation.isAuthorOfPublication.latestForDiscovery | 4e6e1c0f-4b04-4899-981f-e581587b0176 |
Files
Original bundle
- Name: 1-s2.0-S0952197625037388-main.pdf
- Size: 3.91 MB
- Format: Adobe Portable Document Format
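The abstract above introduces a coverage index, inspired by criminology's predictive accuracy index (PAI), to measure how well the generated routes cover the top-k% highest-crime nodes. The paper's exact formula is not reproduced in this record, so the following Python sketch is only a plausible illustration of such a metric: it computes the fraction of the top-k% crime nodes that a set of patrol routes visits. All names here (`coverage`, `crime_by_node`, `visited`) are hypothetical, not taken from the authors' implementation.

```python
# Hypothetical sketch of a PAI-inspired coverage metric: the fraction
# of the top-k% highest-crime graph nodes visited by the patrol routes.
# This is NOT the authors' exact coverage index, only an illustration.

def coverage(crime_by_node, visited, top_fraction):
    """crime_by_node: dict mapping node -> crime count.
    visited: set of nodes visited by the generated routes.
    top_fraction: e.g. 0.03 or 0.20 for the top 3% / 20% of nodes."""
    # Rank nodes by crime incidence, highest first
    ranked = sorted(crime_by_node, key=crime_by_node.get, reverse=True)
    k = max(1, round(len(ranked) * top_fraction))
    hot = set(ranked[:k])  # the top-k% crime hotspot nodes
    return len(hot & visited) / len(hot)

# Toy example: 10 nodes with decreasing crime counts; routes visit 0-4
scores = {n: 10 - n for n in range(10)}
print(coverage(scores, {0, 1, 2, 3, 4}, 0.20))  # top 20% = nodes {0, 1} -> 1.0
```

Under this reading, the reported results would mean the metric evaluates to above 0.90 at `top_fraction = 0.03` and about 0.65 at `top_fraction = 0.20` on the Málaga districts.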

