This article proposes the development of a test environment for detecting traffic participants in urban settings using neural networks that process data from two vehicle sensors: an RGB camera and a 3D LiDAR. It presents the integration of the realistic simulator CARLA (Car Learning to Act), which allows the detailed recreation of complex urban scenarios, with ROS2 (Robot Operating System), a framework for developing robotic applications. Specifically, for RGB images, the performance of the CNN (Convolutional Neural Network) YOLOv8 and of DETR (Detection Transformer) is evaluated qualitatively. Similarly, for the detection of traffic participants in point clouds, PV-RCNN (Point-Voxel Region-based Convolutional Neural Network) and its evolution, Part-A2-Net, are analysed.