Enhancing resolution is a fundamental challenge in many image processing tasks, particularly along specific axes where resolution tends to be lower. This limitation can hinder downstream tasks such as medical image analysis. Traditional approaches rely on interpolation techniques, which may lose information or introduce artifacts. Recently, deep learning-based methods, especially those operating in latent spaces, have shown promise in addressing this issue. Because typical super-resolution methods are designed for 2D images, they can readily increase resolution along two of the axes of a volumetric MRI, but not along the third. Volumetric (3D) deep learning models for super-resolution have been proposed, but they have very high computational requirements, even when the region of interest to be super-resolved does not span the whole volume. In this work, we propose a novel approach that uses a latent diffusion model to increase resolution along an arbitrary axis. Our method transforms the input images into a latent space, where a U-Net captures high-level features. Crucially, just before decoding, we apply a linear interpolation in the latent space to enhance resolution along the specified axis. The interpolated latent representation is then decoded, yielding images with increased resolution along all axes and, therefore, an increase in resolution of the entire volume, using a 2D deep learning model rather than a fully-fledged 3D model. The proposal has been extensively evaluated on a wide range of brain lesion and brain tumor images in the T1, T2, and FLAIR modalities. Experimental comparison with several state-of-the-art methods has consistently shown the advantages of our approach.
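
To illustrate the latent-interpolation step described above, the following is a minimal PyTorch sketch, not the authors' implementation: the `encoder`, `unet`, and `decoder` objects are placeholder names for the trained components mentioned in the abstract, and the 2x upsampling factor along the slice axis is an illustrative assumption.

```python
# Sketch of latent-space interpolation along the low-resolution (through-plane) axis.
# `encoder`, `unet`, `decoder`, and the 2x factor are assumptions for illustration.
import torch
import torch.nn.functional as F


def super_resolve_through_plane(slices: torch.Tensor, encoder, unet, decoder,
                                scale: int = 2) -> torch.Tensor:
    """slices: (N, C, H, W) stack of 2D slices taken along the low-resolution axis."""
    with torch.no_grad():
        z = unet(encoder(slices))                  # (N, C_lat, h, w) latent features
        n, c, h, w = z.shape
        # Treat the slice (stack) axis as the depth of a latent volume and
        # linearly interpolate along it only; h and w are left unchanged.
        vol = z.permute(1, 0, 2, 3).unsqueeze(0)   # (1, C_lat, N, h, w)
        vol = F.interpolate(vol, size=(scale * n, h, w),
                            mode="trilinear", align_corners=True)
        z_up = vol.squeeze(0).permute(1, 0, 2, 3)  # (scale*N, C_lat, h, w)
        return decoder(z_up)                       # denser slice stack after decoding
```

Because only the depth dimension changes size, the trilinear call reduces to a linear interpolation between the latents of consecutive slices, matching the idea of increasing through-plane resolution with a purely 2D encoder/decoder pipeline.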