Mobile robots intended to perform high-level tasks must recognize the objects in their workspace. To improve recognition performance, recent works have exploited contextual information. Probabilistic Graphical Models (PGMs) and Semantic Knowledge (SK) are two well-known approaches for dealing with such information, although each exhibits drawbacks: the complexity of PGMs grows exponentially with the number of objects in the scene, while SK is unable to handle uncertainty. In this work we combine both approaches to address the object recognition problem. We propose exploiting SK to reduce the complexity of probabilistic inference, while relying on PGMs to endow SK with a mechanism for managing uncertainty. The suitability of our method is validated through a set of experiments in which a mobile robot equipped with a Kinect-like sensor captured 3D data from 25 real environments, achieving a promising success rate of ~94%.
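
The following is a minimal, hypothetical Python sketch (not the authors' implementation) of the general idea summarized above: semantic knowledge about which object classes are plausible in a given room type prunes the label space, after which a small pairwise contextual model is solved by joint inference over the reduced set. All names, scores, and compatibilities below are illustrative assumptions.

```python
import itertools

# Semantic knowledge: plausible object classes per room type (assumed values).
SK_ROOM_OBJECTS = {
    "kitchen": {"mug", "microwave", "bottle", "bowl"},
    "office": {"monitor", "keyboard", "mug", "book"},
}

# Per-object class scores from an appearance-based detector (assumed values).
detections = [
    {"mug": 0.5, "bowl": 0.3, "keyboard": 0.2},
    {"microwave": 0.6, "monitor": 0.4},
]

# Pairwise compatibilities encoding context between co-observed objects
# (illustrative only; in a PGM these would be learned potentials).
PAIRWISE = {
    frozenset({"mug", "microwave"}): 2.0,
    frozenset({"mug", "bowl"}): 1.5,
}


def joint_inference(room, detections):
    """Brute-force MAP labeling over a label space pruned by semantic knowledge."""
    allowed = SK_ROOM_OBJECTS[room]
    # Pruning with SK shrinks each candidate set, which is what keeps the
    # otherwise exponential joint enumeration tractable in this toy example.
    candidates = [
        {cls: score for cls, score in det.items() if cls in allowed}
        for det in detections
    ]
    best, best_score = None, float("-inf")
    for labels in itertools.product(*[c.keys() for c in candidates]):
        # Unary terms: detector scores for the chosen labels.
        score = sum(candidates[i][lbl] for i, lbl in enumerate(labels))
        # Pairwise terms: reward contextually compatible label pairs.
        for a, b in itertools.combinations(labels, 2):
            score += PAIRWISE.get(frozenset({a, b}), 0.0) * 0.1
        if score > best_score:
            best, best_score = labels, score
    return best, best_score


print(joint_inference("kitchen", detections))  # e.g., (('mug', 'microwave'), 1.3)
```

In this toy setting the SK filter removes implausible labels (e.g., "keyboard" in a kitchen) before the joint model is evaluated, mirroring the paper's premise that SK can cut inference complexity while the probabilistic layer handles the remaining uncertainty.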