Software engineering today increasingly faces the challenge of creating systems that combine software with physical processes -- cyber-physical systems (CPS) -- from robotic systems, to autonomous vehicles, to ever more sophisticated medical devices. In such systems, physical and software components are deeply intertwined, each operating on different spatial and temporal scales, exhibiting multiple and distinct behavioral modalities, and interacting with one another in myriad ways that change with context.
CPS often need to self-adapt their structure and behavior at run time to respond to changes in their operating environment.
Existing approaches to engineering self-adaptation have at their core a set of models used to support reasoning about when and how best to adapt the system at run time. In pure software systems, combining information from such models to feed run-time analysis and planning for self-adaptation is relatively straightforward, because all of the models describe software.
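To make this concrete, the sketch below shows what such a model-driven adaptation loop might look like in its simplest form. It is purely illustrative and not the framework discussed in the talk: the Reading and Model types, the 100 ms latency goal, and the two adaptation tactics are all invented for the example.

```python
# Minimal sketch of a model-driven adaptation loop (monitor/analyze/plan).
# All types, thresholds, and tactics here are hypothetical.
from dataclasses import dataclass

@dataclass
class Reading:
    latency_ms: float   # observed response time
    load: float         # observed request load, in [0, 1]

@dataclass
class Model:
    """Run-time model of the managed system, refreshed from readings."""
    latency_ms: float = 0.0
    load: float = 0.0

    def update(self, r: Reading) -> None:
        self.latency_ms, self.load = r.latency_ms, r.load

def analyze(m: Model) -> bool:
    """Flag a violation of a (hypothetical) 100 ms latency goal."""
    return m.latency_ms > 100.0

def plan(m: Model) -> str:
    """Choose an adaptation tactic by reasoning over the model."""
    return "add_server" if m.load > 0.7 else "reduce_fidelity"

def adaptation_loop(readings):
    model = Model()
    for r in readings:          # Monitor: refresh the run-time model
        model.update(r)
        if analyze(model):      # Analyze: check goals against the model
            yield plan(model)   # Plan: pick a tactic (execution elided)

# Only the second reading violates the goal and triggers adaptation.
print(list(adaptation_loop([Reading(80, 0.5), Reading(140, 0.9)])))
```

Because every model in the loop describes software, a single representation suffices and the analysis and planning steps can query it directly.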
However, the models in a CPS are far more heterogeneous in representation, in semantics, and in the facet of the domain that they capture (e.g., energy consumption, software architecture, physical space, safety). This heterogeneity poses a fundamental challenge to bringing together the information in these models to support self-adaptation in CPS. The challenge is further complicated by inherent uncertainty, arising both from the imprecision of the models and from the variability and unpredictability of the environment.
This talk provides an overview of a model-based synthesis and quantitative verification approach to decision-making for self-adaptation that has been applied to the domain of mobile service robots.
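As a rough illustration of the quantitative flavor of this kind of decision-making, the sketch below evaluates two candidate navigation plans for a robot against a simple probabilistic model and selects the one with the highest expected utility. The route names, per-leg success probabilities, energy costs, and utility weights are all invented; approaches in this space typically rely on a probabilistic model checker such as PRISM over far richer models.

```python
# Illustrative sketch of quantitative, model-based plan selection for a
# mobile robot. The map, probabilities, and weights are hypothetical.

# Each plan is a route given as (success_probability, energy_cost) per leg.
plans = {
    "short_risky_corridor": [(0.80, 5.0), (0.90, 4.0)],
    "long_safe_hallway":    [(0.99, 9.0), (0.99, 8.0)],
}

def evaluate(plan, goal_reward=100.0, energy_weight=1.0):
    """Expected utility: reward weighted by the probability of completing
    every leg, minus the expected energy spent along the way."""
    p_reach, expected_energy = 1.0, 0.0
    for p, cost in plan:
        expected_energy += p_reach * cost  # energy spent if this leg is attempted
        p_reach *= p                       # probability of reaching the next leg
    return p_reach * goal_reward - energy_weight * expected_energy

for name in plans:
    print(f"{name}: expected utility = {evaluate(plans[name]):.1f}")
print("chosen plan:", max(plans, key=lambda name: evaluate(plans[name])))
```

Here the longer but more reliable route wins: its higher probability of reaching the goal outweighs its extra energy cost under the chosen (and, again, invented) utility weights.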