Abstract

Adaptive software systems are designed to cope with unpredictable and evolving usage behaviors and environmental conditions. These systems need reasoning mechanisms to drive their evolution, which are usually based on models capturing relevant aspects of the running software. Continuously updating these models in evolving environments requires efficient learning procedures that incur minimal overhead and are robust to changes. Most of the available approaches achieve one of these goals at the price of the other.

In this paper we propose a lightweight adaptive filtering technique to accurately learn the time-varying transition probabilities of discrete-time Markov models; it provides robustness to noise and fast adaptation to changes with very low overhead. We provide a formal assessment of the learning approach based on control theory, as well as an experimental comparison with state-of-the-art alternatives.
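
As a rough, self-contained sketch of the general idea (a fixed-gain exponential-forgetting filter, not the paper's actual adaptive filter), the following Python snippet tracks the outgoing transition probabilities of a single Markov-model state from a stream of observed transitions. The gain ALPHA, the function name, and the example data are illustrative assumptions.

    # Illustrative sketch only: a fixed-gain exponential-forgetting estimator of
    # the outgoing transition probabilities of one DTMC state. The paper's
    # technique adapts the filter; here ALPHA is a fixed, assumed gain.
    ALPHA = 0.1  # larger values adapt faster, smaller values reject noise better

    def update_estimate(probs, observed_successor):
        """Update the estimated transition distribution after observing one
        transition to the successor state with index `observed_successor`."""
        for s in range(len(probs)):
            target = 1.0 if s == observed_successor else 0.0
            probs[s] += ALPHA * (target - probs[s])  # first-order filter step
        return probs  # still sums to 1 if the initial estimate did

    if __name__ == "__main__":
        estimate = [1 / 3, 1 / 3, 1 / 3]     # uniform initial guess, 3 successors
        observations = [0] * 30 + [2] * 30   # usage drifts from state 0 to state 2
        for obs in observations:
            estimate = update_estimate(estimate, obs)
        print(estimate)  # roughly [0.04, 0.00, 0.96]: the drift has been tracked

The fixed gain illustrates the trade-off the paper addresses: a large gain adapts quickly but is noisy, a small gain is robust but slow, which is why an adaptive gain is needed.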

Authors: Antonio Filieri, Lars Grunske, and Alberto Leva

Paper (PDF)

Supplementary material

Python scripts for the micro-benchmark evaluation; a sample invocation is shown after the list.

  • run compareMethodsStatic.py to compare the methods on user-specified benchmark pattern instances
  • run compareMethodsRandomDistributions.py to compare the methods on randomly generated benchmark pattern instances
  • run compareMethodsRandomDistributionsMixed.py to compare the methods on a mix of randomly generated distributions
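
Assuming a standard Python interpreter and the scripts' (unlisted) dependencies are available, each comparison can be launched directly from the supplementary-material directory, e.g.:

    python compareMethodsStatic.py
    python compareMethodsRandomDistributions.py
    python compareMethodsRandomDistributionsMixed.py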