Metrics to Estimate Model Comprehension Quality: Insights from a Systematic Literature Review

Jordan Hermann, Bastian Tenbergen, Marian Daun

Abstract


Conceptual models are an effective and unparalleled means to communicate complicated information to a broad variety of stakeholders in a short period of time. However, in practice, conceptual models often vary in clarity, employed features, communicated content, and overall quality. This potentially impacts model comprehension to the point where models become factually useless. To counter this, guidelines for creating “good” conceptual models have been suggested. However, these guidelines are often abstract, hard to operationalize across different modeling languages, partly overlapping, or even contradictory. In addition, no comparative study of the proposed guidelines exists so far. This issue is exacerbated by the lack of established metrics to measure or estimate model comprehension for a given conceptual model. In this article, we present the results of a literature survey of 109 publications in the field and discuss metrics to measure model comprehension, their quantification, and their empirical substantiation. The results show that although several concrete, quantifiable metrics and guidelines have been proposed, concrete evaluative recommendations are largely missing. Moreover, some suggested guidelines are contradictory, and few metrics exist that allow common frameworks for model quality to be instantiated in a specific way.

Keywords:

Model-Based Development; Model-Based Engineering; Model-Based Software Engineering; Graphical Representations; Model Comprehension; Model Quality; Literature Survey

DOI: 10.7250/csimq.2022-31.01


Copyright (c) 2022 Jordan Hermann, Bastian Tenbergen, Marian Daun

This work is licensed under a Creative Commons Attribution 4.0 International License.