# Keynotes

## Meaningful Models — A research agenda on model-driven engineering

Abstract: An important question in software engineering is whether a program (or system) is correct with respect to its specification. Model-driven engineering (MDE) is an approach to software development that supports domain engineering and is generative and language-driven. We believe that this set of characteristics makes MDE a suitable approach for the rigorous development of correct software systems, as it allows us to focus on models rather than code. In this paper, we illustrate how programming language theory, through operational semantics, and logic in computer science, through Description Logics, may help us identify meta-properties and techniques for reasoning about MDE models.

### Speaker

Christiano Braga is associate professor of computer science at Universidade Federal Fluminense and leads the Language-oriented Software Engineering Research Group. He obtained his Ph.D. in Computer Science at Pontifícia Universidade Católica do Rio de Janeiro in 2001. He was an International Fellow at the Computer Science Laboratory of SRI International from 1998 to 2000, visiting scholar at the Formal Methods and Declarative Languages Laboratory of UIUC in 2003, and Ramón y Cajal researcher at the Departamento de Sistemas Informáticos y Computación of the Universidad Complutense de Madrid from 2006 to 2008. His research interests include semantics of programming languages and formal methods in software engineering. Some of his contributions are the Maude MSOS tool, for the specification and verification of programming language semantics, and the transformation contracts approach, for the formal development of model transformations.

## Deductive Generalization

Abstract: The empirical sciences are based on inductive inference, that is, the formation of theories by generalization from observations. In this process, scientists place a high value on the mathematical beauty or elegance of a theory. Apart from aesthetic concerns, mathematical simplicity has the virtue of constraining our speculations, preventing us from “over-fitting” the data, and thus increasing the chance that our theories will successfully account for new observations. This criterion is traditionally known as Occam’s razor (after the medieval philosopher William of Ockham, a frequent user though not originator of the principle).

In mathematical proof we observe a similar phenomenon. That is, conjectures and lemmas are often formed by plausible generalizations from particular cases. Imagine, for example, proving a conjecture about an object in N dimensions. We might first try to prove the special case of two or three dimensions, and then generalize the argument to the N-dimensional case. We would prefer a proof of the two-dimensional case that is simple, on the grounds that it will be less prone to depend on particular aspects of this case, and thus more likely to generalize.

The appearance of Occam’s razor in logical proof should surprise us, since we require no heuristic justification of conclusions logically deduced from axioms. Nonetheless, such criteria can be highly valuable in the search for a proof, since they allow us to form conjectures that are plausibly valid and potentially useful, and thus to navigate the intractably large space of potential proofs.

We will illustrate these concepts by applying them to proofs about programs. To form generalizations, we use a methodology of interpolation: finding a hypothesis intermediate between a premise and a desired conclusion. We will set out heuristic criteria for generalizations in terms of evidence provided for both validity of the generalization and its utility in constructing a proof. These criteria can be used to discover inductive invariants that prove given properties of programs by considering only fragments of the program’s behavior. We observe that evidence for generalizations comes at a computational cost, and that in practice it is important to strike a balance between cost and quality of generalizations.
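To make the interpolation idea concrete, here is a toy propositional sketch (our illustration, not the speaker's formulation): given a valid implication A ⊨ B, a Craig interpolant is a formula I with A ⊨ I and I ⊨ B that mentions only the vocabulary shared by A and B. The checks below enumerate truth assignments by brute force; the formulas A, B, and I are invented for the example.

```python
from itertools import product

def entails(premise, conclusion, variables):
    """Check premise |= conclusion by enumerating all truth assignments."""
    for vals in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, vals))
        if premise(env) and not conclusion(env):
            return False
    return True

# A = p and q  (vocabulary {p, q});  B = q or r  (vocabulary {q, r}).
A = lambda e: e["p"] and e["q"]
B = lambda e: e["q"] or e["r"]

# Candidate interpolant I = q: it uses only the shared variable q,
# sits between A and B, and so "explains" why A implies B.
I = lambda e: e["q"]

variables = ["p", "q", "r"]
assert entails(A, I, variables)  # A |= I holds
assert entails(I, B, variables)  # I |= B holds
```

In the talk's setting the interpolant plays the role of a candidate generalization: a hypothesis strong enough to follow from what has been established, weak enough to imply the desired conclusion, and stated only over the shared (hence relevant) vocabulary.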

Moreover, we observe a subtle but significant difference between the use of Occam’s razor in deductive as opposed to inductive settings. That is, by considering the simplicity of the proof of a proposition in a particular case, we can make distinctions that cannot be made based on the simplicity of hypotheses alone.

### Speaker

Kenneth McMillan is a researcher in formal verification at Microsoft Research. He began his career as an electrical engineer, designing chips and biomedical instruments. His undergraduate degree is in EE, from the University of Illinois, and he has an MSEE from Stanford. From this experience, he became interested in the process of designing complex systems, and in particular, how we determine that complex system designs are correct. Switching to computer science, he did a Ph.D. with Ed Clarke at CMU, graduating in 1992. His thesis was on a technique called Symbolic Model Checking that could be used to verify logical properties of finite-state systems by efficiently exploring very large state spaces. He developed a model checker called SMV that implemented these techniques, as well as other techniques for combating the so-called state explosion problem. He went on to do research at three commercial research labs: Bell Labs, Cadence Berkeley Labs, and (as of 2010) Microsoft Research. At Cadence, he further developed SMV, incorporating methods of compositionality and abstraction to break large verification problems down into small ones, in a tool called Cadence SMV. More recently, he has been mainly interested in the question of relevance. That is, if you want to prove a logical property of a complex object like a computer program or a hardware design, how do you decide what information about that system is relevant to that property? This led to a technique called Craig interpolation, which allows us to use logical proof as a basis for determining relevance and can serve as the foundation of formal verification tools for hardware and software.