Associated Events at the ISoLA Week:
- International School on Tool-based Rigorous Engineering of Software Systems (STRESS 2014)
- Challenge on Rigorous Examination of Reactive Systems
- Doctoral Symposium with Poster Session and Postconference Proceedings in Springer's CCIS
- Tutorial: Automata Learning in Practice
- Industrial Day
- Statistical Model Checking: Past, Present and Future
- Formal Methods and Analysis in Software Product Line Engineering
- Risk-Based Testing
- Scientific Workflows
- Medical Cyber Physical Systems
- Evaluation and Reproducibility of Program Analysis
- Automata Learning
- Rigorous Engineering of Autonomic Ensembles
- Engineering Virtualized Services
- Security and Dependability for Resource Constrained Embedded Systems
- Semantic heterogeneity in the formal development of complex systems
- Evolving Critical Systems
- Model-based Code-Generators and Compilers
- Processes and data integration in the networked healthcare
Statistical Model Checking: Past, Present and Future
Organizers: Kim Larsen, Axel Legay
Formal Methods and Analysis in Software Product Line Engineering
Organizers: Ina Schäfer, Maurice ter Beek
Risk-Based Testing
Organizers: Michael Felderer, Marc-Florian Wendland, Ina Schieferdecker
Scientific Workflows
Organizers: Joost Kok, Anna-Lena Lamprecht, Ken Turner, Katy Wolstencroft
Medical Cyber Physical Systems
Organizers: Ezio Bartocci, Sicun Gao, Scott Smolka
Evaluation and Reproducibility of Program Analysis
Organizers: Markus Schordan (Lawrence Livermore National Laboratory, CA, USA), Welf Löwe (Linnaeus University, Sweden), Dirk Beyer (University of Passau, Germany)
Automata Learning
Organizers: Falk Howar (CMU), Bernhard Steffen (TU Dortmund)
Rigorous Engineering of Autonomic Ensembles
Organizers: Rocco De Nicola, Matthias Hölzl, Martin Wirsing
Engineering Virtualized Services
Organizers: Reiner Hähnle (TU Darmstadt), Einar Broch Johnsen (U Oslo)
Security and Dependability for Resource Constrained Embedded Systems
Organizers: Brahim Hamid, Carsten Rudolph
Semantic heterogeneity in the formal development of complex systems
Organizers: Idir Ait Sadoune (Supélec, Gif-sur-Yvette, France), J Paul Gibson (Telecom SudParis, Evry, France)
Evolving Critical Systems
Organizers: M. Hinchey (Lero and Univ. Limerick, Ireland), T. Margaria (Univ. Potsdam)
Model-based Code-Generators and Compilers
Organizers: Jens Knoop (TU Wien, Austria), Wolf Zimmermann (University of Halle-Wittenberg, Germany), Uwe Assmann (TU Dresden, Germany)
Processes and data integration in the networked healthcare
Organizers: J. Mündler (3M-Deutschland), T. Margaria (Univ. Potsdam, Germany), C. Rasche (Univ. Potsdam, Germany)
Statistical Model Checking (SMC) has recently been proposed as an alternative that avoids an exhaustive exploration of the state space of a system under verification. The core idea of the approach is to run a number of simulations of the system and then use results from statistics to decide whether the system satisfies the property with respect to a given probability. The answer is correct up to some confidence level. SMC is generally much faster than exhaustive formal verification techniques, and it is closer to current industry practice. Moreover, the approach can be used to verify properties that cannot be expressed in the classical temporal logics used in formal verification, as well as to approximate undecidable model checking problems. Impressive results have been obtained in areas such as energy-centric systems, automotive, and systems biology.
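The simulate-then-decide idea behind SMC can be illustrated with a minimal sketch (the toy model, the property, and all function names below are illustrative assumptions, not part of any SMC tool): the number of runs n is chosen via the Chernoff-Hoeffding bound so that the estimated probability is within ±ε of the true probability with confidence at least 1 − δ.

```python
import math
import random

def smc_estimate(simulate_run, check_property, epsilon=0.05, delta=0.01):
    """Estimate the probability that a random run satisfies a property.

    The Chernoff-Hoeffding bound gives the number of simulations n
    needed so that the estimate is within +/- epsilon of the true
    probability with confidence at least 1 - delta.
    """
    n = math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
    successes = sum(1 for _ in range(n) if check_property(simulate_run()))
    return successes / n, n

# Toy model: a run is 20 coin flips; the property holds if at least
# half of them come up heads.
def simulate_run():
    return [random.random() < 0.5 for _ in range(20)]

def check_property(run):
    return sum(run) >= 10

estimate, n = smc_estimate(simulate_run, check_property)
```

With ε = 0.05 and δ = 0.01 this yields n = 1060 runs, independent of the size of the model's state space, which is what makes the approach attractive for systems too large to explore exhaustively.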
This Special Track aims at bringing together researchers and practitioners working on SMC and on its integration in toolchains for the rigorous validation of software systems. Another objective is to discuss the potential application of SMC in other domains.
Kim Larsen, Axel Legay
Software product line engineering (SPLE) aims to develop a family of systems via systematic, large-scale reuse in order to reduce time-to-market and costs and to increase product quality. In order to achieve these goals, formal methods and analysis offer promising techniques, which are best applied throughout the product-line lifecycle in order to maximize their overall efficiency and effectiveness. While some analysis approaches (e.g., for feature modelling and variant management) and formal methods (e.g., BDDs, CSPs, SAT solvers, model checkers and formal semantics of variability models) have already been applied to SPLE, a considerable potential still appears to be unexploited. In fact, despite the work just mentioned, the respective communities (SPLE, formal methods and analysis) are only loosely connected.
This track brings together researchers and practitioners interested in raising the efficiency and effectiveness of SPLE by applying formal methods and innovative analysis techniques. Participants will be invited to review the state of the art and practice in their respective fields, identify further promising application areas, report practical requirements and constraints from real-world product lines, discuss drawbacks and complements of the various approaches, or present recent new ideas and results. The two long-term objectives of the FMSPLE workshop series are:
to raise awareness and to find a common understanding of practical challenges and existing solution approaches in the different communities working on formal methods and analyses techniques for SPLE, and
to create a broader community interested in formal methods and analysis techniques for SPLs in order to keep SPLE research and tools up-to-date with the latest technologies and with practical challenges.
Ina Schäfer, Maurice ter Beek
In many development projects, testing has to be done under severe pressure due to limited resources and a challenging time schedule. As a result, the resources for test design and test execution need to be optimized. Risk-based testing uses identified technical or functional risks of the system to be built in order to prioritize the activities along the test process, i.e. test planning, design, execution and evaluation. The precise understanding of risks, as well as the focused treatment of risks, has become one of the cornerstones for critical decisions within complex social and technical environments. This special track serves as a platform for researchers and practitioners to present approaches, results, experience and advances in risk-based testing. Its topics of interest include (but are not limited to):
- Risk assessment for testing purposes
- Risk-based test process optimization
- Risk-based test planning, design, prioritization or selection
- Model-based approaches to risk-based testing
- Support of standard-compliant quality assurance by risk-based testing (e.g. IEC 61508, ISO 26262, Common Criteria)
- Application of risk-based testing in an industrial environment
- Risk-based testing of safety-critical or security-critical systems
- Empirical studies related to risk-based testing approaches
- Tools and languages for risk-based testing.
Michael Felderer, Marc-Florian Wendland, Ina Schieferdecker
In recent years, numerous software systems have been developed specifically for supporting the management of scientific processes and workflows. Research in this comparatively new field is currently evolving in interesting new directions. This special track aims at bringing together researchers and practitioners from various scientific application areas who are interested in discussing the various topics connected to scientific processes and workflows. Topics of interest for this track include (but are not limited to):
- modeling paradigms and techniques
- tools and frameworks for workflow composition
- workflow analysis, validation and verification
- workflow execution
- workflow provenance, sharing and experimental reproducibility
- workflow discovery methods
- ontologies for description of workflows and their data
- semantically aware workflow development
- automatic workflow composition
- social network aspects of workflow development
- case studies and examples
- experience and best practice
Joost Kok, Anna-Lena Lamprecht, Ken Turner, Katy Wolstencroft
Rapid progress in modern medical technologies has led to a new generation of health-care devices and treatment strategies. Examples include electro-anatomical mapping and intervention, bio-compatible and implantable devices, minimally invasive embedded devices, and robotic prosthetics.
All of these systems share a key characteristic: the tight integration of digital computation, responsible for control and communication in discrete-time, with a physical system, obeying laws of physics and evolving in continuous-time. These systems are called Medical Cyber-Physical Systems (Medical CPS). They are highly complex: the sensors monitoring the physical processes, the computation and communicating devices, and the actuators controlling the physical processes are spatially distributed yet tightly interconnected.
Concomitantly, their malfunctioning can do great harm to human health. Ensuring the reliability of these systems is both very challenging and extremely important. Research in Formal Methods is leading to mathematically rigorous techniques for ensuring their correct design and implementation.
Ezio Bartocci, Sicun Gao, Scott Smolka
Today's popular languages have a large number of different language constructs and standardized library interfaces, and the number is increasing further with every new language standard. Most published analyses therefore focus on a subset of such languages or define a language with a few essential constructs of interest. More recently, program analysis competitions have aimed to comparatively evaluate implemented analyses on a given set of benchmarks. The comparison of the analyses either focuses solely on the analysis results themselves (verification of specified properties), or the impact on a client analysis or program optimization is evaluated.
This track is concerned with the methods of evaluation for comparing analyses and how analysis results can be represented such that they remain reproducible and reusable as intermediate results for other analyses.
We therefore focus on how analysis results can be specified and how to allow an exact recomputation of the analysis results irrespective of a chosen (internal) intermediate representation.
The topics of interest are
- Specification languages for program properties and program analysis results.
- Representation and specification of program analysis results in existing program analysis infrastructures, compilers, and tools along with meta-models and evolution of these representations.
- Reuse of verification results and combination of multiple verifiers using conditional model checking.
- Analysis benchmarking and experimental evaluation of analysis accuracy.
- Interoperability of analysis tools (accessibility of analysis results by external tools).
- Obstacles in combining tools that implement different approaches (e.g. model checking and data flow analysis tools).
Markus Schordan (Lawrence Livermore National Laboratory, CA, USA), Welf Löwe (Linnaeus University, Sweden), Dirk Beyer (University of Passau, Germany)
Specifications play an important role in modern-day software engineering. Formal specifications, for example, are the basis for automated verification and testing techniques. In spite of their potentially great positive impact, formal specifications are notoriously hard to come by in practice. One reason seems to be that writing precise formal specifications is not an easy task for most of us.
As a consequence, many software systems in use today lack adequate specifications or make use of unspecified or under-specified components. Moreover, in many practical contexts, revision cycle times are often extremely short, which makes the maintenance of formal specifications unrealistic. At the same time, component-based design and short development cycles necessitate extensive testing and verification effort.
Learning-based approaches have been proposed to overcome this situation by automatically 'mining' formal specifications. Promising results have been obtained here using active automata learning technology in verification and testing, and there seems to be a high potential to exploit also other learning techniques.
This track aims at bringing together practitioners and researchers to explore the practical impact and challenges associated with the automated generation and maintenance of formal specifications using learning-based methods.
Falk Howar (CMU), Bernhard Steffen (TU Dortmund)
Today's software systems are becoming increasingly distributed and decentralized and have to adapt autonomously to dynamically changing, open-ended environments. Often the nodes partake in complex interactions with other nodes or with humans. We call such systems ensembles. Current software engineering methods are not adequate for dealing with ensembles: instead of static software that operates without knowledge about its environment and hence relies on manual configuration and optimization, we have to build systems with self-aware, intelligent components that mimic natural features like adaptation, self-organization, and both autonomous and collective behavior. The aim of this special track is to bring together researchers and practitioners to present and discuss rigorous methods to engineer autonomic ensembles. Topics of interest include (but are not limited to):
- Modeling and programming techniques for autonomic ensembles
- Methods and mechanisms for adaptation and dynamic self-expression
- Validation and verification of autonomic ensembles
- Security and performance of autonomic ensembles
Rocco De Nicola, Matthias Hölzl, Martin Wirsing
Virtualized computing services, such as cloud services, create new opportunities, but also pose new challenges for users and providers alike. Over-provisioning of resources and compensatory penalties for breaching an SLA are among the all-too-real downsides of virtualization. At the same time, formal models of software services and advanced static analysis tools promise vast improvements in the productivity and cost effectiveness of cloud computing. This track explores the state of the art in modeling services deployed on the cloud and in the formalization and verification of SLAs.
Reiner Hähnle (TU Darmstadt), Einar Broch Johnsen (U Oslo)
The main focus of S&D4RCES is on the topic of making security and dependability expert knowledge available to Resource Constrained Embedded Systems (RCES) engineering processes. Special emphasis will be devoted to promote discussion and interaction between researchers and practitioners focused on the particularly challenging task to efficiently integrate security and dependability solutions within the restricted available design space for RCES. Furthermore, one important focus is on the potential benefits of the combination of model-driven engineering with pattern-based representation of security and dependability solutions. The track aims to bring together researchers from various fields involved in the development and deployment of RCES with a particular focus on the transfer of results from fundamental research to the industrial development of RCES. We believe that the synergy between researchers working in different aspects of this area will produce important benefits.
The objective of this track is to foster an exchange of ideas among practitioners, researchers and industry involved in the deployment of secure and dependable resource-constrained embedded systems, including the exchange of concepts, prototypes and research ideas.
Brahim Hamid, Carsten Rudolph
Nowadays, the formal development of hardware and/or software systems implies the synthesis and analysis of several models on which properties are expressed and then formally verified. Parts of these systems are defined within contexts, imported and/or instantiated. Such contexts usually represent the implicit semantics of the systems. Several relevant properties are defined on this implicit semantics, according to the formal technique being used. When these properties are considered in their context, with the associated explicit semantics, they may not be provable. Therefore, the development activities need to be revisited in order to facilitate handling of the explicit semantics.
This development process leads to semantic heterogeneity, which may appear in two different forms. The first is related to the large variety of formal development techniques and to the semantics and proof systems fundamental to these techniques. As a consequence, several formal descriptions, with different semantics, may be associated with a given system. The second type of heterogeneity results from the modelling domain chosen for formalising the described system. Usually, this domain is neither explicitly described nor formalised, and most of the knowledge related to it is not adequately encoded by the formal system description. Over the last decade, ontologies have been used as explicit formalisations of such modelling domains. Expressing domain-specific ontological concepts in formal models will contribute to reducing such heterogeneity. It also allows developers to address specific properties related to interoperable, adaptive, reconfigurable and plastic systems.
This thematic track is devoted to compiling the state of the art in semantic heterogeneity in the formal development of complex systems. It will highlight the recent advances in this research field. Particularly welcome are reports, research and position papers, issued either from the academic or industrial worlds, presenting
- original contributions,
- position papers, and
- experimental/industrial case studies.
Idir Ait Sadoune (Supélec, Gif-sur-Yvette, France), J Paul Gibson (Telecom SudParis, Evry, France)
The need is becoming evident for a software engineering research community that focuses on the development and maintenance of Evolving Critical Systems (ECS). The software and systems engineering community must concentrate its efforts on the techniques, methodologies and tools needed to design, implement, and maintain critical software systems that evolve successfully (without risk of failure or loss of quality). Critical systems are systems where failure or malfunction will lead to significant negative consequences. These systems may have strict requirements for security and safety, to protect the user or others. Alternatively, these systems may be critical to the organization's mission, product base, profitability or competitive advantage.
In this track we welcome contributions to all ten areas of intervention in the lifecycle of software artefacts and systems: Requirements, Design, Construction, Testing, Configuration Management (SCM), Engineering Management (SEM), Engineering Process, Engineering Tools and Methods, Quality, and Maintenance.
M. Hinchey (Lero and Univ. Limerick, Ireland), T. Margaria (Univ. Potsdam)
In recent years, model-based software development has attracted more and more attention. Models are often expressed in a formal language, frequently a domain-specific language (DSL), and implementations are derived by model-based code generators. There are toolboxes for defining domain-specific languages and generating compilers for them, such as the Eclipse Modeling Framework (EMF). DSLs are defined by a meta-model, and their compilation is specified by model transformations. From these specifications, code generators can be generated. In addition, the toolboxes often generate editors, debuggers and embeddings in programming environments such as Eclipse. Meta-models are frequently denoted by a graphical notation analogous to UML class diagrams or by context-free grammars; consistency constraints are then specified by OCL or similar languages. Often, the generated code is improved manually. Therefore, some research focuses on the consistency between models and their implementations.
Compilers perform nearly the same task as model-based code generators: a program in a high-level programming language is translated into an equivalent program in machine language or in another high-level programming language (cross-compilation). In this sense, a code generator for a DSL that generates, e.g., C code is a cross-compiler. However, the technology used for implementing compilers is rather different. For textual languages, the first step is in both cases a parser that builds an abstract syntax tree. Then consistency constraints (e.g. typing rules) are analyzed, and the abstract syntax tree is transformed into an intermediate representation. Finally, the target code is generated from this intermediate representation. Optimizations may be applied at each of these steps. For compilers, there are also toolboxes that allow compilers to be generated from specifications: context-free grammars for the concrete and abstract syntax, a mapping from concrete to abstract syntax, attribute grammars (for example) for specifying consistency constraints, tree transformations for specifying the generation of intermediate code, and bottom-up rewrite systems for specifying code selection. This technology is well established, and there is no need to improve the generated code manually. However, there is little work on generating syntax-directed editors, debuggers or embeddings in programming environments.
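The staging described above (parse to an abstract syntax tree, check consistency constraints, generate target code) can be sketched for a toy expression DSL. This is a hypothetical illustration, not tied to any particular toolbox such as EMF; it reuses Python's own parser in place of a generated one, and the DSL (integer literals, names, + and *) is invented for the example.

```python
import ast  # reuse Python's parser as a stand-in for a generated parser

# Node kinds admitted by the toy DSL: expressions over integer
# literals and names, combined with + and *.
ALLOWED = (ast.Expression, ast.BinOp, ast.Add, ast.Mult,
           ast.Name, ast.Constant, ast.Load)

def check(tree):
    """Stage 2: consistency constraints (a stand-in for typing rules)."""
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED):
            raise TypeError(f"construct not in the DSL: {type(node).__name__}")
        if isinstance(node, ast.Constant) and not isinstance(node.value, int):
            raise TypeError("only integer literals are allowed")

def gen(node):
    """Stage 3: generate an equivalent C expression from the tree."""
    if isinstance(node, ast.Expression):
        return gen(node.body)
    if isinstance(node, ast.BinOp):
        op = "+" if isinstance(node.op, ast.Add) else "*"
        return f"({gen(node.left)} {op} {gen(node.right)})"
    if isinstance(node, ast.Name):
        return node.id
    return str(node.value)  # integer constant

def compile_to_c(source, fname="f", params=("x",)):
    tree = ast.parse(source, mode="eval")  # Stage 1: parse to an AST
    check(tree)
    args = ", ".join(f"int {p}" for p in params)
    return f"int {fname}({args}) {{ return {gen(tree)}; }}"

c_code = compile_to_c("x * x + 2 * x + 1")
# c_code == "int f(int x) { return (((x * x) + (2 * x)) + 1); }"
```

A real model-based code generator would of course work on a meta-model instance rather than on text, but the pipeline is the same, and the consistency-check stage is exactly where OCL-style constraints would be enforced.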
The goal of the track is to bring together people from both disciplines and to foster discussions between them. In particular, submissions discussing both aspects, compilers and model-based code generators, are welcome. Topics might include, for example: type systems for DSLs and their implementation, editor generation using classical compiler technology, formal semantics of DSLs, correctness of model transformations and program transformations, and case studies comparing model-based code generation and compilation.
Jens Knoop (TU Wien, Austria), Wolf Zimmermann (University of Halle-Wittenberg, Germany), Uwe Assmann (TU Dresden, Germany)
In the past decade, state-of-the-art Information and Communication Technology (ICT) has gained a strong standing in all aspects of healthcare provision and management. As already discussed in the ISoLA-Med workshop in Potsdam in June 2009 and in this track at ISoLA 2012, a set of innovative research topics related to the future of healthcare systems hinges on the notion of simplicity as a driving paradigm in ICT development, maintenance and use. We believe that the philosophy of consistently applied simplicity is strategically important to make advanced healthcare provision accessible and keep it affordable for patients and for society, yet the discipline of simplicity is still poorly understood and rarely systematically applied.
Current hot issues that will shape the competitiveness of European ICT in the next few decades, and which require investigation from the perspective of simplicity in IT at the networked-system level, revolve around the notion of simplicity and its elevation to a design paradigm, including:
- Balancing IT-aspirations with user demands: How to bridge the widening gap between software engineers and front-end clients.
- From sophisticated to smart technologies: User empowerment through simplicity, manageability, adaptability, robustness, and target group focus.
- Handing-over IT power to the co-value creating customers: Users as process designers, owners and change agents.
- Competing for the future: Sketching out viable IT roadmaps for multiple strategies.
J. Mündler (3M-Deutschland), T. Margaria (Univ. Potsdam, Germany), C. Rasche (Univ. Potsdam, Germany)