Category Archives: HyVar

HyVar analysis tool for supporting SPL development

Implementing an SPL with the HyVar toolchain consists of the following steps:

  1. Defining a feature model (as a feature diagram with cross-tree constraints and context information variables);
  2. Coding a Yakindu statechart representing the base program;
  3. Identifying the deltas that modify the base statechart, together with their application order and the activation conditions used to generate the variants;
  4. Coding the deltas (this step may require going back to step 3).
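The constraints mentioned in step 1 can be sketched as a small validity check over a configuration. The feature names, the cross-tree constraint, and the context variable below are invented for illustration and are not part of the actual HyVar language:

```python
# Sketch of feature-model validity: a configuration is a set of selected
# features, checked against one cross-tree constraint and one
# context-dependent constraint. All names here are hypothetical.
def valid(config, context):
    # Cross-tree constraint: Navigation requires GPS.
    if "Navigation" in config and "GPS" not in config:
        return False
    # Context constraint: OfflineMaps is only allowed when bandwidth is low.
    if "OfflineMaps" in config and context.get("bandwidth") != "low":
        return False
    return True

print(valid({"Navigation", "GPS"}, {"bandwidth": "high"}))  # True
print(valid({"Navigation"}, {"bandwidth": "high"}))         # False
```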

One of the most challenging problems in delta development is keeping track of each element across the base statechart and the deltas that manipulate it to generate the variants, in order to ensure that each variant can be generated and is a well-formed Yakindu statechart.

Trying to keep all of this in mind can easily lead to undesired bugs. These bugs fall into three groups:

  • an ill-typed expression (e.g., a string value is assigned to a variable declared as an integer);
  • a delta applicability inconsistency (i.e., during the generation of a variant, a delta operation fails, so the generation of the variant fails);
  • a dependency inconsistency (i.e., an incomplete variant is generated).

Even when one or more of these bugs do not affect the variant derivation process, they produce a problematic variant. While it is straightforward to check the well-formedness of a single variant by opening it in the proper IDE, in the HyVar continuous integration scenario, generating each variant and checking it in isolation would significantly reduce the efficiency of the development process: in the worst case, the number of variants grows as the size of the power set of the set of features.
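The worst-case growth is easy to see in a few lines of Python: if every subset of the feature set were a valid configuration, n features would yield 2^n variants (the feature names are illustrative):

```python
from itertools import combinations

# Worst case: every subset of the feature set is a candidate
# configuration, so n features yield 2**n variants.
features = ["Bluetooth", "GPS", "Navigation"]
variants = [set(c) for r in range(len(features) + 1)
            for c in combinations(features, r)]
print(len(variants))  # 8, i.e. 2**3
```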

Therefore, the HyVar toolchain provides a static analysis tool that checks whether all variants can be generated and are correct Yakindu statecharts, without generating any variant.

The analysis requires that all deltas be coded following a programming pattern that we call “pre-well-formedness”; if the deltas do not follow this pattern, the analysis fails. To explain what pre-well-formedness means, we first divide delta operations into three groups: those that add an element, those that remove an element, and those that modify an element. A pre-well-formed delta cannot perform two operations of different kinds on the same element. Here are some cases:

  • an already added element can’t be modified (or removed) in the same delta;
  • an already modified element can’t be removed (or added) in the same delta;
  • an already removed element can’t be added (or modified) in the same delta.

While following pre-well-formedness may look pretty easy, at times a delta operation can hide some modifications (e.g., the addition of a transition implies the modification of its source state, so a pre-well-formed delta that adds a state cannot also add a transition from that state).
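As a rough sketch, the rules above amount to a per-element check over the operations of a delta. Representing a delta as a list of (operation, element) pairs is our own simplification, not the HyVar format; hidden modifications such as the transition example would have to be expanded into explicit pairs by the caller:

```python
# Sketch of a pre-well-formedness check: a delta may not perform two
# operations of different kinds on the same element. The (kind, element)
# pair representation is a simplification for illustration.
def pre_well_formed(operations):
    seen = {}  # element -> kind of operation already performed on it
    for kind, element in operations:
        if element in seen and seen[element] != kind:
            return False  # two different kinds of operation on one element
        seen[element] = kind
    return True

# Adding state S and then a transition out of S hides a modification of S,
# so the expanded delta violates the pattern:
delta = [("add", "S"), ("add", "t1"), ("modify", "S")]
print(pre_well_formed(delta))  # False
```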

Tool support for helping the programmer write pre-well-formed deltas, by highlighting the deltas that do not follow the pattern, is currently under development.

Plug and Play Variability for Eclipse

Delta modeling is an approach to structured reuse within software product lines. Delta modules manifest changes associated with different configurations in realization artifacts, such as source code, by adding, modifying or removing affected elements. However, a dedicated delta language is required for each realization language, e.g., DeltaJava for Java.

DeltaEcore is a tool suite for the swift creation of delta languages that can seamlessly be integrated into the provided variant derivation procedure of a software product line. Its main constituents are a user-friendly graphical editor for feature models, a powerful configurator that allows valid selections of features and their versions, and a variant derivation procedure that creates specific products of a software product line.

The HyVar project utilizes DeltaEcore to model variability in Hyper-Feature Models, define a delta language for Yakindu, and generate variants from statecharts. Furthermore, HyVar extends the tool suite to make it suitable for capturing variability in time in Temporal Feature Models (TFMs), to allow easy and flexible reconfiguration by interfacing with HyVarRec, and to make product lines context-aware through models specifically tailored to modeling context and its evolution.

The consistently model-based implementation of the HyVar tool chain allows for traceability of artifacts and of the changes made to them, so that analyses can identify and reduce potential errors in the complex overall structure of a software product line. Hence, the development of DeltaEcore and the research in HyVar are symbiotic: together they create enhanced expressiveness and increased quality in software product line development, along with an improved user experience through dedicated editors.

Microservices and Docker for geographically distributed projects

The HyVar project, as with most Horizon 2020 projects, has partners from several European countries. While this is good for enabling cooperation across borders and opening up for new international collaborations, geographically distributed projects come with several challenges. One of the key issues is pulling the R&D results from each partner into a cohesive deliverable where the whole is greater than the sum of its parts.

In HyVar, the partners represent both academia and industry. Travel budgets are finite and thus the opportunities to sit down together and make things work are limited. The main output is a hybrid variability toolchain, deployed on a scalable cloud infrastructure. Each partner is responsible for their piece of the toolchain puzzle, be it a DSVL cross compiler or hybrid product reconfiguration. It is a complex pipeline where each component is critical. When face-to-face time is scarce, how can we make sure that all the pieces fit together in the end? Moreover, we do not have one implementation partner but rather many. Different partners have different approaches to software engineering; this further complicates the task of final integration.


One of our first design choices was to think of the HyVar toolchain as a collection of microservices. Microservices are an implementation strategy where a software system is constructed as a collection of independent services, each of them easily deployable, cohesive, and loosely coupled to the other services. All services are accessed through a lightweight, open communication protocol; in our case, through RESTful web APIs and JSON messages. Microservices can be considered a specialisation of the classic service-oriented architecture (SOA) paradigm.
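To make the idea concrete, here is a minimal, self-contained sketch of such a service using only the Python standard library: a JSON-over-HTTP endpoint that a client calls with a POST request. The endpoint name and the payload fields are invented for illustration and do not reflect the actual HyVar APIs:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib import request

# Hypothetical "reconfigurator" microservice: accepts a JSON feature
# selection and returns a JSON result. Names are illustrative only.
class ReconfigHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        result = {"features": sorted(payload.get("features", [])),
                  "valid": True}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def start_service(port=0):
    # port=0 lets the OS pick a free port; useful for local testing.
    server = ThreadingHTTPServer(("127.0.0.1", port), ReconfigHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

server = start_service()
url = f"http://127.0.0.1:{server.server_port}/reconfigure"
req = request.Request(url,
                      data=json.dumps({"features": ["b", "a"]}).encode(),
                      headers={"Content-Type": "application/json"})
with request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # {'features': ['a', 'b'], 'valid': True}
server.shutdown()
```

Because the service keeps no shared state and speaks plain HTTP and JSON, a client written in any language can call it, which is exactly the property that let each partner pick their own toolkit.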

Why is formulating the toolchain as a collection of microservices a useful strategy? First of all, it forced us to think modularly about the whole pipeline: what are the tasks of the different components, what information do they need to perform their duties, and what are the minimum viable services? Moreover, we had to make sure that there was no shared state, which in general is a good thing. Once we had a common, agreed-upon specification, each partner could work independently without getting into discussions about implementation details. As long as each service is accessible through a web API, the toolkits and programming languages used do not matter much. This allowed each partner to use the tools and languages they felt most comfortable with to achieve their goals. As long as the specification and API are adhered to, the components should work together. Accordingly, the integration risk is greatly reduced.

Another benefit of a microservice architecture is that it provides more flexibility when it comes to scaling the system on the cloud. Different services have different resource needs; some may require more aggressive scaling approaches. As an example, we have already identified our final binary compilation service as being particularly resource-intensive, thus requiring extra scaling and caching attention. Also, the lack of shared state eases the scaling task and has benefits on both a theoretical (our work on modelling scalable cloud services) and practical (actually making it scale) level. Being able to scale the services independently of each other makes us a lot more confident that the final product can work in real-life scenarios and not just with our test cases.

Finally, the microservice approach makes the toolchain easier to extend and modify, not least in comparison with a classic, monolithic software system. For non-automotive domains, customer requirements may be different. Being able to shuffle services around and easily add new services makes the toolchain a lot more flexible. For instance, some clients may be wary of running the entire toolchain on a public cloud infrastructure and may want to mix in legacy components running on their existing servers. With a service-oriented approach, this is a feasible task.

Did microservices solve all of our integration challenges? In practice, there were more hurdles to overcome, especially for deployment. Having a microservice work on your computer or in the test lab is a good start, but the HyVar toolchain is meant to be running on the cloud. The cloud is not a clearly defined entity: there are multiple public cloud providers with different platforms and also various private cloud architectures. What we gained by allowing each partner implementation flexibility could easily be lost by having to make each service, with their different dependencies, work on any given cloud infrastructure. For this we turned to Docker.



Docker is a virtualization technology that recently has seen a lot of traction and usage. With Docker you bundle your software—in our case each individual microservice—with all the libraries, tools and dependencies it needs to function. This is known as a Docker container. Such containers are more lightweight than traditional virtual machines, and it is not uncommon to run several concurrent containers on e.g. the same Linux instance. Unlike virtual machines, a Docker container does not include an entire operating system.

A Docker container can be considered an instance of a Docker image. To build an image, you start from a base image and write a Dockerfile: a file containing a set of instructions, similar to a shell script, that specifies how your image should be provisioned. Once the image is created, it is used to launch containers. The image can freely be shared and is completely standalone.
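As an illustration, a Dockerfile for a hypothetical Python-based microservice could look like the following; the file names (`service.py`, `requirements.txt`) and the base image are assumptions for the sketch, not taken from the HyVar toolchain:

```dockerfile
# Base image: an official slim Python image (hypothetical choice).
FROM python:3.11-slim

# Copy the dependency list and install the libraries the service needs,
# so the image is self-contained.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the service code and define how a container starts it.
COPY service.py .
CMD ["python", "service.py"]
```

With this file in place, `docker build -t myservice .` produces the image and `docker run -p 8080:8080 myservice` launches a container from it, identically on a laptop or on a cloud host.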

While useful for scaling and encapsulation, for us the real utility in Docker lies in its “build once, run anywhere” philosophy: if your container works locally, it is guaranteed to work just the same on any Docker-compatible system, be it on the cloud or anywhere else. When the operating environment is no longer a critical component, the tasks of integration and deployment become simpler. Returning to the challenges of HyVar being a geographically distributed project, this enabled each partner to work in isolation on their components and then distribute them to the rest of the project without having to worry about whether the other partners could use them or not. As long as the Dockerfile and the necessary supporting files are there, it will work on any system that supports Docker, such as Amazon Web Services, Microsoft Azure, or your own laptop. Furthermore, it became possible to choose supporting technology freely without having to worry about whether it would work in production or not. As long as the Dockerfile compiles and the launched container fulfills the API specifications, this was as strong a guarantee as any that the delivered service would be deployable.


In conclusion, microservices and Docker have proved very useful when putting together the pieces of a complex project such as HyVar. Integration and deployment are critical parts of distributed projects, which Docker has helped make more manageable. We have still had our share of integration issues, but they have mostly been related to interface omissions or bugs in the services themselves. There are also downsides to service-oriented architectures and containerisation, one of the major ones being debugging when something goes wrong. Having to dig through multiple layers of virtualization does make bug finding more difficult, so putting some additional thought into logging and traceability is clearly beneficial. Nonetheless, so far our container-and-microservice approach has definitely helped keep the HyVar project on track.

Workshop on Feature-Oriented Software Development (FOSD)

Part of the HyVar consortium is responsible for organizing the Workshop on Feature-Oriented Software Development (FOSD).


Feature orientation is an emerging paradigm of software development. It supports the automatic generation of large-scale software systems from a set of units of functionality, called features. The key idea of feature-oriented software development (FOSD) is to explicitly represent similarities and differences of a family of software systems for a given application domain (e.g., database systems, banking software, text processing systems) with the goal of reusing software artifacts among the family members. Features distinguish different members of the family by their variable parts. A feature is a unit of functionality that satisfies a requirement, represents a design decision, and provides a potential configuration option. A challenge in FOSD is that a feature does not map cleanly to an isolated module of code. Rather, it may affect (“cut across”) many components/artifacts of a software system. Furthermore, the decomposition of a software system into its features gives rise to a combinatorial explosion of possible feature combinations and interactions. Research on FOSD has shown that the concept of features pervades all phases of the software life cycle and requires a proper treatment in terms of analysis, design, and programming techniques, methods, languages, and tools, as well as formalisms and theory.



HyVar Architecture

HyVar addresses continuous software evolution in distributed systems by proposing a framework for hybrid variability. The framework combines:

  • A domain-specific variability language to describe software evolution as a software product line.
  • A scalable cloud infrastructure for monitoring and individualized customization of software upgrades for remote devices.
  • Over-the-air upgrade technologies.

For more details see the synopsis.