Research challenges
Our first challenge is to rethink the whole process of system design:
Research Challenge 1. How do we redefine the classical steps of the software engineering process into a new software and system design process that can cope with "eternal" systems that are bound to change?
To provide high assurance, verification and model-driven testing have proved to be extremely powerful tools in the development of high-assurance systems. The classical approach consists in building a formal model of the system and then using a theorem prover or a model checker to verify its properties. Automatic generation of source code from the model can then guarantee that the implemented code is consistent with the model. Optimisation passes over the source code may follow, but such manual manipulation weakens the verification process. Adding flexibility on top of optimisation weakens confidence in the process even further.
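The model-checking step mentioned above can be illustrated with a minimal sketch: an explicit-state checker that explores all reachable states of a (toy, invented here for illustration) transition system and reports the first state violating an invariant.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Explicit-state model checking: breadth-first exploration of the
    reachable state space, returning the first state that violates the
    invariant, or None if the invariant holds everywhere."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state  # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None  # invariant verified on all reachable states

# Toy model: a counter that must never exceed 10.
def bounded_step(n):
    return [n + 1] if n < 10 else []

assert check_invariant(0, bounded_step, lambda n: n <= 10) is None
# A faulty model that steps by 2 skips 11 and reaches 12:
assert check_invariant(0, lambda n: [n + 2], lambda n: n <= 10) == 12
```

The point of the sketch is the guarantee structure: the verdict covers every reachable state, which is exactly the assurance that later manual optimisation of generated code can silently invalidate.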
As a consequence, a significant effort has also gone into testing, and in particular into Model-Based Testing (MBT), which simplifies the production of test sequences by automating test generation according to coverage criteria that increase confidence in the system. The cost of employing an MBT approach is principally divided between designing a model of the system and writing the test concretization layers. The models provide an abstraction with respect to the actual system that eases evolution: in the context of an evolving software lifecycle, MBT reduces the effort of the testing phase, because the test concretization layer can be reused at no extra cost for testing both before and after an evolution.
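The two MBT ingredients described above can be sketched concretely (the model, actions, and system under test are all invented for illustration): abstract test sequences are generated from a finite-state model so that every transition is covered at least once, then replayed through a concretization layer that maps abstract actions to concrete calls.

```python
from collections import deque

# Abstract model: state -> {action: next_state}
MODEL = {
    "logged_out": {"login": "logged_in"},
    "logged_in": {"browse": "logged_in", "logout": "logged_out"},
}

def transition_coverage_tests(model, initial):
    """Generate one abstract test per transition: a shortest path (BFS)
    to the transition's source state, followed by its action."""
    paths = {initial: []}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        for action, target in model[s].items():
            if target not in paths:
                paths[target] = paths[s] + [action]
                queue.append(target)
    return [paths[s] + [a] for s in model for a in model[s]]

class FakeSUT:
    """Stand-in for the real system under test."""
    def __init__(self):
        self.user = None
    def login(self):
        self.user = "alice"
    def browse(self):
        assert self.user is not None, "browsing requires a session"
    def logout(self):
        self.user = None

def concretize(sut, sequence):
    """Concretization layer: abstract action name -> concrete call.
    Only this mapping needs rewriting if the SUT's interface evolves."""
    for action in sequence:
        getattr(sut, action)()

for seq in transition_coverage_tests(MODEL, "logged_out"):
    concretize(FakeSUT(), seq)  # replay every abstract test
```

The separation is what makes evolution cheap: when the system changes but the abstract behaviour does not, the generated sequences are reused and only `concretize` (or the model, if behaviour changed) is updated.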
Security and change are the key issues in the software engineering process for evolving systems. Today we know how to provide security or flexibility taken in isolation. Unfortunately, when the two are considered together, we find ourselves at opposite corners of the following diagram:
We can use full-fledged verification to provide a high level of assurance for fairly static requirements, or we can provide flexible software, developed in weeks using agile methodologies, but without any assurance.
Research Challenge 2. How do we provide some level of assurance for something that is going to change? Can we change the way verification and testing are performed so that changes in the system do not completely destroy the assurance that we have built?
As IT systems become an essential infrastructure for businesses and administrations, their longevity is a key aspect of their cost effectiveness. A key to the longevity of software is configuration management that allows applications to be flexibly adapted to evolving technologies and requirements. Unfortunately, this makes configuration management an increasingly complex and difficult task, because the security and privacy requirements that were taken into account during the implementation of the applications must still hold after every reconfiguration.
Some changes may not be predictable; in that case we need methods and tools that facilitate updating the assessments according to how the system has changed, for example a method that flags those parts of an existing assessment that remain valid after a change to the system.
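One way such flagging could work is sketched below (the claims, components, and dependency sets are invented for illustration): each assurance claim records which system components its argument depends on, and after a change only the claims touching a changed component are marked for re-assessment, while the rest remain valid.

```python
# Each assurance claim maps to the set of components its evidence
# depends on (hypothetical example data).
CLAIMS = {
    "C1: access control is enforced": {"auth_module", "policy_store"},
    "C2: audit logs are tamper-evident": {"logging_module"},
    "C3: data is encrypted at rest": {"storage_module", "crypto_lib"},
}

def triage(claims, changed_components):
    """Partition claims into (still_valid, to_reassess) after a change:
    a claim must be re-assessed iff it depends on a changed component."""
    still_valid, to_reassess = [], []
    for claim, deps in claims.items():
        if deps & changed_components:
            to_reassess.append(claim)
        else:
            still_valid.append(claim)
    return sorted(still_valid), sorted(to_reassess)

valid, redo = triage(CLAIMS, {"crypto_lib"})
assert redo == ["C3: data is encrypted at rest"]
assert len(valid) == 2  # C1 and C2 survive the change untouched
```

The hard research problem is, of course, obtaining sound dependency sets; the sketch only shows how localized the re-assessment effort becomes once they exist.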
Research Challenge 3. How do we assess the security of a system that is going to change? How do we redefine novel risk assessment methodologies that can tell us how to cope with change?
Evolving systems must be updated by loading new code. If the systems cannot be recalled, this code is mobile code and may be corrupted in transit. Assessment is intrinsically related to the operational context of a system: some of the major past failures in software assessment were caused by a limited or incorrect understanding of that context. If the system or its operational context evolves, privacy and security properties must be re-verified to ensure that the previous security level of the system is maintained.
Research Challenge 4. How can we equip a portable or home system with on-device verification capabilities, so that even after a long time (or distance) since it last connected to its producer it can still ascertain whether a software update is trustworthy?
To meet this challenge, new code must be either certified or verified by the system itself. Verification is a very powerful activity, but it is often too costly for small systems such as smart cards. On the other hand, it is the only activity that allows a system to check the code it loads without relying on a third party. Today's embedded verification algorithms, which must run autonomously and with limited overhead, are not generic: they are dedicated to the security policies defined for a particular system. Here, we need to provide generic verification capabilities on the system itself, using techniques such as proof-carrying code or model-carrying code, while taking into account the resources of the various targeted systems.
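The proof-carrying-code idea can be illustrated with a deliberately tiny sketch (the instruction set and certificate format are invented here): downloaded code ships with a certificate stating the operand-stack depth before each instruction, and the device accepts the update only if a single linear-time pass shows the certificate is self-consistent and never exceeds the device's own stack limit. Checking the certificate is far cheaper than recomputing the property, which is the asymmetry PCC exploits on resource-constrained devices.

```python
# Net stack effect of each (toy) instruction.
EFFECT = {"push": +1, "pop": -1, "add": -1}

def verify(code, certificate, stack_limit):
    """Accept the code iff the shipped certificate is self-consistent
    (each annotation follows from the previous one and the instruction's
    stack effect) and the stack depth never exceeds the device's limit.
    One linear pass; no trusted third party involved."""
    if len(certificate) != len(code) + 1 or certificate[0] != 0:
        return False
    for i, instr in enumerate(code):
        expected = certificate[i] + EFFECT[instr]
        if expected != certificate[i + 1] or not 0 <= expected <= stack_limit:
            return False
    return True

code = ["push", "push", "add", "pop"]
cert = [0, 1, 2, 1, 0]                      # depth before/after each instr
assert verify(code, cert, stack_limit=2)     # fits this device
assert not verify(code, cert, stack_limit=1) # too deep for a smaller device
assert not verify(code, [0, 1, 1, 1, 0], 2)  # tampered certificate rejected
```

A corrupted or malicious update either carries an inconsistent certificate (rejected) or an honest one revealing a policy violation (also rejected), so the device never has to trust the network path the code travelled.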