Cloud-native applications are increasingly composed of microservices, where each service is implemented and packaged using a specific artefact technology. Common technologies include cloud functions (e.g. AWS Lambda), Docker containers, Helm charts, Kubernetes operators, OpenShift templates, and plain language-specific packages (e.g. JAR/WAR). Mixed-technology compositions and corresponding orchestration languages are emerging and might be commonplace by 2020.
To achieve high-quality applications, developers need to go beyond simple linting of these artefacts. Published studies show that, due to the polyglot nature of microservices, many inconsistencies remain undiscovered, not only in the code but also in metadata, configuration, and orchestration documents.
Researchers in Zurich have developed a first set of tools, including HelmQA, that help developers spot inconsistencies in local files, on GitHub, or in specific artefact repositories (e.g. Operatorhub.io, KubeApps Hub). These tools use timeseries representations and machine learning algorithms to track quality statistics over time. Through statistical reports as snapshots of the timeseries, auto-generated fixing advice, and visualisation of potential issues, software testers will find their place in the cloud-native world. Through CI/CD integration, developers can even avoid faulty commits in the first place and ensure that their handcrafted artefacts are not only syntactically valid but also meet quality expectations. Through graphical dashboards, quality analysts and data scientists can quickly find regressions and drill down to problematic artefacts, including those in dependencies.
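To make the timeseries idea concrete, the sketch below flags a quality regression when the latest snapshot's issue count rises well above its recent moving average. The function name, window size, and threshold factor are illustrative assumptions, not the actual logic of HelmQA or the dashboards mentioned above.

```python
# Minimal sketch of regression detection over a quality-metric timeseries.
# A dashboard could run this per artefact and highlight flagged entries.
# Window and factor are illustrative defaults, not values from the tools.

from statistics import mean


def flag_regression(issue_counts: list[int], window: int = 5,
                    factor: float = 1.5) -> bool:
    """Flag the latest snapshot if its issue count exceeds the moving
    average of the preceding `window` snapshots by `factor`."""
    if len(issue_counts) <= window:
        # Not enough history to establish a baseline.
        return False
    baseline = mean(issue_counts[-window - 1:-1])
    return issue_counts[-1] > factor * baseline
```

For example, a series ending in a jump from around 4–5 issues to 12 would be flagged, while a flat series would not.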
This tutorial starts with an introduction to technologies for building executable microservices. It then points out typical consistency, quality, and security issues. Finally, it demonstrates the use of quality assessment tools and their CI/CD integration.
Background: In mid-2018 the involved researchers started monitoring Helm charts, Lambda functions, and other microservice artefacts in a long-term study. While observing trends and metrics in the cloud-native ecosystem, they have found several instances of quality degradation caused by incorrect and inconsistent metadata. Helm charts may, for example, reference URLs that no longer exist, omit maintainer contact information, or pull in outdated dependencies. Kubernetes deployment descriptors may contain mistyped labels that are hard to spot with manual checks. Through production deployments, they demonstrated that the tools created in the research lab environment help solve the problem. Consequently, they would like to discuss how this advanced checking functionality can be integrated into existing checks (e.g. technology-specific linting) and how a currently emerging global observatory for microservice artefacts can be exploited to increase the accuracy of quality reports.
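The kinds of metadata inconsistencies described above can be sketched as simple programmatic checks. The snippet below is a hypothetical checker operating on already-parsed Chart.yaml metadata and Kubernetes labels; the function names, the expected-label set, and the outdated-dependency heuristic are assumptions for illustration, not HelmQA's actual implementation.

```python
# Hypothetical consistency checks for chart metadata and deployment labels,
# inspired by the issue classes named in the text: missing maintainers,
# outdated dependencies, and mistyped labels. Not the real HelmQA logic.

import difflib

# Assumed set of well-known label keys to compare against.
EXPECTED_LABELS = {"app", "release", "heritage", "chart"}


def check_chart_metadata(chart: dict) -> list[str]:
    """Return human-readable findings for one parsed Chart.yaml dict."""
    findings = []
    if not chart.get("maintainers"):
        findings.append("no maintainer contact information")
    for dep in chart.get("dependencies", []):
        # Crude heuristic: flag dependencies still pinned to a 0.x version.
        if str(dep.get("version", "")).startswith("0."):
            findings.append(f"possibly outdated dependency: {dep.get('name')}")
    return findings


def check_labels(labels: dict) -> list[str]:
    """Flag label keys that look like typos of expected well-known labels."""
    findings = []
    for key in labels:
        if key not in EXPECTED_LABELS:
            close = difflib.get_close_matches(key, EXPECTED_LABELS, n=1)
            if close:
                findings.append(f"label '{key}' may be a typo of '{close[0]}'")
    return findings
```

A mistyped label such as `realease` would be reported as a likely typo of `release`, which is exactly the kind of inconsistency that manual review tends to miss.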
Josef Spillner is head of the Service Prototyping Lab and a lecturer at the Zürcher Hochschule für Angewandte Wissenschaften.
He completed a classical computer science degree, followed by a doctorate and habilitation at TU Dresden. Since 2015 he has supported companies in Switzerland through applied research and innovation projects in the field of building and optimising cloud applications.
His particular expertise lies in cloud-native application architectures and serverless applications in hybrid cloud environments with high demands on quality and automation. Scientific publications and invited industry talks round out his portfolio.