2020 is not only the product of a fivefold 404 but also the big year of online conferences. While conferences were previously held in striking or unusual locations and invited attendees to network in person, this year online platforms serve as a necessary compromise.
One of the conferences I was especially looking forward to was the Open Source Automation Days, organized by the Munich-based company ATIX AG.
The event took place from 19 to 21 October – the first day was reserved for workshops, while the remaining two days consisted of lectures.
As in the previous year, the lecture program spanned two days. On the optional first workshop day, participants could register for one of the following workshops:
- KubeOne 101: Learn how to use Operator driven Cluster Lifecycle Tool
- Ansible Advanced
- GitLab CI/CD – from zero to hero
- Kubernetes Best Practices and GitOps-Basics
- Host Deployment and initial configuration with Foreman / orcharhino
The presentations were again divided into strategy and technology tracks – this year, changing rooms was only a mouse click away. With 28 lectures and 2 keynotes, the selection of topics was as varied and interesting as usual:
| Day 1 | Day 1 | Day 2 | Day 2 |
|---|---|---|---|
| Peaceful Distributed Microservice Architecture | Mission Possible – Mastering Microsoft with Open Source Tools | Automation of the legacy and new world with orcharhino | Code to Kubernetes: Deployment shouldn’t be an afterthought |
| Building an Edge Computing Platform | Git Development Workflows – from Simple Git to Git Flow DevOps | Automate your OS and hypervisor patching to stay secure | Automation pipeline to build Chromebooks for enterprises |
| Developing an Open Source Culture around more than just the Code | GitOps – Kubernetes The Easy Way | When DevOps goes wrong | Server Management Infrastructure: a Technical Overview |
| Automation: Forget the technology, focus on the culture! | System Administration with Uyuni | The path to Open Source DBaaS with Kubernetes | Top Trends in IT Automation |
| opsi 4.2: Our Way to Microservices and Back Again | Configuration templating vs configuration as code, 12th round | From Containers to Kubernetes Operators | Is the Cloud Too Slow for Legacy Software After All? |
| orcharhino and orcharhino Proxies for Complex Networks | Successfully Drowning in Log Files | PostgreSQL: Open Source Databases in Highly Critical Enterprise Landscapes | Distributed community in a virtual world – connected or disconnected? |
| Driving Multi-cloud and Hybrid Cloud with Hitachi Kubernetes Service | Automating the Management of Kubernetes Applications with Ansible | Developers love CI/CD: The Sec and Ops sequel | Power-Up With Cloud And Tekton Pipelines |
The fourth iteration of the conference had its largest attendance so far, with 160 conference participants and 20 workshop attendees. Nine partners supported the event.
Mark Hlawatschek (CEO, ATIX) opened the event with an overview of automation and open source culture. According to him, bringing people together has never been more important than it is today – an absolutely correct conclusion in view of the Corona pandemic and omnipresent home office routines. The company also previewed orcharhino 5.5.0, which was to be released around the time of the conference. It is now based on Foreman 2.1 and Katello 3.16 and supports Oracle ULN for package downloads as well as VMware vSphere 7.0.
A second keynote (“The New Virtual Normal”) by Oliver Rössling picked up on the not entirely voluntary move to online conferences. Besides the usual tools and netiquette of constant online appointments, technological innovations were also presented. Spatial.io brings collaborative work home using Augmented Reality, while Matterport enables 360° scans of complex buildings, and virtual reality technology makes it possible to “walk” through the office from the living room at home. Those who do not own a VR headset could at least navigate virtually through the real office with the help of a device from Double Robotics. Technologically, all of these are interesting solutions that could become more tempting, especially if the pandemic lasts for several years, says Rössling. I personally hope never to have to use any of the mentioned gadgets. 🙂
Tobi Knaup (Co-CEO, D2IQ) spoke in the second day’s keynote about the benefits of open source philosophies within organizations. The larger community leads to faster and higher-quality development compared to proprietary software. Companies can accelerate their product growth by adopting open source processes and maintaining open cultures, says Knaup.
Martin Alfke (CEO, example42 GmbH) showed some practical examples for versioning Puppet code in Git repositories. Concretely, several workflow examples were presented along with their advantages and disadvantages. Two essential components of the workflows were the branching concept and CI/CD pipelines to move code between environments. In one of the workflows, cherry-picking in Git was deliberately used instead of merges in order to transfer only the necessary amount of code between repositories and environments.
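The cherry-picking idea can be sketched as follows – a minimal, self-contained toy example (the repository contents, branch names, and file names are made up for illustration; the talk’s actual workflows were not published in this form). Only the single fix commit travels from the development branch to production, while unfinished work stays behind:

```shell
set -e
# Hypothetical toy repository standing in for a Puppet control repo.
repo=$(mktemp -d) && cd "$repo"
git init -q
git checkout -qb development
git config user.email demo@example.com
git config user.name demo

echo "ntp_servers: [pool.ntp.org]" > common.yaml
git add common.yaml
git commit -qm "initial configuration data"
git branch production               # production branches off here

# Unfinished work on development that must NOT reach production yet.
echo "timezone: UTC" >> common.yaml
git commit -qam "work in progress, not ready for production"

# The actual fix, as an isolated commit.
echo "servers: [0.pool.ntp.org]" > ntp.yaml
git add ntp.yaml
git commit -qm "fix ntp server list"
fix=$(git rev-parse HEAD)           # remember the hash of just the fix

git checkout -q production
git cherry-pick "$fix"              # only the fix lands in production
```

After the cherry-pick, production contains the NTP fix but not the work-in-progress change – exactly the selective transfer the merge-based workflows cannot offer.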
Melanie Corr (Community Manager for Foreman and Pulp) reported on maintaining open source communities. For example, it is important in a community to exchange not only code but also tests and documentation. Corr (who long maintained Red Hat Satellite’s documentation) said that a community’s entry barriers must be as low as possible for it to be successful. It is also important to enable and value every kind of contribution. There are currently plans to standardize the Foreman, Katello, and Red Hat Satellite documentation. Technically, AsciiDoc is used here, among other tools.
Erol Ülükmen (managing director at uib GmbH) reported on the development of opsi 4.2. Above all, it was about the modernized architecture: the web framework Starlette is now used, and WSGI was replaced by ASGI. The application can in principle run in containers and on Kubernetes, and dependency management is now independent of the respective Linux distribution. Redis is used for the central monitoring of performance data. In test scenarios, the in-memory database delivered up to 100,000 queries per second at approx. 1 ms latency; Grafana is used for visualization. Worker processes now respond 250% faster, WebDAV even 1000% faster. At the same time, system requirements could be reduced: a scenario with 200 locations and 4,000 clients now needs only one VM (8 CPUs, 16 GB RAM) instead of the previous five. opsi 4.2 will probably be released at the end of the year. Subsequent versions will optimize the backend data structure and the web interface.
Dr. Josef Spillner (Head of Distributed Application Computing Paradigms & Lecturer, Zurich University of Applied Sciences) compared some common frameworks for centralized logging with regard to their strengths and weaknesses. Logzip from Huawei, for example, is an algorithm with a very high compression rate, but the compressed logs cannot be searched. As part of research into more efficient and powerful logging algorithms, streamblast is a promising prototype that claims to be particularly efficient.
Timothy Appnel (Senior Product Manager, Red Hat) demonstrated how to manage Kubernetes applications with Ansible. Ansible aside, every shell command and UI interaction is an opportunity for automation, says Appnel. With the Ansible collections kubernetes.core and community.okd, there are Ansible modules to manage Kubernetes and OpenShift applications. The former has even been officially supported by Red Hat since the last AnsibleFest. The numerous modules can be used to control various aspects of the infrastructure and application. In addition, Kubernetes operators can be built with Ansible to enable the management of complex applications.
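As a hedged sketch of what such automation looks like, a minimal playbook using the collection’s `kubernetes.core.k8s` module might ensure a namespace exists (the namespace name `demo` and the localhost target are made up for illustration; the talk’s own examples were not published in this form):

```yaml
# Hypothetical minimal playbook using the kubernetes.core collection
# (installable via: ansible-galaxy collection install kubernetes.core).
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure the demo namespace exists
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: demo
```

Because the module is declarative (`state: present`), rerunning the playbook is idempotent – the same property that makes Ansible attractive for replacing ad-hoc `kubectl` commands.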
Fritz Weinhappl (Sales Consultant, Oracle) presented Ksplice, the database manufacturer’s offering for kernel updates without subsequent reboots. Ksplice has existed since 2008 and, similar to kGraft (SUSE) and kpatch (Red Hat), uses the corresponding Linux kernel infrastructure to exchange modules at runtime. Besides Oracle’s own Oracle Linux, Red Hat Enterprise Linux, Ubuntu, and Fedora are also supported. The first two can be tested free of charge for 30 days; the desktop distributions can be patched for free after registration.
Oracle claims to have installed 150 million Meltdown patches in just 4 hours thanks to Ksplice. Unfortunately, there is no effort to release or document the tooling. Red Hat and SUSE also offer live-patching services for a fee, but they disclose their tools and procedures so that the community can adapt them.
In his presentation “From Containers to Kubernetes Operators”, Philipp Krenn (Team Lead at Elastic) talked about typical errors in the implementation and daily administration of Kubernetes clusters. A common mistake when deploying applications is selecting the wrong container tag. Version pinning is essential, and consequently Elastic does not offer a latest tag. When deploying, it is also important to use Helm charts instead of hand-rolled native deployments.
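The version-pinning point can be illustrated with a small, hypothetical deployment fragment (the deployment name and version number are illustrative, not from the talk) – the image is pinned to an exact release instead of a mutable tag like `latest`, which Elastic deliberately does not publish:

```yaml
# Hypothetical deployment fragment: the image tag pins an exact version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch-demo
  template:
    metadata:
      labels:
        app: elasticsearch-demo
    spec:
      containers:
        - name: elasticsearch
          # Pinned release instead of a mutable tag such as :latest,
          # so every node and every rollout runs the identical image.
          image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
```

With a mutable tag, two nodes pulling at different times can silently end up running different software versions – exactly the class of error the talk warned about.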
Borys Neselovskyi demonstrated the added value of the PostgreSQL database in highly critical enterprise environments. In particular, various architectures for critical enterprise landscapes and PostgreSQL scenarios were presented. The graphical Postgres Enterprise Manager, based on pgAdmin, is offered as a supplement.
Last but not least, Michael Friedrich reported in his lecture “Developers love CI/CD: The Sec and Ops sequel” on new developments in CI/CD pipelines as well as practical tips for optimizing them. Due to ever-shorter time-to-market requirements, companies must implement short feedback loops and faster adaptations. For this, meaningful and efficient pipelines are of immense importance.
To design such a system, meaningful and reproducible unit tests and granular job definitions are necessary. A pipeline should fail as quickly as possible in the event of an error and not cause unnecessary infrastructure costs. This can be achieved with minimalistic artifact configurations (e.g. caching between stages) and optimized Docker images (size, software selection). Dependencies of individual jobs can be displayed graphically thanks to GitLab’s new DAG (Directed Acyclic Graph) functionality. Minimum and maximum pipeline run times help detect faulty runs. CI/CD analytics reveal trends that indicate impending quality problems early. A REST API can be used in conjunction with Prometheus, Icinga, and other monitoring frameworks to alert developers to anomalies as early as possible. Thanks to security scanning, libraries can be examined for known security vulnerabilities – and accidentally committed credentials can be detected as well.
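The DAG functionality mentioned above is driven by GitLab’s `needs:` keyword; a minimal, hypothetical `.gitlab-ci.yml` fragment (job names and commands invented for illustration) shows how a job can start as soon as its actual dependency finishes, instead of waiting for the entire previous stage:

```yaml
# Hypothetical .gitlab-ci.yml fragment: `needs:` builds a directed
# acyclic graph of jobs across stages.
stages: [build, deploy]

build-app:
  stage: build
  script:
    - make app

build-docs:
  stage: build
  script:
    - make docs

deploy-docs:
  stage: deploy
  # Starts as soon as build-docs finishes; it does NOT wait for
  # build-app, as stage-based ordering alone would force it to.
  needs: [build-docs]
  script:
    - make publish-docs
```

This is one of the levers for the “fail fast, waste nothing” goal: independent branches of the graph run and fail on their own schedule.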
I enjoyed the event again. Despite suboptimal conditions, ATIX AG once more managed to put on an exciting conference – even if the usual relaxed after-hours networking could not take place due to the pandemic. The workshop day in particular is a useful addition to the conference program: it lets you “lock yourself in” on the first day and concentrate fully on one topic before the following two days are dominated by many.
I am already looking forward to the next event – hopefully under familiar circumstances. 🙂