<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Cyconet Blog]]></title><description><![CDATA[Remember, a Jedi can feel the Force flowing through him. Then we’ll go with that data file! As you wish. Robot 1-X, save my friends! And Zoidberg!]]></description><link>https://devlog.cyconet.org/</link><image><url>https://devlog.cyconet.org/favicon.png</url><title>Cyconet Blog</title><link>https://devlog.cyconet.org/</link></image><generator>Ghost 5.67</generator><lastBuildDate>Sun, 08 Oct 2023 06:44:31 GMT</lastBuildDate><atom:link href="https://devlog.cyconet.org/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[DevOps Camp 2019]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>The <a href="https://devops-camp.de/?ref=devlog.cyconet.org">DevOps Camp</a> is history once again, and of course far too soon. The event is organized as a <a href="https://de.wikipedia.org/wiki/Barcamp?ref=devlog.cyconet.org">BarCamp</a>; more detailed information can be found <a href="https://devops-camp.de/das-devops-camp/?ref=devlog.cyconet.org">here</a>.</p>
<p>Since only a rough thematic frame is set beforehand, there is, as always, a short round of introductions, this time with a sensational 200 participants, followed by the session planning.</p>]]></description><link>https://devlog.cyconet.org/2019/04/30/dvoc19/</link><guid isPermaLink="false">5cd80c7992418c0001247422</guid><category><![CDATA[Community]]></category><category><![CDATA[Container]]></category><category><![CDATA[Docker]]></category><category><![CDATA[hacking]]></category><category><![CDATA[DevOps]]></category><category><![CDATA[Barcamp]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Tue, 30 Apr 2019 14:43:19 GMT</pubDate><media:content url="https://devlog.cyconet.org/content/images/2019/04/dvoc_big-icon.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://devlog.cyconet.org/content/images/2019/04/dvoc_big-icon.png" alt="DevOps Camp 2019"><p>The <a href="https://devops-camp.de/?ref=devlog.cyconet.org">DevOps Camp</a> is history once again, and of course far too soon. The event is organized as a <a href="https://de.wikipedia.org/wiki/Barcamp?ref=devlog.cyconet.org">BarCamp</a>; more detailed information can be found <a href="https://devops-camp.de/das-devops-camp/?ref=devlog.cyconet.org">here</a>.</p>
<p>Since only a rough thematic frame is set beforehand, there is, as always, a short round of introductions, this time with a sensational 200 participants, followed by the session planning. There is even a <a href="https://mobile.twitter.com/i/broadcasts/1YqGojzyEXZKv?ref=devlog.cyconet.org">recording</a> of part of the planning from Sunday morning.<br>
<img src="https://devlog.cyconet.org/content/images/2019/04/IMG_3151.png" alt="DevOps Camp 2019" loading="lazy"><br>
The result of the planning is, of course, <a href="https://devops-camp.de/app/sessionplan/?ref=devlog.cyconet.org">published</a> online at the end.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>These are the sessions I attended on Saturday.</p>
<h2 id="podmanwiefunktioniertrootless">Podman &#x2013; how does rootless work</h2>
<blockquote>
<p><a href="https://podman.io/?ref=devlog.cyconet.org">Podman</a> is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode. Simply put: <code>alias docker=podman</code>.</p>
</blockquote>
<p>As far as I remember, <a href="https://en.wikipedia.org/wiki/Slirp?ref=devlog.cyconet.org">Slirp</a>, or more precisely <a href="https://github.com/rootless-containers/slirp4netns?ref=devlog.cyconet.org">slirp4netns</a>, is used to work around the problem of the unprivileged network namespace. Unfortunately, Podman raises new problems of its own. However, according to the post <a href="https://medium.com/@tonistiigi/experimenting-with-rootless-docker-416c9ad8c0d6?ref=devlog.cyconet.org">Experimenting with Rootless Docker</a>, similar efforts also seem to be underway at Docker.</p>
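<p>As a rough illustration of what rootless mode needs from the host (my own sketch, not something covered in the session; the paths and sysctl knob are the usual Debian ones): unprivileged user namespaces have to be allowed, and the user needs subordinate UID/GID ranges for the user-namespace mapping.</p>

```shell
# Sketch: two quick prerequisite checks for rootless containers.
# Paths/knob names are the common defaults; adjust per distribution.
# 1. Unprivileged user namespaces must be allowed by the kernel:
cat /proc/sys/kernel/unprivileged_userns_clone 2>/dev/null \
  || echo "knob absent (kernel default applies)"
# 2. The user needs subordinate UID/GID ranges for UID mapping:
grep "^$(id -un):" /etc/subuid /etc/subgid 2>/dev/null \
  || echo "no subuid/subgid ranges configured for $(id -un)"
```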
<h2 id="culturehacking"><a href="http://culturehacker.com.au/about/?ref=devlog.cyconet.org">Culture Hacking</a></h2>
<blockquote>
<p><a href="https://www.meetup.com/de-DE/Culture-Hacking-Nurnberg/?ref=devlog.cyconet.org">We wish for organizations that provide an environment in which work becomes play and people contribute their talents out of enthusiasm. A framework of appreciation must be created in which employees do not function through reward and punishment, but want to take part in meaningful work out of their own sense of responsibility and can continuously develop themselves.</a></p>
</blockquote>
<p>Broadly, it was about how this cultural change can be encouraged in small steps. <a href="https://mobile.twitter.com/udowiegaertner?ref=devlog.cyconet.org">@udowiegaertner</a> started by telling us what he (and his conspiratorial colleagues) had already been &quot;up to&quot;. Then, in small groups, we thought about what else could be done. The suggestions ranged from devious to awesome.<br>
<img src="https://devlog.cyconet.org/content/images/2019/04/IMG_3124.png" alt="DevOps Camp 2019" loading="lazy"><img src="https://devlog.cyconet.org/content/images/2019/04/IMG_3130.png" alt="DevOps Camp 2019" loading="lazy"></p>
<h2 id="13cloudyyearscloudcontainerconfusion">13 cloudy years, Cloud, Container &amp; Confusion</h2>
<p>A session by <a href="https://twitter.com/FrankPrechtel?ref=devlog.cyconet.org">@FrankPrechtel</a> with a brief outline of the technological developments around &quot;cloud&quot; over the last years. It really does not matter what Frank talks about; it always promises to be interesting and entertaining. Unfortunately, the room was so full so early on that I found myself forced to go socializing instead. Luckily, the <a href="https://twitter.com/PingfishTwit/status/1122124892860817408?ref=devlog.cyconet.org">sketch notes</a> washed through my Twitter timeline afterwards.<br>
<img src="https://devlog.cyconet.org/content/images/2019/04/6125AF57-78F5-41FE-8494-339086C214B6.JPG" alt="DevOps Camp 2019" loading="lazy"></p>
<h2 id="kubernetesgettingstarted">Kubernetes getting started</h2>
<p>Originally planned as an ASK session, it turned into a talk, shot from the hip, by a colleague from Noris, revolving around the theoretical (foundational) concepts. A very well-rounded talk in a relatively short time. What I liked a lot was that, beyond the usual blah-blah about pods, masters, nodes etc., it also briefly went into how things work together around the <a href="https://medium.com/jorgeacetozi/kubernetes-master-components-etcd-api-server-controller-manager-and-scheduler-3a0179fc8186?ref=devlog.cyconet.org#65c1">API Server</a>, based on the very insightful blog post <a href="https://medium.com/jorgeacetozi/kubernetes-master-components-etcd-api-server-controller-manager-and-scheduler-3a0179fc8186?ref=devlog.cyconet.org">Kubernetes Master Components: Etcd, API Server, Controller Manager, and Scheduler</a> by Jorge Acetozi.<br>
<img src="https://devlog.cyconet.org/content/images/2019/04/IMG_3134.png" alt="DevOps Camp 2019" loading="lazy"></p>
<h2 id="askpersistenzdockerkuberneteserfahrungsaustausch">Ask: Persistence with Docker/Kubernetes &#x2013; exchanging experiences</h2>
<p>A small round on the challenge of &quot;persistent storage&quot; in container environments. Apparently still a huge problem in on-premises container environments. Storage providers that support &quot;read-write by many nodes&quot; seem to be the really difficult part.</p>
<p>We noted down the following matrix:</p>
<table>
<thead>
<tr>
<th>Driver</th>
<th>RWO</th>
<th>RWX</th>
</tr>
</thead>
<tbody>
<tr>
<td>NFS</td>
<td>[x]</td>
<td>[x]</td>
</tr>
<tr>
<td>Ceph</td>
<td>[x]</td>
<td></td>
</tr>
<tr>
<td>VMDK</td>
<td>[x]</td>
<td></td>
</tr>
<tr>
<td>Cinder</td>
<td>[x]</td>
<td></td>
</tr>
<tr>
<td>NetApp</td>
<td>[x]</td>
<td></td>
</tr>
<tr>
<td>Trident</td>
<td>[x]</td>
<td></td>
</tr>
<tr>
<td>Portworx</td>
<td>[x]</td>
<td></td>
</tr>
</tbody>
</table>
<p>NFS seems to be the common workaround, but is said to be (or become) deprecated in Kubernetes. Google itself reportedly also uses NFS for RWX in GKE.<br>
In general, though, the goal seems to be to standardize as much as possible; ideally you use one storage provider that always feels the same (rather than something different in every project). Ceph also seems to be a hot candidate.</p>
<p>Independent of that, the <a href="https://docs.openshift.com/container-platform/3.11/architecture/additional_concepts/storage.html?ref=devlog.cyconet.org#types-of-persistent-volumes">OpenShift Persistent Storage</a> and <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/?ref=devlog.cyconet.org#access-modes">Kubernetes Persistent Volumes Access Modes</a> documentation seem to be good sources on this topic.</p>
<h2 id="overcookedfhrungsstileteamsmitspielkonsoleausprobieren">Overcooked &#x2013; trying out leadership styles + teams with a game console</h2>
<p>Another session by <a href="https://mobile.twitter.com/udowiegaertner?ref=devlog.cyconet.org">@udowiegaertner</a>! Having been very taken with the first one, I obviously had to attend this one too.</p>
<p>Roughly, the point was that team behaviour can also be wonderfully explored and observed in a game that is about getting tasks done. This was tried out with <a href="https://en.wikipedia.org/wiki/Overcooked?ref=devlog.cyconet.org">Overcooked!</a>, after a short thematic introduction, using a test group of 4 people. In addition to the demands of the game itself, the game master defined extra constraints for the individual rounds, for example: no penalty points are allowed, work must be done in teams of two, or an employee leaves mid-game for &quot;parental leave&quot; and is replaced by a new colleague.</p>
<p>It was quite easy to see that certain measures, constellations or unforeseen events can change the outcome very strongly. The analogies to team behaviour in companies were obvious, and so, from the perspective of the &quot;employee&quot; as well as from that of the manager or consultant (the remaining session participants acted as a whole consultant team), one could get a pretty good picture of the factors in teams that can decisively influence the result.</p>
<p><img src="https://devlog.cyconet.org/content/images/2019/04/IMG_3143.png" alt="DevOps Camp 2019" loading="lazy"></p>
<p>Here is the scoreboard again, with the respective team characteristics and the &quot;aha moments&quot;.</p>
<p><img src="https://devlog.cyconet.org/content/images/2019/04/IMG_3144.png" alt="DevOps Camp 2019" loading="lazy"></p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>After the sessions on Saturday, networking and socializing were the order of the day. I will spare you the #Foodporn photos; just come along next time.<br>
<img src="https://devlog.cyconet.org/content/images/2019/04/3166.jpg" alt="DevOps Camp 2019" loading="lazy"><br>
<img src="https://devlog.cyconet.org/content/images/2019/04/3165.jpg" alt="DevOps Camp 2019" loading="lazy"><br>
<img src="https://devlog.cyconet.org/content/images/2019/04/3164.jpg" alt="DevOps Camp 2019" loading="lazy"><br>
Later in the evening, a lightning-beer-talk session sprang up quite spontaneously. Unfortunately, we were already on our way to the hotel, with a short stop at another watering hole. I have only heard extremely good feedback, though ... I hope this will be officially added to the agenda next year!<br>
<img src="https://devlog.cyconet.org/content/images/2019/04/3163.jpg" alt="DevOps Camp 2019" loading="lazy"><br>
<img src="https://devlog.cyconet.org/content/images/2019/04/3162.jpg" alt="DevOps Camp 2019" loading="lazy"></p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Sunday morning actually got off to a somewhat slower start, both for me and for the camp itself. Nevertheless, everything began with some refreshment and, once again, the usual session planning.</p>
<h2 id="karriereinderit">Careers in IT</h2>
<p>A session by Ingo. Well, what can I say ... with Ingo, too, every session is insightful and at least interesting.<br>
Essentially it was about why, and for which reasons, people in IT end up in certain positions. The sense and nonsense of titles and the mechanisms at play there were very inspiring.</p>
<h2 id="vonftpbisaws">From FTP to AWS</h2>
<p><a href="https://twitter.com/mattagohni?ref=devlog.cyconet.org">@mattagohni</a> told us about the step-by-step evolution of various solutions at solutionDrive: how problems used to be solved (yes, FTP) and where things have moved to now, with Terraform, Packer, AWS and many other things around this &quot;newfangled hipster stuff&quot;.</p>
<h2 id="wohinsolldiereisegehenentwicklungsmglichkeitengeneralisierungvsspezifikation">Where should the journey go, development opportunities, generalization vs specialization</h2>
<p>Again <a href="https://twitter.com/FrankPrechtel?ref=devlog.cyconet.org">@FrankPrechtel</a>; this time I applied the CCC strategy and sat down in the preceding session in the same room, so I would also have a seat in the session I actually wanted to attend. :)<br>
Content-wise it was about what you can do to stay relevant on the (job) market, e.g. aiming for a T-shaped skill profile.</p>
<h2 id="cicddeploymentswithgitlabci">CI/CD Deployments with GitLab-CI</h2>
<p><a href="https://twitter.com/DrSlow?ref=devlog.cyconet.org">DrSlow</a> and <a href="https://twitter.com/behufe?ref=devlog.cyconet.org">behu</a> from <a href="https://www.de.paessler.com/?ref=devlog.cyconet.org">Paessler</a> shared their experiences with installing and operating Kubernetes.<br>
Very remarkable was how openly, even towards the outside, a culture of failure was practiced, and that at Paessler it was possible to completely scrap a project that was already far advanced, once they realized they had ended up in a dead end.<br>
Let me briefly quote the <a href="https://www.de.paessler.com/company/career/culture-deck?ref=devlog.cyconet.org">Paessler Culture Deck</a>:</p>
<blockquote>
<p>If we never fail, we have not aimed high enough.</p>
</blockquote>
<p>At Paessler, the Kubernetes setup currently seems to run mainly <a href="https://en.wikipedia.org/wiki/CI/CD?ref=devlog.cyconet.org">CI/CD</a> workloads, specifically via <a href="https://about.gitlab.com/product/continuous-integration/?ref=devlog.cyconet.org">GitLab CI</a>. The need for persistent storage appears to be low, and because of the predominantly CI/CD usage, <a href="https://helm.sh/docs/glossary/?ref=devlog.cyconet.org#tiller">Tiller</a> is not necessary either, something that had originally caused quite some pain.</p>
<p><img src="https://devlog.cyconet.org/content/images/2019/04/3154.jpg" alt="DevOps Camp 2019" loading="lazy"></p>
<p>The current setup apparently uses <a href="https://rancher.com/docs/rancher/v2.x/en/?ref=devlog.cyconet.org">Rancher 2</a> as the Kubernetes orchestrator, and the ambition is to offer Kubernetes to in-house customers largely as self-service.</p>
<p><img src="https://devlog.cyconet.org/content/images/2019/04/3155.jpg" alt="DevOps Camp 2019" loading="lazy"></p>
<p>A really great session that did not skimp on &apos;lessons learned&apos; and &apos;please don&apos;t do&apos;.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>All in all, it was once again a very nice, relaxed, but also interesting and entertaining barcamp. I am already looking forward to the DevOps Camp compact in autumn!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[HAProxy - a journey into multithreading (and SSL)]]></title><description><![CDATA[<p>I&apos;m running some load balancers which are using <a href="http://www.haproxy.org/?ref=devlog.cyconet.org">HAProxy</a> to distribute <a href="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol?ref=devlog.cyconet.org">HTTP</a> traffic to multiple systems.</p><p>While using <a href="https://www.haproxy.com/de/blog/haproxy/haproxy-and-ssl/?ref=devlog.cyconet.org">SSL with HAProxy</a> has been possible for some time, it wasn&apos;t in the early days. So for some customers, which needed encryption, we decided to</p>]]></description><link>https://devlog.cyconet.org/2019/03/20/haproxy-using-multi-threads-and/</link><guid isPermaLink="false">5cd80c7992418c0001247421</guid><category><![CDATA[Apache]]></category><category><![CDATA[Debian]]></category><category><![CDATA[HighAvailability]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Planet]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Wed, 20 Mar 2019 18:39:05 GMT</pubDate><media:content url="https://devlog.cyconet.org/content/images/2019/03/haproxy-weblogo.png" medium="image"/><content:encoded><![CDATA[<img src="https://devlog.cyconet.org/content/images/2019/03/haproxy-weblogo.png" alt="HAProxy - a journey into multithreading (and SSL)"><p>I&apos;m running some load balancers which are using <a href="http://www.haproxy.org/?ref=devlog.cyconet.org">HAProxy</a> to distribute <a href="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol?ref=devlog.cyconet.org">HTTP</a> traffic to multiple systems.</p><p>While using <a href="https://www.haproxy.com/de/blog/haproxy/haproxy-and-ssl/?ref=devlog.cyconet.org">SSL with HAProxy</a> has been possible for some time, it wasn&apos;t in the early days. 
So for some customers, which needed encryption, we decided to offload it with <a href="https://httpd.apache.org/?ref=devlog.cyconet.org">Apache</a>.<br>When HAProxy later got <a href="https://www.haproxy.com/de/blog/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/?ref=devlog.cyconet.org">SSL support added</a>, keeping this setup still had benefits for larger sites, because HAProxy used a single-process model and doing encryption is indeed way more resource-consuming.<br>Still using Apache for <a href="https://en.wikipedia.org/wiki/TLS_acceleration?ref=devlog.cyconet.org">SSL offloading</a> was a good choice because it comes with the threading-capable Multi-Processing Modules <a href="https://httpd.apache.org/docs/2.4/en/mod/worker.html?ref=devlog.cyconet.org">worker</a> and <a href="https://httpd.apache.org/docs/2.4/de/mod/event.html?ref=devlog.cyconet.org">event</a>. We chose the event MPM because it should deal better with the &apos;<a href="https://en.wikipedia.org/wiki/HTTP_persistent_connection?ref=devlog.cyconet.org#Keepalive_with_chunked_transfer_encoding">keep-alive problem</a>&apos; in HTTP. So far so good.</p><p>Last year some large setups started to have trouble accepting new connections out of the blue. Unfortunately, I found nothing in the logs and also couldn&apos;t reproduce this behaviour. After some time I decided to try another Apache MPM and switched over to the worker model. And guess what ... 
the connection issues vanished.<br>A few days later I was surprised to learn about the Apache bug in the Debian BTS, &quot;<a href="https://bugs.debian.org/902493?ref=devlog.cyconet.org">Event MPM listener thread may get blocked by SSL shutdowns</a>&quot;, which was an exact description of my problem.</p><!--kg-card-begin: markdown--><p>While being back in safe waters, I thought it would be good to have another look at HAProxy and learned that <a href="https://www.haproxy.com/blog/multithreading-in-haproxy/?ref=devlog.cyconet.org">threading support was added</a> in version 1.8 and received some <a href="https://www.haproxy.com/blog/haproxy-1-9-has-arrived/?ref=devlog.cyconet.org">more improvements</a> in 1.9.<br>
So we started to look into it on a system with a couple of real CPUs:</p>
<pre><code># grep processor /proc/cpuinfo | tail -1
processor	: 19
</code></pre>
<p>At first we needed to install a newer version of HAProxy, since 1.8.x is available via <a href="https://backports.debian.org/?ref=devlog.cyconet.org">backports</a> but 1.9.x is only available via <a href="https://haproxy.debian.net/?ref=devlog.cyconet.org">haproxy.debian.net</a>. I thought I should start with a simple configuration and keep 2 spare CPUs for other tasks:</p>
<pre><code>global
        # one process
        nbproc 1
        # 18 threads
        nbthread 18
        # mapped to the first 18 CPU cores
        cpu-map auto:1/1-18 0-17
</code></pre>
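<p>The values above follow a simple rule: take the core count, keep two spares, and pin one thread per remaining core starting at core 0. As a small sketch of the arithmetic (my own, not from the HAProxy docs), with <code>CPUS=20</code> standing in for the 20-core box above; on a real system one would substitute <code>$(nproc)</code>:</p>

```shell
# Derive nbthread and cpu-map from the core count, keeping 2 spare CPUs.
# CPUS=20 mirrors the machine above (cores 0-19); use $(nproc) in practice.
CPUS=20
SPARE=2
THREADS=$((CPUS - SPARE))
echo "nbthread ${THREADS}"                               # -> nbthread 18
echo "cpu-map auto:1/1-${THREADS} 0-$((THREADS - 1))"    # -> cpu-map auto:1/1-18 0-17
```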
<p>Now let&apos;s start:</p>
<pre><code># haproxy -c -V -f /etc/haproxy/haproxy.cfg
# service haproxy reload
# pstree haproxy
No processes found.
# grep &quot;worker #1&quot; /var/log/haproxy.log | tail -2
Mar 20 13:06:51 lb13 haproxy[22156]: [NOTICE] 078/130651 (22156) : New worker #1 (22157) forked
Mar 20 13:06:51 lb13 haproxy[22156]: [ALERT] 078/130651 (22156) : Current worker #1 (22157) exited with code 139 (Segmentation fault)
</code></pre>
<p>Okay .. cool! ;) So I started lowering the number of used CPUs, since without threading I did not experience segfaults. With 17 threads it seemed to be better:</p>
<pre><code># service haproxy restart
# pstree haproxy
haproxy---16*[{haproxy}]
# grep &quot;worker #1&quot; /var/log/haproxy.log | tail -2
Mar 20 13:06:51 lb13 haproxy[22156]: [ALERT] 078/130651 (22156) : Current worker #1 (22157) exited with code 139 (Segmentation fault)
Mar 20 13:14:33 lb13 haproxy[27001]: [NOTICE] 078/131433 (27001) : New worker #1 (27002) forked
</code></pre>
<p>Now I started to slowly move traffic from Apache to HAProxy, watching the logs carefully. As more and more traffic shifted over, the number of <code>SSL handshake failure</code> entries went up. While these could have been just some clients not supporting our ciphers and/or TLS versions, I had my doubts, although our own monitoring was unsuspicious. So I had a look at external monitoring, and after some time I caught some interesting errors:</p>
<pre><code>error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error
error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac
</code></pre>
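<p>To see whether such failures correlate with the traffic shift, it helps to count the <code>SSL handshake failure</code> entries per minute. A throwaway sketch; the sample log lines below are fabricated for illustration, not taken from the real system:</p>

```shell
# Count 'SSL handshake failure' entries per minute from a HAProxy log.
# The sample log lines are made up; point the grep at the real log file.
cat > /tmp/haproxy-sample.log <<'EOF'
Mar 20 14:01:02 lb13 haproxy[27001]: 198.51.100.7:51234 https-in/1: SSL handshake failure
Mar 20 14:01:59 lb13 haproxy[27001]: 198.51.100.9:40112 https-in/1: SSL handshake failure
Mar 20 14:02:31 lb13 haproxy[27001]: 203.0.113.4:33001 https-in/1: SSL handshake failure
EOF
grep 'SSL handshake failure' /tmp/haproxy-sample.log \
  | awk '{print $1, $2, substr($3, 1, 5)}' | sort | uniq -c
# -> 2 failures in minute 14:01, 1 in minute 14:02
```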
<p>The last time I had issues I lowered the thread count, so I did this again. And as you might have guessed already, this worked out. With 12 threads I had no more issues:</p>
<pre><code>global
        # one process
        nbproc 1
        # 12 threads
        nbthread 12
        # mapped to the first 12 CPU cores (with more than 17 cpus haproxy segfaults, with 16 cpus we have a high rate of ssl errors)
        cpu-map auto:1/1-12 0-11
</code></pre>
<p>So we got rid of SSL offloading and the proxy on localhost, with the downside that HAProxy fails 1 of the 146 <a href="https://github.com/summerwind/h2spec?ref=devlog.cyconet.org">h2spec</a> tests (h2spec is a conformance testing tool for <a href="https://en.wikipedia.org/wiki/HTTP/2?ref=devlog.cyconet.org">HTTP/2</a> implementations), whereas Apache did not fail a single test.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Comparing (OVH) I/O performance]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>For some time now I&apos;ve been using cloud resources provided by <a href="http://www.ovh.com/?ref=devlog.cyconet.org">OVH</a> for some projects I&apos;m involved in.</p>
<p>Recently we decided to give <a href="https://zammad.org/?ref=devlog.cyconet.org">Zammad</a>, an open-source support/ticketing solution, a try. We chose the <a href="https://docs.zammad.org/en/latest/install-docker-compose.html?ref=devlog.cyconet.org">docker compose way</a> for deployment, which also includes an <a href="https://en.wikipedia.org/wiki/Elasticsearch?ref=devlog.cyconet.org">elasticsearch</a> instance. The important part</p>]]></description><link>https://devlog.cyconet.org/2018/03/05/comparing-ovh-i-o-performance/</link><guid isPermaLink="false">5cd80c7992418c0001247420</guid><category><![CDATA[Docker]]></category><category><![CDATA[Container]]></category><category><![CDATA[OpenSource]]></category><category><![CDATA[Planet]]></category><category><![CDATA[Debian]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Mon, 05 Mar 2018 08:25:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>For some time now I&apos;ve been using cloud resources provided by <a href="http://www.ovh.com/?ref=devlog.cyconet.org">OVH</a> for some projects I&apos;m involved in.</p>
<p>Recently we decided to give <a href="https://zammad.org/?ref=devlog.cyconet.org">Zammad</a>, an open-source support/ticketing solution, a try. We chose the <a href="https://docs.zammad.org/en/latest/install-docker-compose.html?ref=devlog.cyconet.org">docker compose way</a> for deployment, which also includes an <a href="https://en.wikipedia.org/wiki/Elasticsearch?ref=devlog.cyconet.org">elasticsearch</a> instance. The important part of this information is that storage has a <a href="https://www.elastic.co/blog/performance-considerations-elasticsearch-indexing?ref=devlog.cyconet.org">huge impact</a> on elasticsearch indexing.</p>
<p>The documentation suggests at least 4 GB RAM for running the Zammad compose stack. So I chose a <a href="https://www.ovh.de/virtual_server/vps-cloud.xml?ref=devlog.cyconet.org">VPS Cloud 2</a>, which, off the top of my head, has 4 GB RAM and 50 GB of <a href="https://en.wikipedia.org/wiki/Ceph_(software)?ref=devlog.cyconet.org">Ceph</a> storage.</p>
<p>After I deployed my <a href="https://devlog.cyconet.org/2018/02/28/deploying-a-docker-container-system/">simple docker setup</a> and, on top of it, the Zammad compose setup, everything was mostly running smoothly. Unfortunately, when the whole Zammad compose stack starts, elasticsearch regenerates the whole index, which can take a long(er) time depending on the size of the index and the performance of the system. This has to finish before the UI becomes available and is ready for use.</p>
<p>To make a long story short, I had the same setup on a test ground where it was several times faster than on the production setup. So I decided it was time to have a look into the performance of my OVH resources. Over time I had gained access to a couple of them, even some bare-metal systems.</p>
<p>For my test I just grabbed the following sample:</p>
<ul>
<li>VPS 2016 Cloud 2</li>
<li>VPS-SSD-3</li>
<li>VPS 2016 Cloud RAM 1</li>
<li>VPS 2014 Cloud 3</li>
<li>HG-7</li>
<li>SP-32 (that&apos;s a bare metal with software raid)</li>
</ul>
<p>While looking into what would be the best way to benchmark I/O, it came to my attention that comparing I/O for cloud resources is not so <a href="https://dzone.com/articles/iops-benchmarking-disk-io-aws-vs-digitalocean?ref=devlog.cyconet.org">uncommon</a>. I also learned that <code>dd</code> might not be the first choice, but <a href="https://github.com/axboe/fio?ref=devlog.cyconet.org"><code>fio</code></a> seems a good catch for doing lazy I/O benchmarks, and <a href="https://github.com/koct9i/ioping?ref=devlog.cyconet.org"><code>ioping</code></a> for testing I/O latency.</p>
<p>As the systems are all running Debian, at least 8.x, I used the following command(s) for my tests:</p>
<pre><code class="language-bash">aptitude -y install -o quiet=2 ioping fio &gt; /dev/null &amp;&amp; \
 time fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --output=/tmp/tempfile --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75; \
 rm -f test.*; cat /tmp/tempfile; \
 ioping -c 10 /root | tail -4
</code></pre>
<p>The output on my VPS 2016 Cloud 2 system:</p>
<pre><code class="language-bash">Jobs: 1 (f=1): [m(1)] [100.0% done] [1529KB/580KB/0KB /s] [382/145/0 iops] [eta 00m:00s]
real	14m20.420s
user	0m14.620s
sys	1m4.424s
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.16
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 4096MB)

test: (groupid=0, jobs=1): err= 0: pid=19377: Fri Mar  2 18:16:12 2018
  read : io=3070.4MB, bw=3888.9KB/s, iops=972, runt=808475msec
  write: io=1025.8MB, bw=1299.2KB/s, iops=324, runt=808475msec
  cpu          : usr=1.43%, sys=6.34%, ctx=835077, majf=0, minf=9
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, &gt;=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &gt;=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, &gt;=64=0.0%
     issued    : total=r=785996/w=262580/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=3070.4MB, aggrb=3888KB/s, minb=3888KB/s, maxb=3888KB/s, mint=808475msec, maxt=808475msec
  WRITE: io=1025.8MB, aggrb=1299KB/s, minb=1299KB/s, maxb=1299KB/s, mint=808475msec, maxt=808475msec

Disk stats (read/write):
  sda: ios=787390/263575, merge=612/721, ticks=49277288/2701580, in_queue=51980604, util=100.00%
--- /root (ext4 /dev/sda1) ioping statistics ---
9 requests completed in 4.56 ms, 36 KiB read, 1.97 k iops, 7.71 MiB/s
generated 10 requests in 9.00 s, 40 KiB, 1 iops, 4.44 KiB/s
min/avg/max/mdev = 423.4 us / 506.8 us / 577.3 us / 43.7 us
</code></pre>
<p>The interesting parts:</p>
<pre><code>  read : io=3070.4MB, bw=3888.9KB/s, iops=972, runt=808475msec
  write: io=1025.8MB, bw=1299.2KB/s, iops=324, runt=808475msec
</code></pre>
<pre><code>min/avg/max/mdev = 423.4 us / 506.8 us / 577.3 us / 43.7 us
</code></pre>
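<p>When comparing several machines, pulling just these numbers out of each fio report saves squinting at the full output. A small sketch of my own (assumes GNU grep for <code>-o</code>); the sample lines are copied from the run above:</p>

```shell
# Extract the IOPS figures from a fio 2.x style report for quick comparison.
# The sample lines are copied from the VPS 2016 Cloud 2 run above.
cat > /tmp/fio-sample.txt <<'EOF'
  read : io=3070.4MB, bw=3888.9KB/s, iops=972, runt=808475msec
  write: io=1025.8MB, bw=1299.2KB/s, iops=324, runt=808475msec
EOF
grep -o 'iops=[0-9]*' /tmp/fio-sample.txt
# -> iops=972
#    iops=324
```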
<p>After comparing the results with the rest of the systems, my samples of the VPS 2016 Cloud instances do not convince me that I would choose such a system for use cases where I/O might be a critical part.</p>
<p><img src="https://devlog.cyconet.org/content/images/2018/03/runtime.png" alt="runtime" loading="lazy"></p>
<p><img src="https://devlog.cyconet.org/content/images/2018/03/bandwidth_read.png" alt="bandwidth_read" loading="lazy"></p>
<p><img src="https://devlog.cyconet.org/content/images/2018/03/bandwidth_write.png" alt="bandwidth_write" loading="lazy"></p>
<p><img src="https://devlog.cyconet.org/content/images/2018/03/iops_write.png" alt="iops_write" loading="lazy"></p>
<p><img src="https://devlog.cyconet.org/content/images/2018/03/iops_read.png" alt="iops_read" loading="lazy"></p>
<p><img src="https://devlog.cyconet.org/content/images/2018/03/latenvy_avg.png" alt="latenvy_avg" loading="lazy"></p>
<p><img src="https://devlog.cyconet.org/content/images/2018/03/latency_max.png" alt="latency_max" loading="lazy"></p>
<p><img src="https://devlog.cyconet.org/content/images/2018/03/latency_min.png" alt="latency_min" loading="lazy"></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Deploying a (simple) docker container system]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>When a small platform for shipping containers is needed, not speaking about <a href="https://en.wikipedia.org/wiki/Kubernetes?ref=devlog.cyconet.org">Kubernetes</a> or something similar, there are a couple of common things you might want to deploy first.</p>
<p>Usual things that I have to roll out every time I deploy such a platform:</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Docker_(software)?ref=devlog.cyconet.org">docker</a></li>
<li><a href="https://docs.docker.com/compose/overview/?ref=devlog.cyconet.org">docker-compose</a></li>
<li><a href="https://github.com/v2tec/watchtower?ref=devlog.cyconet.org">Watchtower</a> - automatically updating and restarting containers</li></ul>]]></description><link>https://devlog.cyconet.org/2018/02/28/deploying-a-docker-container-system/</link><guid isPermaLink="false">5cd80c7992418c000124741f</guid><category><![CDATA[Docker]]></category><category><![CDATA[HighAvailability]]></category><category><![CDATA[Container]]></category><category><![CDATA[selfnote]]></category><category><![CDATA[OpenSource]]></category><category><![CDATA[Planet]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Wed, 28 Feb 2018 08:54:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>When a small platform for shipping containers is needed, not speaking about <a href="https://en.wikipedia.org/wiki/Kubernetes?ref=devlog.cyconet.org">Kubernetes</a> or something, you have a couple of common things you might want to deploy first.</p>
<p>Usual things that I have to roll out every time I deploy such a platform:</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Docker_(software)?ref=devlog.cyconet.org">docker</a></li>
<li><a href="https://docs.docker.com/compose/overview/?ref=devlog.cyconet.org">docker-compose</a></li>
<li><a href="https://github.com/v2tec/watchtower?ref=devlog.cyconet.org">Watchtower</a> - automatically updating and restarting containers</li>
<li><a href="https://traefik.io/?ref=devlog.cyconet.org">Tr&#xE6;fik</a> - modern HTTP reverse proxy and load balancer</li>
</ul>
<h2 id="bootstrapingdockeranddockercompose">Bootstrapping docker and docker-compose</h2>
<p>Most services are built from multiple containers. A useful tool for this is docker-compose, where you can describe your whole &apos;application&apos;. So we need to deploy it alongside docker itself.</p>
<script src="https://gist.github.com/waja/01ba2641f93f461044f9.js?file=docker_deploy.sh"></script>
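<p>The embedded gist (docker_deploy.sh) is my actual script; as a rough idea, such a bootstrap usually boils down to something like the following sketch. A Debian-style host and the pinned docker-compose release version are assumptions; adjust both for your environment.</p>

```shell
# Sketch of a docker + docker-compose bootstrap. The script is written to a
# file and only syntax-checked here, since running it needs root and network.
cat > bootstrap_docker.sh <<'EOF'
#!/bin/sh
set -e
# install docker via the upstream convenience script
curl -fsSL https://get.docker.com | sh
# fetch a pinned docker-compose release binary (adjust the version)
curl -fsSL "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
EOF
sh -n bootstrap_docker.sh
```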
<h2 id="deployingwatchtower">Deploying Watchtower</h2>
<p>An essential operational part is to keep your container images up to date.</p>
<blockquote>
<p>Watchtower is an application that will monitor your running Docker containers and watch for changes to the images that those containers were originally started from. If watchtower detects that an image has changed, it will automatically restart the container using the new image.</p>
</blockquote>
<script src="https://gist.github.com/waja/68853e25aa0f3b3ce46020f60ca2599c.js?file=deploy_watchtower.sh"></script>
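<p>The gist (deploy_watchtower.sh) holds my actual deployment; a comparable setup expressed as a compose file might look like the following sketch. The image tag and the polling interval are assumptions.</p>

```shell
# Sketch: Watchtower watching the local docker daemon via its socket.
mkdir -p watchtower
cat > watchtower/docker-compose.yml <<'EOF'
version: '2'
services:
  watchtower:
    image: v2tec/watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    # poll every 300s and remove superseded images after updating
    command: --cleanup --interval 300
EOF
# docker-compose -f watchtower/docker-compose.yml up -d
```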
<h2 id="deployinghttpsreverseproxytrfik">Deploying http(s) reverse proxy Tr&#xE6;fik</h2>
<p>If you want to provide multiple (web)services on port 80 and 443, you have to think about how this should be solved. Usually you would use an http(s) <a href="https://en.wikipedia.org/wiki/Reverse_proxy?ref=devlog.cyconet.org">reverse proxy</a>; there are many software implementations available.<br>
The challenging part in such an environment is that services may appear and disappear frequently. (Re)configuration of the proxy service is the gap that needs to be closed.</p>
<blockquote>
<p>Tr&#xE6;fik (pronounced like traffic) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease [...] to manage its configuration automatically and dynamically.</p>
</blockquote>
<p>Tr&#xE6;fik has many interesting <a href="https://docs.traefik.io/?ref=devlog.cyconet.org#features">features</a> for example &apos;Let&apos;s Encrypt support (Automatic HTTPS with renewal)&apos;.</p>
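<p>A minimal Tr&#xE6;fik (v1) deployment for this setup might look like the following sketch as a compose file. The image tag is an assumption, and the ACME/Let&apos;s Encrypt options are left out for brevity.</p>

```shell
# Sketch: Traefik v1 as the single entry point on ports 80/443, discovering
# backends via docker labels on the local daemon.
mkdir -p traefik
cat > traefik/docker-compose.yml <<'EOF'
version: '2'
services:
  traefik:
    image: traefik:1.7
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    # only route to containers that opt in via labels
    command: --docker --docker.exposedbydefault=false
EOF
```

A backend container then opts in with a label such as traefik.frontend.rule=Host:example.org (v1 label syntax; the hostname is made up).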
<script src="https://gist.github.com/waja/37202007b10837a7fc2e6eacacd9b335.js?file=deploy_traefik.sh"></script><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Migrating Gitlab non-packaged PostgreSQL into omnibus-packaged]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>With the release of <a href="https://about.gitlab.com/2016/12/22/gitlab-8-15-released/?ref=devlog.cyconet.org">Gitlab 8.15</a> it was announced that <a href="https://en.wikipedia.org/wiki/PostgreSQL?ref=devlog.cyconet.org">PostgreSQL</a> needs to be <a href="https://about.gitlab.com/2016/12/22/gitlab-8-15-released/?ref=devlog.cyconet.org#postgresql-version-upgrade">upgraded</a>. As I migrated from a <a href="https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/install/installation.md?ref=devlog.cyconet.org">source installation</a> I used to have an <a href="https://github.com/gitlabhq/omnibus-gitlab/blob/ae4bbbb563e745238731a6860b684def1298c78b/doc/settings/database.md?ref=devlog.cyconet.org#using-a-non-packaged-postgresql-database-management-server">external</a> PostgreSQL database instead of using the one shipped with the <a href="https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md?ref=devlog.cyconet.org">omnibus package</a>.<br>
So I decided to do the data migration into</p>]]></description><link>https://devlog.cyconet.org/2017/01/18/migrating-gitlab-non-packaged-postgresql-into-omnibus-packaged/</link><guid isPermaLink="false">5cd80c7992418c000124741d</guid><category><![CDATA[Planet]]></category><category><![CDATA[OpenSource]]></category><category><![CDATA[git]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Wed, 18 Jan 2017 21:08:02 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>With the release of <a href="https://about.gitlab.com/2016/12/22/gitlab-8-15-released/?ref=devlog.cyconet.org">Gitlab 8.15</a> it was announced that <a href="https://en.wikipedia.org/wiki/PostgreSQL?ref=devlog.cyconet.org">PostgreSQL</a> needs to be <a href="https://about.gitlab.com/2016/12/22/gitlab-8-15-released/?ref=devlog.cyconet.org#postgresql-version-upgrade">upgraded</a>. As I migrated from a <a href="https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/install/installation.md?ref=devlog.cyconet.org">source installation</a> I used to have an <a href="https://github.com/gitlabhq/omnibus-gitlab/blob/ae4bbbb563e745238731a6860b684def1298c78b/doc/settings/database.md?ref=devlog.cyconet.org#using-a-non-packaged-postgresql-database-management-server">external</a> PostgreSQL database instead of using the one shipped with the <a href="https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md?ref=devlog.cyconet.org">omnibus package</a>.<br>
So I decided to do the data migration into the omnibus PostgreSQL database now, which I had skipped before.</p>
<p>Let&apos;s have a look into the databases:</p>
<pre><code>$ sudo -u postgres psql -d template1
psql (9.2.18)
Type &quot;help&quot; for help.

gitlabhq_production=# \l
                                             List of databases
         Name          |       Owner       | Encoding | Collate |  Ctype  |        Access privileges
-----------------------+-------------------+----------+---------+---------+---------------------------------
 gitlabhq_production   | git               | UTF8     | C.UTF-8 | C.UTF-8 |
 gitlab_mattermost     | git               | UTF8     | C.UTF-8 | C.UTF-8 |
gitlabhq_production=# \q
</code></pre>
<p>Dump the databases and stop PostgreSQL. You may need to adjust database names and users for your setup.</p>
<pre><code>$ su postgres -c &quot;pg_dump gitlabhq_production -f /tmp/gitlabhq_production.sql&quot; &amp;&amp; \
su postgres -c &quot;pg_dump gitlab_mattermost -f /tmp/gitlab_mattermost.sql&quot; &amp;&amp; \
/etc/init.d/postgresql stop
</code></pre>
<p>Activate PostgreSQL shipped with Gitlab Omnibus</p>
<pre><code>$ sed -i &quot;s/^postgresql\[&apos;enable&apos;\] = false/#postgresql\[&apos;enable&apos;\] = false/g&quot; /etc/gitlab/gitlab.rb &amp;&amp; \
sed -i &quot;s/^#mattermost\[&apos;enable&apos;\] = true/mattermost\[&apos;enable&apos;\] = true/&quot; /etc/gitlab/gitlab.rb &amp;&amp; \
gitlab-ctl reconfigure
</code></pre>
<p>Testing if the connection to the databases works</p>
<pre><code>$ su - git -c &quot;psql --username=gitlab  --dbname=gitlabhq_production --host=/var/opt/gitlab/postgresql/&quot;
psql (9.2.18)
Type &quot;help&quot; for help.

gitlabhq_production=# \q
$ su - git -c &quot;psql --username=gitlab  --dbname=mattermost_production --host=/var/opt/gitlab/postgresql/&quot;
psql (9.2.18)
Type &quot;help&quot; for help.

mattermost_production=# \q
</code></pre>
<p>Ensure pg_trgm extension is enabled</p>
<pre><code>$ sudo gitlab-psql -d gitlabhq_production -c &apos;CREATE EXTENSION IF NOT EXISTS &quot;pg_trgm&quot;;&apos;
$ sudo gitlab-psql -d mattermost_production -c &apos;CREATE EXTENSION IF NOT EXISTS &quot;pg_trgm&quot;;&apos;
</code></pre>
<p>Adjust ownership in the database dumps. Again, please verify whether users and database names need to be adjusted for your setup.</p>
<pre><code>$ sed -i &quot;s/OWNER TO git;/OWNER TO gitlab;/&quot; /tmp/gitlabhq_production.sql &amp;&amp; \
sed -i &quot;s/postgres;$/gitlab-psql;/&quot; /tmp/gitlabhq_production.sql
$ sed -i &quot;s/OWNER TO git;/OWNER TO gitlab_mattermost;/&quot; /tmp/gitlab_mattermost.sql &amp;&amp; \
sed -i &quot;s/postgres;$/gitlab-psql;/&quot; /tmp/gitlab_mattermost.sql
</code></pre>
<p>(Re)import the data</p>
<pre><code>$ sudo gitlab-psql -d gitlabhq_production -f /tmp/gitlabhq_production.sql
$ sudo gitlab-psql -d gitlabhq_production -c &apos;REVOKE ALL ON SCHEMA public FROM &quot;gitlab-psql&quot;;&apos; &amp;&amp; \
sudo gitlab-psql -d gitlabhq_production -c &apos;GRANT ALL ON SCHEMA public TO &quot;gitlab-psql&quot;;&apos;
$ sudo gitlab-psql -d mattermost_production -f /tmp/gitlab_mattermost.sql
$ sudo gitlab-psql -d mattermost_production -c &apos;REVOKE ALL ON SCHEMA public FROM &quot;gitlab-psql&quot;;&apos; &amp;&amp; \
sudo gitlab-psql -d mattermost_production -c &apos;GRANT ALL ON SCHEMA public TO &quot;gitlab-psql&quot;;&apos;
</code></pre>
<p>Make use of the shipped PostgreSQL</p>
<pre><code>$ sed -i &quot;s/^gitlab_rails\[&apos;db_/#gitlab_rails\[&apos;db_/&quot; /etc/gitlab/gitlab.rb &amp;&amp; \
sed -i &quot;s/^mattermost\[&apos;sql_/#mattermost\[&apos;sql_/&quot; /etc/gitlab/gitlab.rb &amp;&amp; \
gitlab-ctl reconfigure
</code></pre>
<p>Now you should be able to connect to all the Gitlab services again.</p>
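<p>One way to verify the migration at this point is via the tooling that ships with the omnibus package; the following is only a sketch, written to a file and syntax-checked, since it needs a running Gitlab host.</p>

```shell
# Sanity checks after switching to the bundled PostgreSQL.
cat > verify_gitlab.sh <<'EOF'
#!/bin/sh
set -e
gitlab-ctl status
gitlab-rake gitlab:check SANITIZE=true
EOF
sh -n verify_gitlab.sh
```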
<p>Optionally remove the external database</p>
<pre><code>apt-get remove postgresql postgresql-client postgresql-9.4 postgresql-client-9.4 postgresql-client-common postgresql-common
</code></pre>
<p>Maybe you also want to purge the old database content</p>
<pre><code>apt-get purge postgresql-9.4
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Container Orchestration Thoughts]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>For some time now, everybody (read: developers) wants to run their new <a href="https://en.wikipedia.org/wiki/Microservices?ref=devlog.cyconet.org">microservice</a> stacks in <a href="https://en.wikipedia.org/wiki/Operating-system-level_virtualization?ref=devlog.cyconet.org">containers</a>. I can understand that building and testing an application is important for developers.<br>
One of the benefits of containers is that developers (in theory) can put new versions of their applications into production on their</p>]]></description><link>https://devlog.cyconet.org/2016/11/03/container-orchestration-thoughts/</link><guid isPermaLink="false">5cd80c7992418c000124741c</guid><category><![CDATA[Planet]]></category><category><![CDATA[OpenSource]]></category><category><![CDATA[Docker]]></category><category><![CDATA[HighAvailability]]></category><category><![CDATA[Container]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Thu, 03 Nov 2016 12:48:21 GMT</pubDate><media:content url="http://cdn.rancher.com/wp-content/uploads/2016/10/20100519/Kubernetes_Mesos_Swarm-300x87.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="http://cdn.rancher.com/wp-content/uploads/2016/10/20100519/Kubernetes_Mesos_Swarm-300x87.png" alt="Container Orchestration Thoughts"><p>For some time now, everybody (read: developers) wants to run their new <a href="https://en.wikipedia.org/wiki/Microservices?ref=devlog.cyconet.org">microservice</a> stacks in <a href="https://en.wikipedia.org/wiki/Operating-system-level_virtualization?ref=devlog.cyconet.org">containers</a>. I can understand that building and testing an application is important for developers.<br>
One of the benefits of containers is that developers (in theory) can put new versions of their applications into production on their own. This is the point where operations is affected, and operations needs to evaluate whether that might evolve into a better workflow.</p>
<p>For yolo^Wdev<strong>Ops</strong> people there are some challenges that need to be solved, or at least mitigated, when things need to be done at large(r) scale.</p>
<ul>
<li>Which Orchestration Engine should be considered?</li>
<li>How to provide persistent (shared) storage?</li>
<li>How to update the base image(s) the apps are built upon and how to test/deploy them?</li>
</ul>
<h2 id="orchestrationengine">Orchestration Engine</h2>
<p>Running <a href="https://en.wikipedia.org/wiki/Docker_(software)?ref=devlog.cyconet.org">Docker</a>, which is currently the most popular container solution, on a single host with the <a href="https://docs.docker.com/engine/reference/commandline/cli/?ref=devlog.cyconet.org"><code>docker</code> command line</a> client is something you can do, but it leaves the gap between <a href="https://en.wikipedia.org/wiki/Software_development?ref=devlog.cyconet.org">dev</a> and <a href="https://en.wikipedia.org/wiki/Information_technology_operations?ref=devlog.cyconet.org">ops</a> open.</p>
<h3 id="uifordocker">UI For Docker</h3>
<p>For some time there has been <a href="https://github.com/kevana/ui-for-docker?ref=devlog.cyconet.org">UI For Docker</a> available for visualizing and managing containers on a single docker node. It&apos;s pretty awesome, and the best feature so far is the Container Network view, which also shows the linked containers.</p>
<p><img src="https://devlog.cyconet.org/content/images/2016/11/uifordocker_containers_network.png" alt="Container Orchestration Thoughts" loading="lazy"></p>
<h3 id="portainer">Portainer</h3>
<p><a href="https://github.com/portainer/portainer?ref=devlog.cyconet.org">Portainer</a> is pretty new and it can be <a href="http://portainer.io/install.html?ref=devlog.cyconet.org">deployed</a> as easily as UI For Docker. But the (first) great advantage: it can <a href="http://demo.portainer.io/?ref=devlog.cyconet.org#/swarm/">handle</a> <a href="https://docs.docker.com/swarm/?ref=devlog.cyconet.org">Docker Swarm</a>. Besides that it has many other great <a href="http://demo.portainer.io/?ref=devlog.cyconet.org#/swarm/">features</a>.</p>
<p><img src="https://devlog.cyconet.org/content/images/2016/11/portainer_container_list.png" alt="Container Orchestration Thoughts" loading="lazy"></p>
<h3 id="rancher">Rancher</h3>
<p><a href="http://rancher.com/rancher/?ref=devlog.cyconet.org">Rancher</a> describes itself as a &apos;container management platform&apos; that &apos;supports and manages all of your <a href="http://rancher.com/kubernetes?ref=devlog.cyconet.org">Kubernetes</a>, <a href="http://rancher.com/mesos?ref=devlog.cyconet.org">Mesos</a>, and <a href="http://rancher.com/swarm/?ref=devlog.cyconet.org">Swarm</a> clusters&apos;. This is great because these are currently all of the relevant Docker cluster orchestration engines on the market.</p>
<p><img src="https://devlog.cyconet.org/content/images/2016/11/rancher_infra_container.png" alt="Container Orchestration Thoughts" loading="lazy"></p>
<p>For the use cases we are facing, <a href="https://en.wikipedia.org/wiki/Kubernetes?ref=devlog.cyconet.org">Kubernetes</a> and <a href="https://en.wikipedia.org/wiki/Apache_Mesos?ref=devlog.cyconet.org">Mesos</a> both seem like bloated beasts. <a href="https://twitter.com/usman_ismail?ref=devlog.cyconet.org">Usman Ismail</a> has written a really good <a href="http://rancher.com/comparing-rancher-orchestration-engine-options/?ref=devlog.cyconet.org">comparison</a> of orchestration engine options that goes into detail.</p>
<p><img src="https://devlog.cyconet.org/content/images/2016/11/rancher_infra_hosts.png" alt="Container Orchestration Thoughts" loading="lazy"></p>
<h3 id="dockerswarm">Docker Swarm</h3>
<p>As there is currently no clear de facto standard/winner of the (container) orchestration wars, I would avoid a vendor lock-in situation (yet). Docker Swarm seems to be evolving and is gaining nice features other competitors don&apos;t provide.<br>
Due to its native integration into the docker framework and the great community, I believe Docker Swarm will be the Docker orchestration of choice in the long run. This should be supported by Rancher 1.2, which is not released yet.<br>
From this point of view it looks very reasonable that Docker Swarm in combination with Rancher (1.2) might be a good strategy to maintain your container farms in the future.</p>
<p>If you are thinking about putting Docker Swarm into production in its current state, I recommend reading <a href="https://medium.com/@panj/docker-swarm-mode-what-to-know-before-going-live-on-production-b6f60ffc5cd3?source=user_profile---------1-">Docker swarm mode: What to know before going live on production</a> by <a href="https://twitter.com/PanJ?ref=devlog.cyconet.org">Panjamapong Sermsawatsri</a>.</p>
<h2 id="persistentstorage">Persistent Storage</h2>
<p>While it is a best practice to use <a href="https://docs.docker.com/engine/tutorials/dockervolumes/?ref=devlog.cyconet.org#creating-and-mounting-a-data-volume-container">data volume container</a> these days, providing persistent storage across multiple hosts for shared volumes seems to be <a href="http://www.tricksofthetrades.net/2016/03/14/docker-data-volumes/?ref=devlog.cyconet.org#6-%E2%80%93-Volume-and-Data-Container-Issues">tricky</a>.</p>
<p>In theory you can <a href="https://docs.docker.com/engine/tutorials/dockervolumes/?ref=devlog.cyconet.org#/mount-a-shared-storage-volume-as-a-data-volume">mount a shared-storage volume as a data volume</a> and there are several <a href="https://docs.docker.com/engine/extend/legacy_plugins/?ref=devlog.cyconet.org#/volume-plugins">volume plugins</a> which supports shared storage.</p>
<p>For example you can use the <a href="https://github.com/rancher/convoy?ref=devlog.cyconet.org">convoy</a> plugin which gives you:</p>
<ul>
<li>thin provisioned volumes</li>
<li>snapshots of volumes</li>
<li>backup of snapshots</li>
<li>restore volumes</li>
</ul>
<p>As backend you can use:</p>
<ul>
<li>Device Mapper</li>
<li>Virtual File System(VFS)/Network File System(NFS)</li>
<li>Amazon Elastic Block Store(EBS)</li>
</ul>
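<p>From the docker CLI, using such a volume plugin looks roughly like the following sketch. A running convoy daemon configured with one of the backends above is assumed, and the volume, container, and image names are made up.</p>

```shell
# Sketch: create a convoy-backed volume and mount it into a container.
# Written to a file and only syntax-checked here, as it needs docker + convoy.
cat > convoy_example.sh <<'EOF'
#!/bin/sh
set -e
# create the shared volume via the convoy driver
docker volume create --driver convoy --name shared_data
# mount it into a container; any other host running convoy can do the same
docker run -d --name web --volume-driver=convoy \
  -v shared_data:/usr/share/nginx/html nginx
EOF
sh -n convoy_example.sh
```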
<p>The good thing is that <a href="http://rancher.com/introducing-convoy-a-docker-volume-driver-for-backup-and-recovery-of-persistent-data/?ref=devlog.cyconet.org">convoy is integrated into Rancher</a>. For more information I suggest reading <a href="http://rancher.com/setting-shared-volumes-convoy-nfs/?ref=devlog.cyconet.org">Setting Up Shared Volumes with Convoy-NFS</a>, which also mentions some limitations. If you want to test the Persistent Storage Service, Rancher provides some <a href="http://docs.rancher.com/rancher/v1.2/en/rancher-services/storage-service/?ref=devlog.cyconet.org">documentation</a>.</p>
<p>I have not evaluated shared-storage volumes yet, but I don&apos;t see a solution I would love to use in production (at least on-premise) without strong downsides. Maybe things will progress and there will be a great solution for these caveats in the future.</p>
<h2 id="keepingbaseimagesuptodate">Keeping base images up-to-date</h2>
<p>For some time now there have been many projects that try to detect security problems in your container images in <a href="https://blog.docker.com/2016/05/docker-security-scanning/?ref=devlog.cyconet.org">several</a> <a href="https://github.com/coreos/clair?ref=devlog.cyconet.org">ways</a>.<br>
Besides general <a href="https://www.sumologic.com/blog-devops/securing-docker-containers/?ref=devlog.cyconet.org">security considerations</a>, you need to deal somehow with issues in the base images that you build your applications on.</p>
<p>Of course, even if you know you have a security issue in your application image, you need to fix it, and how to do so depends on the way you based your application image.</p>
<h3 id="waystobaseyourapplicationimage">Ways to base your application image</h3>
<ul>
<li>You can build your application image entirely from scratch, which leaves all the work to your development team; I wouldn&apos;t recommend it that way.</li>
<li>You also can create one (or more) intermediate image(s) that will be used by your development team.</li>
<li>The development team might ground their work on images in <a href="https://mesosphere.com/blog/2015/10/14/docker-registries-the-good-the-bad-the-ugly/?ref=devlog.cyconet.org">publicly available</a> or private (for example the one bundled with your <a href="https://about.gitlab.com/2016/05/23/gitlab-container-registry/?ref=devlog.cyconet.org">gitlab</a> CI/CD solution) registries.</li>
</ul>
<h4 id="whatsthestrugglewiththebaseimage">What&apos;s the struggle with the base image?</h4>
<p>If you are using images that are not (well) maintained by other people, you have to wait for them to fix your base image. Using external images might also lead to trust problems (can you trust those people in general?).<br>
In an ideal world, your developers always have fresh base images with security issues fixed. This can probably be done by rebuilding every intermediate image periodically or whenever the base image changes.</p>
<h3 id="paradigmchange">Paradigm change</h3>
<p>Anyway, if you have a new application image available (with no known security issues), you need to deploy it to production. This is summarized by Jason McKay in his article <a href="http://www.logicworks.net/blog/2016/04/docker-security-monitor-patch-containers-aws/?ref=devlog.cyconet.org">Docker Security: How to Monitor and Patch Containers in the Cloud</a>:</p>
<blockquote>
<p>To implement a patch, update the base image and then rebuild the application image. This will require systems and development teams to work closely together.</p>
</blockquote>
<p>So patching security issues in the container world changes the workflow significantly. In the old world, operations teams mostly rolled out security fixes for the base systems independently of the development teams.<br>
Now that containers are hitting the production area, this might change things significantly.</p>
<h3 id="bringingupdatedimagestoproduction">Bringing updated images to production</h3>
<p>Imagine your development team doesn&apos;t work steadily on a project because the product owner considers it feature complete. The base image is provided (in some way) consistently without security issues. The application image is built on top of that automatically on every update of the base image.<br>
How do you push the security fixes to production in such a scenario?</p>
<p>From my point of view you have two choices:</p>
<ul>
<li>Require the development team to test the resulting application image and put it into production</li>
<li>Push the new application image without review by the development team into production</li>
</ul>
<p>The first scenario might lead to a significant delay until the fixes hit production, caused by the probably infrequent work of the development team.</p>
<p>The latter brings your security fixes to production earlier, at the notably higher risk of breaking your application. This risk can be reduced by the development team implementing extensive tests in the CI/CD pipelines. <a href="https://docs.docker.com/engine/swarm/swarm-tutorial/rolling-update/?ref=devlog.cyconet.org">Rolling updates</a> provided by <a href="https://docker.github.io/engine/swarm/swarm-tutorial/?ref=devlog.cyconet.org">Docker Swarm</a> might also reduce the risk of ending up with a broken application.</p>
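<p>With swarm mode, such a rolling update of a rebuilt image can be sketched as follows; the service name and registry path are made up for illustration.</p>

```shell
# Sketch: roll the new image through a swarm service, two tasks at a time.
# Written to a file and only syntax-checked here, as it needs a swarm manager.
cat > rolling_update.sh <<'EOF'
#!/bin/sh
set -e
docker service update \
  --image registry.example.org/app:latest \
  --update-parallelism 2 \
  --update-delay 10s \
  app
EOF
sh -n rolling_update.sh
```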
<p>When you are implementing an update process of your (application) images to production, you should consider <a href="https://github.com/CenturyLinkLabs/watchtower?ref=devlog.cyconet.org">Watchtower</a> that provides <a href="https://www.ctl.io/developers/blog/post/watchtower-automatic-updates-for-docker-containers/?ref=devlog.cyconet.org">Automatic Updates for Docker Containers</a>.</p>
<h3 id="conclusion">Conclusion</h3>
<p>Not being the product owner or the operations part of an application whose adoption is wide enough to compensate for the tradeoffs we are still facing, I tend not to move large-scale production projects into a container environment.<br>
This does not mean it is a bad idea for others, but I&apos;d like to sort out some of the caveats first.</p>
<p>I&apos;m still interested in putting smaller projects into production, not being scared to reimplement or move them to a new stack.<br>
For smaller projects with a small number of hosts, Portainer does not look bad, and neither does Rancher with the <a href="https://github.com/rancher/cattle?ref=devlog.cyconet.org">Cattle</a> orchestration engine, if you just want to manage a couple of nodes.</p>
<p>Things are going to be interesting if Rancher 1.2 supports Docker Swarm clusters out of the box. Let&apos;s see what the future will bring to the container world and how to make a great stack out of it.</p>
<h3 id="update">Update</h3>
<p>I suggest reading <a href="https://thehftguy.wordpress.com/2016/11/01/docker-in-production-an-history-of-failure/?ref=devlog.cyconet.org">Docker in Production: A History of Failure</a> and the answer <a href="https://patrobinson.github.io/2016/11/05/docker-in-production/?ref=devlog.cyconet.org">Docker in Production: A retort</a> to understand the actual challenges of running Docker in larger-scale production environments.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Oxidized - silly attempt at (Really Awesome New Cisco confIg Differ)]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>For ages I have wanted to replace this freaking backup solution for our network equipment based on some hacky shell scripts and <a href="https://en.wikipedia.org/wiki/Expect?ref=devlog.cyconet.org"><code>expect</code></a> uploading the configs to a <a href="https://en.wikipedia.org/wiki/Trivial_File_Transfer_Protocol?ref=devlog.cyconet.org">TFTP</a> server.</p>
<p>Years ago I stumbled upon <a href="https://en.wikipedia.org/wiki/RANCID_%28software%29?ref=devlog.cyconet.org">RANCID</a> (Really Awesome New Cisco confIg Differ) but had no time to implement it. Now I</p>]]></description><link>https://devlog.cyconet.org/2016/01/29/oxidized-silly-attempt-at-really-awesome-new-cisco-config-differ/</link><guid isPermaLink="false">5cd80c7992418c000124741b</guid><category><![CDATA[Planet]]></category><category><![CDATA[OpenSource]]></category><category><![CDATA[Debian]]></category><category><![CDATA[Networking]]></category><category><![CDATA[Packaging]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Fri, 29 Jan 2016 20:46:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>For ages I have wanted to replace this freaking backup solution for our network equipment based on some hacky shell scripts and <a href="https://en.wikipedia.org/wiki/Expect?ref=devlog.cyconet.org"><code>expect</code></a> uploading the configs to a <a href="https://en.wikipedia.org/wiki/Trivial_File_Transfer_Protocol?ref=devlog.cyconet.org">TFTP</a> server.</p>
<p>Years ago I stumbled upon <a href="https://en.wikipedia.org/wiki/RANCID_%28software%29?ref=devlog.cyconet.org">RANCID</a> (Really Awesome New Cisco confIg Differ) but had no time to implement it. Now I returned to my idea to get rid of all our old crap.<br>
I don&apos;t know where, I think it was at <a href="http://www.denog.de/meetings/denog2/agenda.php?ref=devlog.cyconet.org#agenda9">DENOG2</a>, I saw RANCID coupled with a <a href="https://en.wikipedia.org/wiki/Version_control?ref=devlog.cyconet.org">VCS</a>, where the <a href="https://en.wikipedia.org/wiki/Network_operations_center?ref=devlog.cyconet.org">NOC</a> was notified about configuration (and inventory) changes by mailing the configuration diff and the history was indeed in the VCS.<br>
The good old RANCID does not seem to support writing into a VCS out of the box. But to the rescue there is <em><a href="https://github.com/dotwaffle/rancid-git?ref=devlog.cyconet.org">rancid-git</a></em>, a fork that promises git extensions and support for colorized emails. So far so good.</p>
<p>While I was searching for a VCS capable RANCID, somewhere under a stone I found <a href="https://github.com/ytti/oxidized?ref=devlog.cyconet.org">Oxidized</a>, a &apos;silly attempt at rancid&apos;. Looking at it, it seems more sophisticated, so I thought this might be the right attempt. Unfortunately there is no Debian package available, but I found an <a href="https://bugs.debian.org/797000?ref=devlog.cyconet.org">ITP</a> created by <a href="https://qa.debian.org/developer.php?login=genannt%40debian.org&amp;ref=devlog.cyconet.org">Jonas</a>.</p>
<p>Anyway, for just looking into it, I thought the Docker path for a testbed might be a good idea, as no Debian package is available (yet).</p>
<p>For oxidized, only a <code>config</code> file is needed for <a href="https://github.com/ytti/oxidized?ref=devlog.cyconet.org#configuration">configuration</a>, and as nodes <a href="https://github.com/ytti/oxidized?ref=devlog.cyconet.org#source">source</a> a RANCID-compatible <code>router.db</code> file can be used (besides SQLite and http backends). A migration into a production environment seems pretty easy. So I gave it a go.</p>
<p>I assume Docker is <a href="https://devlog.cyconet.org/2016/01/14/running-ghost-blogging-platform-via-docker/#installingdocker">installed</a> already. There seems to be a Docker image on <a href="https://hub.docker.com/r/oxidized/oxidized/?ref=devlog.cyconet.org">Docker Hub</a> that looks official, but it currently seems unmaintained. An <a href="https://github.com/ytti/oxidized/issues/297?ref=devlog.cyconet.org">issue</a> is open for automated building of the image.</p>
<h2 id="creatingoxidizedcontainerimage">Creating Oxidized container image</h2>
<p>The official documentation <a href="https://github.com/ytti/oxidized?ref=devlog.cyconet.org#running-with-docker">describes</a> the procedure. I used a slightly different approach.</p>
<pre><code>docking-station:~# mkdir -p /srv/docker/oxidized/
docking-station:~# git clone https://github.com/ytti/oxidized \
 /srv/docker/oxidized/oxidized.git
docking-station:~# docker build -q -t oxidized/oxidized:latest \
 /srv/docker/oxidized/oxidized.git
</code></pre>
<p>I thought it might be a good idea to also tag the image with the actual version of the gem.</p>
<pre><code>docking-station:~# docker tag oxidized/oxidized:latest \
 oxidized/oxidized:0.11.0
docking-station:~# docker images | grep oxidized
oxidized/oxidized   latest    35a325792078  15 seconds ago  496.1 MB
oxidized/oxidized   0.11.0    35a325792078  15 seconds ago  496.1 MB
</code></pre>
<p>Create initial default configuration like described in the documentation.</p>
<pre><code>docking-station:~# mkdir -p /srv/docker/oxidized/.config/
docking-station:~# docker run -e CONFIG_RELOAD_INTERVAL=600 \
 -v /srv/docker/oxidized/.config/:/root/.config/oxidized \
 -p 8888:8888/tcp -t oxidized/oxidized:latest oxidized
</code></pre>
<h2 id="adjustingconfiguration">Adjusting configuration</h2>
<p>After this I adjusted the default configuration to write a log, store the nodes&apos; config in a bare <a href="https://github.com/ytti/oxidized?ref=devlog.cyconet.org#output-git">git</a> repository, keep node <a href="https://github.com/ytti/oxidized?ref=devlog.cyconet.org#advanced-configuration">secrets in router.db</a>, and add some <a href="https://github.com/ytti/oxidized?ref=devlog.cyconet.org#hooks">hooks</a> for debugging.</p>
<script src="https://gist.github.com/waja/58f9c29a3cd25ca1670c.js"></script>
<h2 id="creatingnodeconfiguration">Creating node configuration</h2>
<pre><code>docking-station:~# echo &quot;7204vxr.lab.cyconet.org:cisco:admin:password:enable&quot; &gt;&gt; \
 /srv/docker/oxidized/.config/router.db
docking-station:~# echo &quot;ccr1036.lab.cyconet.org:routeros:admin:password&quot; &gt;&gt; \
 /srv/docker/oxidized/.config/router.db
</code></pre>
<h2 id="startingtheoxidizedbeast">Starting the oxidized beast</h2>
<pre><code>docking-station:~# docker run -e CONFIG_RELOAD_INTERVAL=600 \
 -v /srv/docker/oxidized/.config/:/root/.config/oxidized \
 -p 8888:8888/tcp -t oxidized/oxidized:latest oxidized
Puma 2.16.0 starting...
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://127.0.0.1:8888
</code></pre>
<p>If you want the container to be started automatically with the Docker daemon, you can run it with <code>--restart always</code> and Docker will take care of it. To make it run permanently, I would rather use a <a href="https://devlog.cyconet.org/2016/01/14/running-ghost-blogging-platform-via-docker/#makingghostcontainerimagerunforeverhttpdocsghostorgplinstallationdeploymakingghostrunforever">systemd unitfile</a>.</p>
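<p>Such a unit file could look like the following sketch, modeled on the one from my Ghost post; the service name is my assumption, while the paths and <code>docker run</code> arguments are the ones used above.</p>

```ini
# /etc/systemd/system/oxidized.service -- a minimal sketch
[Unit]
Description=Oxidized Service
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f oxidized
ExecStart=/usr/bin/docker run --name oxidized -e CONFIG_RELOAD_INTERVAL=600 -v /srv/docker/oxidized/.config/:/root/.config/oxidized -p 8888:8888/tcp oxidized/oxidized:latest oxidized
ExecStop=/usr/bin/docker stop oxidized

[Install]
WantedBy=multi-user.target
```

<p>Enable and start it with <code>systemctl enable oxidized &amp;&amp; systemctl daemon-reload &amp;&amp; systemctl start oxidized</code>.</p>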
<h2 id="reloadconfigurationimmediately">Reload configuration immediately</h2>
<p>If you don&apos;t want to wait for the configuration to be reloaded automatically, you can trigger a reload manually.</p>
<pre><code>docking-station:~# curl -s http://localhost:8888/reload?format=json \
 -O /dev/null
docking-station:~# tail -2 /srv/docker/oxidized/.config/log/oxidized.log
I, [2016-01-29T16:50:46.971904 #1]  INFO -- : Oxidized starting, running as pid 1
I, [2016-01-29T16:50:47.073307 #1]  INFO -- : Loaded 2 nodes
</code></pre>
<h2 id="writingnodesconfiguration">Writing nodes configuration</h2>
<pre><code>docking-station:/srv/docker/oxidized/.config/oxidized.git# git shortlog
Oxidizied (2):
      update 7204vxr.lab.cyconet.org
      update ccr1036.lab.cyconet.org
</code></pre>
<p>Writing the node configurations into a local bare git repository is neat but far from perfect. It would be cool to have all the stuff in a central VCS. So I&apos;m pushing the repository into one every 5 minutes with a cron job.</p>
<pre><code>docking-station:~# cat /etc/cron.d/doxidized 
# m h dom mon dow user  command                                                 
*/5 * * * *	root	/srv/docker/oxidized/bin/oxidized_cron_git_push.sh
docking-station:~# cat /srv/docker/oxidized/bin/oxidized_cron_git_push.sh
#!/bin/bash
DOCKER_OXIDIZED_BASE=&quot;/srv/docker/oxidized/&quot;
OXIDIZED_GIT_DIR=&quot;.config/oxidized.git&quot;

cd ${DOCKER_OXIDIZED_BASE}/${OXIDIZED_GIT_DIR}
git push origin master --quiet
</code></pre>
<p>Now that we have all the node configurations in a <a href="https://en.wikipedia.org/wiki/Comparison_of_source_code_hosting_facilities?ref=devlog.cyconet.org">source code hosting system</a>, we can browse the configurations, changes and history, and even set up notifications for changes. Mission accomplished!</p>
<p>Now I can test the coverage of our equipment. The last thing that would make me super happy would be an oxidized Debian package!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Using nginx as reverse proxy (for containered Ghost)]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In some cases it might be a good idea to use a <a href="http://en.wikipedia.org/wiki/Proxy_server?ref=devlog.cyconet.org">reverse proxy</a> in front of a web application. <a href="https://en.wikipedia.org/wiki/Nginx?ref=devlog.cyconet.org">Nginx</a> is a very common solution for this scenario these days. As I started with containers for some of my playgrounds, I decided to go this route.</p>
<h3 id="containersecurity">Container security</h3>
<p><img src="https://devlog.cyconet.org/content/images/2016/01/docker.png" alt loading="lazy"><br>
When</p>]]></description><link>https://devlog.cyconet.org/2016/01/21/using-nginx-as-reverse-proxy-for-ghost/</link><guid isPermaLink="false">5cd80c7992418c000124741a</guid><category><![CDATA[Planet]]></category><category><![CDATA[Debian]]></category><category><![CDATA[Docker]]></category><category><![CDATA[OpenSource]]></category><category><![CDATA[Nginx]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Thu, 21 Jan 2016 11:26:10 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In some cases it might be a good idea to use a <a href="http://en.wikipedia.org/wiki/Proxy_server?ref=devlog.cyconet.org">reverse proxy</a> in front of a web application. <a href="https://en.wikipedia.org/wiki/Nginx?ref=devlog.cyconet.org">Nginx</a> is a very common solution for this scenario these days. As I started with containers for some of my playgrounds, I decided to go this route.</p>
<h3 id="containersecurity">Container security</h3>
<p><img src="https://devlog.cyconet.org/content/images/2016/01/docker.png" alt loading="lazy"><br>
When looking around at how to run nginx in front of a Docker web application, you will find that in most cases nginx itself is also run as a Docker container.<br>
In my eyes Docker containers have a huge disadvantage: to get updated software (at least security updates) into production, you have to hope that your container image is well maintained, or you have to take care of it yourself. If this is not the case, you might <a href="http://www.banyanops.com/blog/analyzing-docker-hub/?ref=devlog.cyconet.org">worry</a>.<br>
As long as you don&apos;t run container solutions at large scale (and automatically rebuild and deploy your container images), I would recommend keeping the footprint of your containerized applications as small as possible from a security point of view.</p>
<p>So I decided to run my nginx on the same system where the Docker web applications live, but you could also place it on a system in front of your container hosts. Updates are then supplied via the usual distribution security updates.</p>
<h3 id="installingnginx">Installing nginx</h3>
<p><img src="https://devlog.cyconet.org/content/images/2016/01/nginx.png" alt loading="lazy"></p>
<pre><code># aptitude install nginx
</code></pre>
<p>I won&apos;t walk you through the usual steps of setting up nginx, but will focus on what is required to proxy into your containerized web application.</p>
<h3 id="configurationofnginx">Configuration of nginx</h3>
<p>As our <a href="http://hub.docker.com/_/ghost/?ref=devlog.cyconet.org">Docker container</a> for <a href="http://ghost.org/about?ref=devlog.cyconet.org">Ghost</a> <a href="http://docs.ghost.org/pl/installation/deploy/?ref=devlog.cyconet.org#making-ghost-run-forever">exposes</a> port 2368, we need to define our <a href="http://nginx.org/en/docs/http/ngx_http_upstream_module.html?ref=devlog.cyconet.org">upstream</a> server. I&apos;ve done that in <code>conf.d/docker-ghost.conf</code>.</p>
<pre><code>upstream docker-ghost {
  server localhost:2368;
}
</code></pre>
<p>The vHost configuration could go directly into <code>/etc/nginx/nginx.conf</code>, but I would recommend using a config file in <code>/etc/nginx/sites-available/</code> instead.</p>
<pre><code>server {
  listen 80;
  server_name log.cyconet.org;

  include /etc/nginx/snippets/ghost_vhost.conf;

  location / {
    proxy_pass                          http://docker-ghost;
    proxy_set_header  Host              $http_host;   # required for docker client&apos;s sake
    proxy_set_header  X-Real-IP         $remote_addr; # pass on real client&apos;s IP
    proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header  X-Forwarded-Proto $scheme;
    proxy_read_timeout                  900;
  }
}
</code></pre>
<p>Let&apos;s enable the configuration and reload nginx:</p>
<pre><code># ln -s ../sites-available/ghost.conf /etc/nginx/sites-enabled/ghost.conf &amp;&amp; \
 service nginx configtest &amp;&amp; service nginx reload
</code></pre>
<h3 id="goingfurther">Going further</h3>
<p>This is a very basic configuration. You might think about delivering static content (like images) <a href="https://gist.github.com/pnommensen/707b5519766ba45366dd?ref=devlog.cyconet.org#2-location-blocks-for-static-file-requests-like-css-js-and-images">directly</a> from your Docker <a href="https://docs.docker.com/engine/userguide/dockervolumes/?ref=devlog.cyconet.org#mount-a-host-directory-as-a-data-volume">data volume</a>, adding <a href="https://www.nginx.com/blog/nginx-caching-guide/?ref=devlog.cyconet.org">caching</a> and maybe <a href="http://nginx.org/en/docs/http/configuring_https_servers.html?ref=devlog.cyconet.org">encryption</a>.</p>
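<p>Serving static content directly could be sketched like this; the <code>alias</code> path assumes the Ghost data volume from my previous post is mounted at <code>/srv/docker/ghost/</code> with images in its <code>images</code> directory, so adjust it to your layout:</p>

```nginx
location ^~ /content/images/ {
  # Serve images straight from the mounted data volume,
  # bypassing the proxy to the Ghost container.
  alias /srv/docker/ghost/images/;
  expires 30d;
  access_log off;
}
```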
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Trying icinga2 and icingaweb2 with Docker]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In case you ever wanted to look at <a href="http://www.icinga.org/icinga/icinga-2/?ref=devlog.cyconet.org">Icinga2</a>, even its <a href="https://www.icinga.org/icinga/icinga-2/distributed-monitoring/?ref=devlog.cyconet.org">distributed</a> features, without the hassle of installing a whole server setup, this might be interesting for you.</p>
<p>At first, you need to have a running Docker on your system. For more information, have a look into my <a href="https://devlog.cyconet.org/2016/01/14/running-ghost-blogging-platform-via-docker/#installingdocker">previous post</a>!</p>
<h3 id="initiatingdockerimages">Initiating Docker images</h3>]]></description><link>https://devlog.cyconet.org/2016/01/19/trying-icinga2-and-icingaweb2-with-docker/</link><guid isPermaLink="false">5cd80c7992418c0001247419</guid><category><![CDATA[Planet]]></category><category><![CDATA[Debian]]></category><category><![CDATA[Monitoring]]></category><category><![CDATA[Icinga]]></category><category><![CDATA[OpenSource]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Icinga2]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Tue, 19 Jan 2016 08:44:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In case you ever wanted to look at <a href="http://www.icinga.org/icinga/icinga-2/?ref=devlog.cyconet.org">Icinga2</a>, even its <a href="https://www.icinga.org/icinga/icinga-2/distributed-monitoring/?ref=devlog.cyconet.org">distributed</a> features, without the hassle of installing a whole server setup, this might be interesting for you.</p>
<p>At first, you need to have a running Docker on your system. For more information, have a look into my <a href="https://devlog.cyconet.org/2016/01/14/running-ghost-blogging-platform-via-docker/#installingdocker">previous post</a>!</p>
<h3 id="initiatingdockerimages">Initiating Docker images</h3>
<pre><code class="language-sh">$ git clone https://github.com/joshuacox/docker-icinga2.git &amp;&amp; \
  cd docker-icinga2
$ make temp
[...]
$ make grab
[...]
$ make prod
[...]
</code></pre>
<h3 id="settingicingaweb2password">Setting <a href="http://www.icinga.org/icinga/icinga-web-2/?ref=devlog.cyconet.org">IcingaWeb2</a> password</h3>
<p>(Or using the <a href="https://github.com/joshuacox/docker-icinga2/blob/master/README.md?ref=devlog.cyconet.org#icinga-web-2">default</a> one)</p>
<pre><code class="language-sh">$ make enter
docker exec -i -t `cat cid` /bin/bash
root@ce705e592611:/# openssl passwd -1 f00b4r
$1$jgAqBcIm$aQxyTPIniE1hx4VtIsWvt/
root@ce705e592611:/# mysql -h mysql icingaweb2 -p -e \
  &quot;UPDATE icingaweb_user SET password_hash=&apos;\$1\$jgAqBcIm\$aQxyTPIniE1hx4VtIsWvt/&apos; WHERE name=&apos;icingaadmin&apos;;&quot;
Enter password:
root@ce705e592611:/# exit
</code></pre>
<h3 id="settingicingaclassicuipassword">Setting <a href="http://www.icinga.org/category/webinterface/classic-ui/?ref=devlog.cyconet.org">Icinga Classic UI</a> password</h3>
<pre><code class="language-sh">$ make enter
docker exec -i -t `cat cid` /bin/bash
root@ce705e592611:/# htpasswd /etc/icinga2-classicui/htpasswd.users icingaadmin
New password: 
Re-type new password: 
Adding password for user icingaadmin
root@ce705e592611:/# exit
</code></pre>
<h3 id="cleaningthingsupandmakingpermanent">Cleaning things up and making permanent</h3>
<pre><code>$ docker stop icinga2 &amp;&amp; docker stop icinga2-mysql
icinga2
icinga2-mysql
$ cp -a /tmp/datadir ~/docker-icinga2.datadir
$ echo &quot;~/docker-icinga2.datadir&quot; &gt; ./DATADIR
$ docker start icinga2-mysql &amp;&amp; rm cid &amp;&amp; docker rm icinga2 &amp;&amp; \
  make runprod
icinga2-mysql
icinga2
chmod 777 /tmp/tmp.08c34zjRMpDOCKERTMP
d34d56258d50957492560f481093525795d547a1c8fc985e178b2a29b313d47a
</code></pre>
<p>Now you should be able to access the IcingaWeb2 web interface on <a href="http://localhost:4080/icingaweb2?ref=devlog.cyconet.org">http://localhost:4080/icingaweb2</a> and the Icinga Classic UI web interface at <a href="http://localhost:4080/icinga2-classicui?ref=devlog.cyconet.org">http://localhost:4080/icinga2-classicui</a>.</p>
<p>For further information about this Docker setup please consult the <a href="https://github.com/joshuacox/docker-icinga2/blob/master/README.md?ref=devlog.cyconet.org">documentation</a> written by <a href="http://joshuacox.github.io/?ref=devlog.cyconet.org">Joshua Cox</a> who has worked on this project. For information about Icinga2 itself, please have a look into the <a href="http://docs.icinga.org/icinga2/latest/doc/module/icinga2/toc?ref=devlog.cyconet.org">Icinga2 Documentation</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Running Ghost blogging platform via Docker]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>When I was thinking about using <a href="http://ghost.org/?ref=devlog.cyconet.org">Ghost</a>, I read the <a href="https://github.com/TryGhost/Ghost?ref=devlog.cyconet.org#quick-start-install">installation guide</a> and then just closed the browser window.<br>
I didn&apos;t want to install <a href="http://www.npmjs.com/?ref=devlog.cyconet.org">npm</a>, yet another package manager, or <a href="http://docs.ghost.org/pl/installation/deploy/?ref=devlog.cyconet.org#making-ghost-run-forever">hack</a> init scripts together. Not to mention <a href="http://support.ghost.org/how-to-upgrade/?ref=devlog.cyconet.org">updating</a> Ghost itself.</p>
<p>Some weeks later I did think</p>]]></description><link>https://devlog.cyconet.org/2016/01/14/running-ghost-blogging-platform-via-docker/</link><guid isPermaLink="false">5cd80c7992418c0001247418</guid><category><![CDATA[Ghost]]></category><category><![CDATA[OpenSource]]></category><category><![CDATA[Debian]]></category><category><![CDATA[Planet]]></category><category><![CDATA[Blogging]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Thu, 14 Jan 2016 18:36:42 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>When I was thinking about using <a href="http://ghost.org/?ref=devlog.cyconet.org">Ghost</a>, I read the <a href="https://github.com/TryGhost/Ghost?ref=devlog.cyconet.org#quick-start-install">installation guide</a> and then just closed the browser window.<br>
I didn&apos;t want to install <a href="http://www.npmjs.com/?ref=devlog.cyconet.org">npm</a>, yet another package manager, or <a href="http://docs.ghost.org/pl/installation/deploy/?ref=devlog.cyconet.org#making-ghost-run-forever">hack</a> init scripts together. Not to mention <a href="http://support.ghost.org/how-to-upgrade/?ref=devlog.cyconet.org">updating</a> Ghost itself.</p>
<p>Some weeks later I thought about using Ghost again. It has a nice <a href="http://support.ghost.org/markdown-guide/?ref=devlog.cyconet.org">Markdown</a> editor and some other nice <a href="http://ghost.org/features%e2%80%8e/?ref=devlog.cyconet.org">features</a>. Since everybody is jumping on the <a href="http://en.wikipedia.org/wiki/Docker_(software)?ref=devlog.cyconet.org">Docker</a> bandwagon these days and I had already used it for some tests, I thought trying the <a href="http://hub.docker.com/_/ghost/?ref=devlog.cyconet.org">Ghost Docker image</a> might be a good idea.</p>
<p>If you are interested into how I did that, read on.</p>
<p>I suppose you have installed a stock <a href="http://debian.org/?ref=devlog.cyconet.org">Debian</a> <a href="http://wiki.debian.org/DebianJessie?ref=devlog.cyconet.org">Jessie</a>.</p>
<h3 id="installingdocker">Installing Docker</h3>
<script src="https://gist.github.com/waja/01ba2641f93f461044f9.js"></script>
<h3 id="pullingthedockerimage">Pulling the Docker image</h3>
<p>Just in case you haven&apos;t already, (re)start <code>docker</code> with <code>service docker restart</code> before continuing.</p>
<pre><code># docker pull ghost
</code></pre>
<script type="text/javascript" src="https://asciinema.org/a/5huq5pwzsb8mll2x073g6tn2o.js" id="asciicast-5huq5pwzsb8mll2x073g6tn2o" async></script>
<h3 id="makingghostcontainerimagerunforever"><a href="http://docs.ghost.org/pl/installation/deploy/?ref=devlog.cyconet.org#making-ghost-run-forever">Making Ghost (container image) run forever</a></h3>
<p>I did not like <a href="http://en.wikipedia.org/wiki/Systemd?ref=devlog.cyconet.org">systemd</a> in the first place for many reasons. But in some circumstances it makes sense. In case of handling a Docker container, using a systemd <a href="http://www.freedesktop.org/software/systemd/man/systemd.unit.html?ref=devlog.cyconet.org">unit file</a> makes life much easier.</p>
<pre><code># mkdir -p /srv/docker/ghost/
# cat &gt; /etc/systemd/system/ghost.service &lt;&lt; EOF
[Unit]
Description=Ghost Service
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill ghost
ExecStartPre=-/usr/bin/docker rm ghost
ExecStartPre=-/usr/bin/docker pull ghost
ExecStart=/usr/bin/docker run  --name ghost --publish 2368:2368 --env &apos;NODE_ENV=production&apos; --volume /srv/docker/ghost/:/var/lib/ghost ghost
ExecStop=/usr/bin/docker stop ghost

[Install]
WantedBy=multi-user.target
EOF
# systemctl enable ghost &amp;&amp; systemctl daemon-reload &amp;&amp; systemctl start ghost 
</code></pre>
<p>This will start your container at boot time and will even check for a new Docker image and fetch it if needed. If you don&apos;t like this behavior, just comment out the <code>docker pull</code> line in the unit file and reread it with <code>systemctl daemon-reload</code>.</p>
<p>Now you should have something listening on port 2368:</p>
<pre><code># netstat -tapn | grep 2368
tcp6       0      0 :::2368                 :::*                    LISTEN      7061/docker-proxy
</code></pre>
<p><em>Update:</em> Jo&#xEB;l Dinel sent me a mail pointing out that starting your Docker container with <code>--restart always</code> will make sure it is brought up again when Docker or even the whole system gets restarted. I had actually used that before, and it might be a more lightweight solution, but I like the systemd unit file solution a lot more.</p>
<h3 id="persistentdata">Persistent Data</h3>
<p>Thanks to the Docker <a href="http://docs.docker.com/engine/reference/commandline/run/?ref=devlog.cyconet.org#mount-volume-v-read-only">mount</a> option you can find all your data in <code>/srv/docker/ghost/</code>. So your blog will still have its content, even if the ghost Docker image is updated:</p>
<pre><code># ls /srv/docker/ghost/
apps  config.js  data  images  themes
</code></pre>
<h3 id="accessingthecontainer">Accessing the container</h3>
<p>To kick your Ghost into production, it might be useful to make it available at least on port 80. This can be done, for example, by changing your Docker <a href="http://docs.docker.com/engine/reference/commandline/run/?ref=devlog.cyconet.org#publish-or-expose-port-p-expose">publish</a> configuration or adding a <a href="http://en.wikipedia.org/wiki/Network_address_translation?ref=devlog.cyconet.org#DNAT">DNAT</a> rule to your <a href="https://en.wikipedia.org/wiki/Firewall_(computing)?ref=devlog.cyconet.org">firewall</a>.</p>
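<p>Both options could be sketched as follows; the port mapping reuses the <code>docker run</code> line from the unit file above, and the iptables rule (a REDIRECT, which is a special case of DNAT to the local host) is an assumption you would adapt to your firewall setup:</p>

```
# Option 1: publish the container port on 80 directly
docker run --name ghost --publish 80:2368 --env 'NODE_ENV=production' \
  --volume /srv/docker/ghost/:/var/lib/ghost ghost

# Option 2: redirect incoming port 80 to 2368 on the host firewall
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 2368
```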
<p>But I would recommend using a <a href="http://en.wikipedia.org/wiki/Proxy_server?ref=devlog.cyconet.org">proxy</a> in front of your Docker container. This might be part of one of my next articles.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[New blogging engine]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Exactly 3 years after I <a href="https://devlog.cyconet.org/2013/01/04/new-blogging-engine/">moved</a> on from <a href="http://wordpress.org/?ref=devlog.cyconet.org">Wordpress</a> to <a href="http://octopress.org/?ref=devlog.cyconet.org">Octopress</a> I thought it&apos;s time for something new. Some of you might have noticed that I&apos;ve not blogged much in the past.</p>
<p>A new Octopress version was <a href="http://octopress.org/2015/01/15/octopress-3.0-is-coming/?ref=devlog.cyconet.org">promised</a> a year ago. While I&apos;ve liked</p>]]></description><link>https://devlog.cyconet.org/2016/01/09/new-blogging-engine-2/</link><guid isPermaLink="false">5cd80c7992418c0001247417</guid><category><![CDATA[Planet]]></category><category><![CDATA[OpenSource]]></category><category><![CDATA[Ghost]]></category><category><![CDATA[Blogging]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Sat, 09 Jan 2016 01:15:14 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Exactly 3 years after I <a href="https://devlog.cyconet.org/2013/01/04/new-blogging-engine/">moved</a> on from <a href="http://wordpress.org/?ref=devlog.cyconet.org">Wordpress</a> to <a href="http://octopress.org/?ref=devlog.cyconet.org">Octopress</a> I thought it&apos;s time for something new. Some of you might have noticed that I&apos;ve not blogged much in the past.</p>
<p>A new Octopress version was <a href="http://octopress.org/2015/01/15/octopress-3.0-is-coming/?ref=devlog.cyconet.org">promised</a> a year ago. While I&apos;ve liked writing in <a href="http://en.wikipedia.org/wiki/Markdown?ref=devlog.cyconet.org">Markdown</a>, the deployment workflow was horribly broken and keeping Octopress up to date was impossible. I blogged so seldom that I needed to consult the documentation every single time.</p>
<p>After looking into several projects, <a href="http://ghost.org/?ref=devlog.cyconet.org">Ghost</a> seemed the most promising. And the good news: it has a split-screen Markdown editor with an integrated live preview.</p>
<p><a href="http://ghost.org/?ref=devlog.cyconet.org"><img src="https://ghost.org/images/ghost.png" alt="The Ghost Logo" loading="lazy"></a></p>
<p>There are several migration scripts out there, but I only found <a href="https://github.com/mikl/ghost-octopress-converter?ref=devlog.cyconet.org">one</a> which was able to also export tags. The <a href="http://support.ghost.org/import-and-export-my-ghost-blog-settings-and-data/?ref=devlog.cyconet.org">import</a> into Ghost worked like a charm.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Wordpress dictionary attack]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Today early in the morning my monitoring system notified me about unusually high outgoing traffic on my hosting platform. I traced the problem down to the webserver which is also hosting this abandoned website.</p>
<p><img src="https://farm8.staticflickr.com/7598/16717060209_6a46fd81f4_o_d.png" alt loading="lazy"></p>
<p>Looking into this with <a href="http://iptraf.seul.org/?ref=devlog.cyconet.org"><em>iptraf</em></a> revealed that this traffic is coming only from one IP. At first</p>]]></description><link>https://devlog.cyconet.org/2015/03/23/wordpress-dictionary-attack/</link><guid isPermaLink="false">5cd80c7992418c0001247416</guid><category><![CDATA[Planet]]></category><category><![CDATA[Networking]]></category><category><![CDATA[wordpress]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Mon, 23 Mar 2015 08:23:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Today early in the morning my monitoring system notified me about unusually high outgoing traffic on my hosting platform. I traced the problem down to the webserver which is also hosting this abandoned website.</p>
<p><img src="https://farm8.staticflickr.com/7598/16717060209_6a46fd81f4_o_d.png" alt loading="lazy"></p>
<p>Looking into this with <a href="http://iptraf.seul.org/?ref=devlog.cyconet.org"><em>iptraf</em></a> revealed that this traffic was coming from only one IP. At first I thought somebody might be grabbing my Debian packages from <a href="http://ftp.cyconet.org/?ref=devlog.cyconet.org">ftp.cyconet.org</a>. But no, it was targeting my highly sophisticated blogging platform.</p>
<pre><code>$ grep 46.235.43.146 /var/log/nginx/vhosts/access_logs/blog.waja.info-access.log | tail -2
46.235.43.146 - - [23/Mar/2015:08:20:12 +0100] &quot;POST /wp-login.php HTTP/1.0&quot; 404 22106 &quot;-&quot; &quot;-&quot;
46.235.43.146 - - [23/Mar/2015:08:20:12 +0100] &quot;POST /wp-login.php HTTP/1.0&quot; 404 22106 &quot;-&quot; &quot;-&quot;
$ grep 46.235.43.146 /var/log/nginx/vhosts/access_logs/blog.waja.info-access.log | wc -l
83676
$ grep 46.235.43.146 /var/log/nginx/vhosts/access_logs/blog.waja.info-access.log | wc -l
83782
$ grep 46.235.43.146 /var/log/nginx/vhosts/access_logs/blog.waja.info-access.log | \
grep -v wp-login.php | wc -l
0
</code></pre>
<p>It makes me really sad to see that dictionary attacks are hammering away with such force these days, without even evaluating the 404 response.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Updated Monitoring Plugins Version is coming soon]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Three months ago version 2.0 of <a href="https://www.monitoring-plugins.org/?ref=devlog.cyconet.org">Monitoring Plugins</a> was released. Since then many <a href="https://github.com/monitoring-plugins/monitoring-plugins/compare/v2.0...master?ref=devlog.cyconet.org">changes</a> were integrated. You can find a quick overview in the upstream <a href="https://github.com/monitoring-plugins/monitoring-plugins/blob/master/NEWS?ref=devlog.cyconet.org">NEWS</a>.</p>
<p>Now it&apos;s time to move forward and a new release is expected soon. It would be very welcome if you could</p>]]></description><link>https://devlog.cyconet.org/2014/10/09/updated-monitoring-plugins-version-is-coming-soon/</link><guid isPermaLink="false">5cd80c7992418c0001247415</guid><category><![CDATA[Planet]]></category><category><![CDATA[OpenSource]]></category><category><![CDATA[Plugins]]></category><category><![CDATA[Icinga]]></category><category><![CDATA[nagios]]></category><category><![CDATA[Monitoring]]></category><category><![CDATA[Naemon]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Thu, 09 Oct 2014 11:46:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Three months ago version 2.0 of <a href="https://www.monitoring-plugins.org/?ref=devlog.cyconet.org">Monitoring Plugins</a> was released. Since then many <a href="https://github.com/monitoring-plugins/monitoring-plugins/compare/v2.0...master?ref=devlog.cyconet.org">changes</a> were integrated. You can find a quick overview in the upstream <a href="https://github.com/monitoring-plugins/monitoring-plugins/blob/master/NEWS?ref=devlog.cyconet.org">NEWS</a>.</p>
<p>Now it&apos;s time to move forward and a new release is expected soon. It would be very welcome if you could give the latest <a href="https://www.monitoring-plugins.org/download/snapshot/monitoring-plugins-master.tar.gz?ref=devlog.cyconet.org">source snapshot</a> a try.<br>
You can also give the <a href="https://www.debian.org/intro/about?ref=devlog.cyconet.org">Debian</a> packages a go and grab them from my &apos;unstable&apos; and &apos;wheezy-backports&apos; repositories at <a href="http://ftp.cyconet.org/instructions?ref=devlog.cyconet.org">http://ftp.cyconet.org/</a>. Right after the stable release, the new packages will be uploaded into Debian unstable. All packaging changes can be observed in the <a href="http://anonscm.debian.org/gitweb/?p=pkg-nagios%2Fpkg-monitoring-plugins.git%3Ba%3Dblob_plain%3Bf%3Ddebian%2Fchangelog%3Bhb%3DHEAD&amp;ref=devlog.cyconet.org">changelog</a>.</p>
<p>Feedback is very appreciated via <a href="https://github.com/monitoring-plugins/monitoring-plugins/issues?ref=devlog.cyconet.org">Issue tracker</a> or the <a href="https://www.monitoring-plugins.org/list/listinfo/devel?ref=devlog.cyconet.org">Monitoring Plugins Development Mailinglist</a>.</p>
<p><strong>Update:</strong> The official call for testing is <a href="https://www.monitoring-plugins.org/archive/devel/2014-October/009882.html?ref=devlog.cyconet.org">available</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Redis HA with Redis Sentinel and VIP]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>For a current project we decided to use <a href="http://redis.io/?ref=devlog.cyconet.org">Redis</a> for several reasons. As availability is a critical part, we discovered that <a href="http://redis.io/topics/sentinel?ref=devlog.cyconet.org">Redis Sentinel</a> can monitor Redis and handle an automatic master failover to an available slave.</p>
<p>Setting up the Redis replication was straightforward, and so was setting up Sentinel.</p>]]></description><link>https://devlog.cyconet.org/2014/09/25/redis-ha-with-redis-sentinel-and-vip/</link><guid isPermaLink="false">5cd80c7992418c0001247414</guid><category><![CDATA[Planet]]></category><category><![CDATA[Linux]]></category><category><![CDATA[OpenSource]]></category><category><![CDATA[Software]]></category><category><![CDATA[HighAvailability]]></category><category><![CDATA[Redis]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Thu, 25 Sep 2014 19:56:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>For a current project we decided to use <a href="http://redis.io/?ref=devlog.cyconet.org">Redis</a> for several reasons. As availability is a critical part, we discovered that <a href="http://redis.io/topics/sentinel?ref=devlog.cyconet.org">Redis Sentinel</a> can monitor Redis and handle an automatic master failover to an available slave.</p>
<p>Setting up the Redis replication was straightforward, and so was setting up Sentinel. Please keep in mind that if you configure Redis to require an authentication password, you also need to provide it for the replication process (masterauth) and for the Sentinel connection (auth-pass).</p>
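<p>The relevant directives could be sketched like this; the IP addresses, port, quorum and password are placeholder assumptions:</p>

```
# redis.conf on the slave (hypothetical values)
slaveof 192.0.2.10 6379
masterauth s3cr3t
# set on master and slave so clients authenticate the same way
requirepass s3cr3t

# sentinel.conf on each sentinel node
sentinel monitor mymaster 192.0.2.10 6379 2
sentinel auth-pass mymaster s3cr3t
```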
<p><img src="http://upload.wikimedia.org/wikipedia/en/thumb/6/6b/Redis_Logo.svg/467px-Redis_Logo.svg.png" alt loading="lazy"></p>
<p>The more interesting part is how to migrate the clients over to the new master in case of a failover. While Redis Sentinel could also be used as a configuration provider, we decided not to use this feature, as the application would need to request the current master node from Redis Sentinel quite often, which might have a performance impact.<br>
The first idea was to use some kind of <a href="http://en.wikipedia.org/wiki/Virtual_Router_Redundancy_Protocol?ref=devlog.cyconet.org">VRRP</a> as implemented in <a href="http://www.keepalived.org/?ref=devlog.cyconet.org">keepalived</a> or something similar. The problem with such a solution is that you need to notify the VRRP process when a Redis failover is in progress.<br>
Well, Redis Sentinel has a configuration option called &apos;sentinel client-reconfig-script&apos;:</p>
<pre><code># When the master changed because of a failover a script can be called in
# order to perform application-specific tasks to notify the clients that the
# configuration has changed and the master is at a different address.
# 
# The following arguments are passed to the script:
#
# &lt;master-name&gt; &lt;role&gt; &lt;state&gt; &lt;from-ip&gt; &lt;from-port&gt; &lt;to-ip&gt; &lt;to-port&gt;
#
# &lt;state&gt; is currently always &quot;failover&quot;
# &lt;role&gt; is either &quot;leader&quot; or &quot;observer&quot;
# 
# The arguments from-ip, from-port, to-ip, to-port are used to communicate
# the old address of the master and the new address of the elected slave
# (now a master).
#
# This script should be resistant to multiple invocations.
</code></pre>
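<p>For reference, wiring such a script into Sentinel might look like the following sentinel.conf excerpt; the master name, quorum, and script path are illustrative assumptions, not values from our actual setup:</p>

```
# sentinel.conf (excerpt) -- names, addresses and paths are illustrative
sentinel monitor mymaster 192.0.2.11 6379 2
sentinel client-reconfig-script mymaster /usr/local/bin/redis-vip.sh
```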
<p>This looks pretty good, and since a <code>&lt;role&gt;</code> is provided, I thought it would be a good idea to just call a script that evaluates this value and, based on the result, adds the <a href="http://en.wikipedia.org/wiki/Virtual_IP_address?ref=devlog.cyconet.org">VIP</a> to the local network interface when we get &apos;leader&apos; and removes it when we get &apos;observer&apos;. It turned out that this was not working, as <code>&lt;role&gt;</code> did not reliably return &apos;leader&apos; when the local Redis instance became master and &apos;observer&apos; when it became slave. This was pretty annoying and I was close to giving up.<br>
Fortunately I stumbled upon a (possibly) Chinese <a href="http://blog.youyo.info/blog/2014/05/24/redis-cluster/?ref=devlog.cyconet.org">post</a> about Redis Sentinel, where the same thing was attempted. On second look I recognized that the decision there was made on <code>${6}</code>, which is <code>&lt;to-ip&gt;</code>, nothing more than the new IP of the Redis master instance. So I rewrote my tiny shell script, and after some other pitfalls this strategy worked out well.</p>
<script src="https://gist.github.com/waja/301c5ca1b532669a9b6e.js"></script>
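<p>In case the embedded gist is not visible in your feed reader, the idea can be sketched roughly like this; note this is a simplified illustration, not the actual script from the gist, and the VIP and interface name are assumptions:</p>

```shell
#!/bin/sh
# Hypothetical sketch of a 'sentinel client-reconfig-script'.
# Sentinel invokes the script as:
#   <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# VIP and interface below are placeholders -- adjust to your environment.
VIP='192.0.2.100/24'
IFACE='eth0'

# Decide whether this host should hold the VIP: print "acquire" when the
# new master IP (argument 1) appears among the locally configured IPs
# (argument 2), "release" otherwise.
vip_action() {
    if echo "$2" | grep -qw "$1"; then
        echo acquire
    else
        echo release
    fi
}

# Only act when Sentinel actually passed its arguments; the ip(8) calls
# are made idempotent by ignoring "already exists" / "not found" errors.
if [ $# -ge 6 ]; then
    local_ips=$(ip -o -4 addr show | awk '{print $4}' | cut -d/ -f1)
    case $(vip_action "$6" "$local_ips") in
        acquire) ip addr add "$VIP" dev "$IFACE" 2>/dev/null || true ;;
        release) ip addr del "$VIP" dev "$IFACE" 2>/dev/null || true ;;
    esac
fi
```

The key point is that the script must be resistant to multiple invocations, as the Sentinel documentation quoted above demands, which is why the add/remove operations tolerate the VIP already being in the desired state.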
<p>Some notes about convergence: currently it takes roughly 6-7 seconds for the VIP to be migrated to the new node after Redis Sentinel detects a broken master. This is not the best performance, but as we do not expect this to happen often, we will design the application using our Redis setup to cope with this (hopefully) rare scenario.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Monitoring Plugins Debian packages]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>You may wonder why the good old <a href="https://packages.qa.debian.org/nagios-plugins?ref=devlog.cyconet.org">nagios-plugins</a> are not up to date in <a href="https://www.debian.org/intro/about?ref=devlog.cyconet.org">Debian</a> <a href="https://www.debian.org/releases/unstable/?ref=devlog.cyconet.org">unstable</a> and <a href="https://www.debian.org/releases/testing/?ref=devlog.cyconet.org">testing</a>.</p>
<p>Since the people behind and maintaining the plugins &lt;= 1.5 were <a href="https://www.monitoring-plugins.org/doc/faq/fork.html?ref=devlog.cyconet.org">forced to rename</a> the software project into <a href="https://www.monitoring-plugins.org/?ref=devlog.cyconet.org">Monitoring Plugins</a>, there was some work behind the scenes and much <a href="http://en.wikipedia.org/wiki/Quality_assurance?ref=devlog.cyconet.org">QA</a> work</p>]]></description><link>https://devlog.cyconet.org/2014/08/08/monitoring-plugins-debian-packages/</link><guid isPermaLink="false">5cd80c7992418c0001247413</guid><category><![CDATA[Planet]]></category><category><![CDATA[OpenSource]]></category><category><![CDATA[Plugins]]></category><category><![CDATA[Icinga]]></category><category><![CDATA[nagios]]></category><category><![CDATA[Monitoring]]></category><category><![CDATA[Naemon]]></category><category><![CDATA[Shinken]]></category><category><![CDATA[Sensu]]></category><dc:creator><![CDATA[Jan Wagner]]></dc:creator><pubDate>Fri, 08 Aug 2014 22:03:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>You may wonder why the good old <a href="https://packages.qa.debian.org/nagios-plugins?ref=devlog.cyconet.org">nagios-plugins</a> are not up to date in <a href="https://www.debian.org/intro/about?ref=devlog.cyconet.org">Debian</a> <a href="https://www.debian.org/releases/unstable/?ref=devlog.cyconet.org">unstable</a> and <a href="https://www.debian.org/releases/testing/?ref=devlog.cyconet.org">testing</a>.</p>
<p>Since the people behind and maintaining the plugins &lt;= 1.5 were <a href="https://www.monitoring-plugins.org/doc/faq/fork.html?ref=devlog.cyconet.org">forced to rename</a> the software project into <a href="https://www.monitoring-plugins.org/?ref=devlog.cyconet.org">Monitoring Plugins</a>, there was some work behind the scenes and much <a href="http://en.wikipedia.org/wiki/Quality_assurance?ref=devlog.cyconet.org">QA</a> work necessary to release the software in a proper state. This happened 4 weeks ago with the <a href="https://www.monitoring-plugins.org/news/release-2-0.html?ref=devlog.cyconet.org">release</a> of version 2.0 of the Monitoring Plugins.</p>
<p>With one day of delay the package was uploaded into unstable, but hit the Debian <a href="https://ftp-master.debian.org/new.html?ref=devlog.cyconet.org">NEW queue</a> due to the changed package name(s). Now we (and maybe you) are waiting for them to be reviewed by <a href="https://ftp-master.debian.org/?ref=devlog.cyconet.org">ftp-master</a>. This will hopefully happen before the <a href="https://www.debian.org/releases/jessie/?ref=devlog.cyconet.org">jessie</a> <a href="https://release.debian.org/jessie/freeze_policy.html?ref=devlog.cyconet.org">freeze</a>.</p>
<p>Until that happens, you may grab packages for <a href="https://www.debian.org/releases/wheezy/?ref=devlog.cyconet.org">wheezy</a> from the &apos;wheezy-backports&apos; suite at <a href="http://ftp.cyconet.org/debian/?ref=devlog.cyconet.org">ftp.cyconet.org/debian/</a> or the &apos;debmon-wheezy&apos; suite from <a href="http://debmon.org/instructions?ref=devlog.cyconet.org">debmon.org</a>. Feedback is much appreciated.</p>
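<p>For illustration, pulling from the &apos;wheezy-backports&apos; suite might look like the fragment below; the &apos;main&apos; component is an assumption on my part, so please check the debmon.org instructions for the authoritative lines:</p>

```
# /etc/apt/sources.list.d/cyconet.list -- 'main' component assumed
deb http://ftp.cyconet.org/debian/ wheezy-backports main
```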
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>