<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Ha on The IT Hollow</title>
    <link>https://theithollow.com/tags/ha/</link>
    <description>Recent content in Ha on The IT Hollow</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 24 Feb 2020 15:05:00 +0000</lastBuildDate>
    <atom:link href="https://theithollow.com/tags/ha/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Highly Available Envoy Proxies for the Kubernetes Control Plane</title>
      <link>https://theithollow.com/2020/02/24/highly-available-envoy-proxies-for-the-kubernetes-control-plane/</link>
      <pubDate>Mon, 24 Feb 2020 15:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/02/24/highly-available-envoy-proxies-for-the-kubernetes-control-plane/</guid>
      <description>&lt;p&gt;Recently I was tasked with setting up some virtual machines to be used as a load balancer for a Kubernetes cluster. The environment we were deploying our Kubernetes cluster into didn&amp;rsquo;t have a load balancer available, so we thought we&amp;rsquo;d just throw some Envoy proxies on some VMs to do the job. This post will show you how the following tasks were completed:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Deploy Envoy on a pair of CentOS 7 virtual machines.&lt;/li&gt;
&lt;li&gt;Configure Envoy with health checks for the Kubernetes control plane.&lt;/li&gt;
&lt;li&gt;Install keepalived on both servers to manage failover.&lt;/li&gt;
&lt;li&gt;Configure keepalived to fail over if a server goes offline or the Envoy service is not running.&lt;/li&gt;
&lt;/ol&gt;
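&lt;p&gt;As a rough sketch of what step 2 can look like, an Envoy cluster definition with a health check pointed at the kube-apiserver might resemble the following. The node name and port 6443 are placeholder assumptions for illustration, not values taken from the environment above:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hypothetical Envoy v3 cluster fragment; address is a placeholder.
clusters:
- name: k8s_control_plane
  connect_timeout: 5s
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: k8s_control_plane
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: k8s-master-1.lab.local, port_value: 6443 }
  health_checks:
  - timeout: 2s
    interval: 5s
    unhealthy_threshold: 2
    healthy_threshold: 2
    tcp_health_check: {}
&lt;/code&gt;&lt;/pre&gt;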
&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2020/02/image-61-1024x495.png&#34;/&gt; 
&lt;/figure&gt;

&lt;h2 id=&#34;deploy-envoy&#34;&gt;Deploy Envoy&lt;/h2&gt;
&lt;p&gt;The first step will be to set up a pair of CentOS 7 servers. I&amp;rsquo;ve used virtual servers for this post, but bare metal would work the same. Also, similar steps could be used if you prefer Debian as your Linux flavor.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes HA on vSphere</title>
      <link>https://theithollow.com/2020/01/27/kubernetes-ha-on-vsphere/</link>
      <pubDate>Mon, 27 Jan 2020 15:15:12 +0000</pubDate>
      <guid>https://theithollow.com/2020/01/27/kubernetes-ha-on-vsphere/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve been on the operations side of the IT house, you know that one of your primary job functions is to ensure High Availability (HA) of production workloads. This blog post focuses on making sure applications deployed on a vSphere Kubernetes cluster will be highly available.&lt;/p&gt;
&lt;h2 id=&#34;the-control-plane&#34;&gt;The Control Plane&lt;/h2&gt;
&lt;p&gt;Ok, before we talk about workloads, we should discuss the Kubernetes control plane components. When we deploy Kubernetes on virtual machines, we have to make sure that the brains of the Kubernetes cluster will continue working even if there is a hardware failure. The first step is to make sure that your control plane components are deployed on different physical (ESXi) hosts. This can be done with a vSphere host affinity rule to keep k8s VMs pinned to groups of hosts, or with anti-affinity rules to make sure that no two control plane nodes are placed on the same host. After this is done, your load balancer should be configured to point to your k8s control plane VMs, with a health check configured against the /healthz path.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
