<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Scheduler on The IT Hollow</title>
    <link>https://theithollow.com/tags/scheduler/</link>
    <description>Recent content in Scheduler on The IT Hollow</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 20 Apr 2020 15:00:00 +0000</lastBuildDate>
    <atom:link href="https://theithollow.com/tags/scheduler/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Kubernetes Resource Requests and Limits</title>
      <link>https://theithollow.com/2020/04/20/kubernetes-resource-requests-and-limits/</link>
      <pubDate>Mon, 20 Apr 2020 15:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/04/20/kubernetes-resource-requests-and-limits/</guid>
      <description>&lt;p&gt;Containerizing applications and running them on Kubernetes doesn&amp;rsquo;t mean we can forget all about resource utilization. Our thought process may have changed because we can much more easily scale out our application as demand increases, but we often still need to consider how our containers might fight with each other for resources. Resource Requests and Limits can be used to help stop the &amp;ldquo;noisy neighbor&amp;rdquo; problem in a Kubernetes Cluster.&lt;/p&gt;
&lt;h2 id=&#34;resource-requests-and-limits---the-theory&#34;&gt;Resource Requests and Limits - The Theory&lt;/h2&gt;
&lt;p&gt;Kubernetes uses the concepts of a &amp;ldquo;Resource Request&amp;rdquo; and a &amp;ldquo;Resource Limit&amp;rdquo; when defining how many resources a container within a pod should receive. Let&amp;rsquo;s look at each of these topics on their own, starting with resource requests.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes HA on vSphere</title>
      <link>https://theithollow.com/2020/01/27/kubernetes-ha-on-vsphere/</link>
      <pubDate>Mon, 27 Jan 2020 15:15:12 +0000</pubDate>
      <guid>https://theithollow.com/2020/01/27/kubernetes-ha-on-vsphere/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve been on the operations side of the IT house, you know that one of your primary job functions is to ensure High Availability (HA) of production workloads. This blog post focuses on making sure applications deployed on a vSphere Kubernetes cluster will be highly available.&lt;/p&gt;
&lt;h2 id=&#34;the-control-plane&#34;&gt;The Control Plane&lt;/h2&gt;
&lt;p&gt;Ok, before we talk about workloads, we should discuss the Kubernetes Control plane components. When we deploy Kubernetes on virtual machines, we have to make sure that the brains of the Kubernetes cluster will continue working even if there is a hardware failure. The first step is to make sure that your control plane components are deployed on different physical (ESXi) hosts. This can be done with a vSphere Host Affinity Rule to keep k8s VMs pinned to groups of hosts or anti-affinity rules to make sure two control plane nodes aren&amp;rsquo;t placed on the same host. After this is done, your Load Balancer should be configured to point to your k8s control plane VMs and a health check is configured for the /healthz path.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
