The IT Hollow

Add AWS Web Application Firewall to Protect your Apps

Some things change when you move to the cloud, but other things stay very much the same, like protecting your resources from outside threats. There are always no-gooders out there trying to steal data or cause mayhem like in those Allstate commercials. Our first line of defense should be well-written applications that require authentication, and with AWS we make sure we're setting up security groups to limit access to those resources. But how about an extra layer of protection from a Web Application Firewall? AWS WAF lets us add some extra protections at the edge to guard against those bad guys and girls.

Background on WAF

The AWS Web Application Firewall (WAF) allows us to create custom rules to protect against things like cross-site scripting, SQL injection, or traffic from certain geographies. If your site isn't ready for GDPR, maybe you block Europe from accessing your site altogether. Of course, WAF can also do things like block specific IP addresses that have been identified as bots, but we expect any firewall to be able to do that. WAF is billed at $5 per web ACL per month, plus $1 per rule per ACL per month for the configuration. There are additional usage charges of $0.60 per million web requests.
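To put the pricing in perspective: at those rates, a single web ACL with two rules handling 10 million requests in a month works out to roughly $5 + (2 x $1) + (10 x $0.60) = $13 for that month, before any ALB or CloudFront charges.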

The AWS WAF can be used with an AWS Application Load Balancer or a CloudFront distribution. If your application is hosted on-prem, you could still leverage the AWS WAF by integrating a CloudFront distribution with your application.

There are several parts to deploying a WAF for your application.

  • Conditions – You'll build a condition to identify the type of traffic or web call being made. This could be a source IP address, a regular expression, a SQL filter, etc. The job here is to identify the types of requests that an action will be taken on.
  • Rules – Rules determine whether traffic identified by a condition (or a catch-all) is allowed or blocked. For example, a rule might block IP addresses identified by Condition 1, block traffic from a specific geography identified by Condition 2, and allow any other traffic by default.
  • Web ACL – A group of rules can be added to a Web ACL and the Web ACL is attached to a resource such as an Application Load Balancer or CloudFront Distribution.

Setup Through the Console

The examples below will use a very basic website behind an AWS application load balancer through the AWS console. To begin, navigate to the AWS WAF and Shield services. A familiar getting started screen will show up where you can add your WAF by clicking the “Go to AWS WAF” button.

When the WAF screen opens, click the “Configure web ACL” button which will start the process of walking us through creating conditions and rules as well as the Web ACL.

The first screen gives you an idea of what will be created and how you might set it up. This screen is informational, so read it and, when you're ready, click the "Next" button.

Let's create a Web ACL. I've named mine HollowACL, and a CloudWatch metric will be created as well that shows the statistics for this ACL in the CloudWatch console. Note: it may be useful to keep these names the same for tracking purposes.

Select the region this will be available in. If you're using CloudFront, the region should be "Global"; if you're using an ALB, select the region where your ALB is located. After you select the region, you should be able to select the ALB to associate the WAF ACL with, and then click the "Next" button.

Now it’s time to create the conditions. I’m keeping this simple and will geo-block requests coming from the United States for giggles and grins. Under the Geo match conditions type click the “Create condition” button to create a new condition. Depending on your own requirements, you may have to choose a different condition type which ultimately would ask for different things as part of the rule.

Give the condition a name and again select a region. Since I selected a geo match condition I’ll need to identify which country to block. When done, be sure to click the “Add location” button to add it to the condition.

Now that our condition(s) are created, let's move on to rules. Click the "Create rule" button.

When you create the rule, give it a name and again a CloudWatch metric so we can review the activity later. Select either a "regular" or "rate-based" rule, depending on whether the rule should always apply or only kick in above a request rate. Note: rate-based rules are good for brute-force attacks, where the first couple of attempts are allowed and a block is triggered once there are too many.

Under the conditions, select "does" or "does not" for a matching condition, then the type of condition (in this case it's a geo rule), and then which condition of that type you're matching. Add further conditions if needed.

 

We're taken back to the web ACL screen, where we select whether traffic that matches the rule should be allowed, blocked, or counted. Count is used to monitor matching traffic without taking action on it. You should also select a default action of allow or block for anything that doesn't match a rule. Click the "Review and create" button.

Review the settings and then click the “Confirm and create” button.

 

The Results

First things first, did it work? Let's try a request to access the web application from a US location (my desktop). We get a 403 error, meaning we found a live service but were denied access.

If we look back at the WAF Console, we can select our ACL and see a graph of the metrics we’re looking for. We can also see some samples of the requests that match the rule.

 

By looking at the CloudWatch portal, we can see even more details and we can create alarms (and subsequently take action on those alarms) if we see fit to do so.

 

WAF Through Code

As with most things AWS, you can deploy your WAF conditions, rules, and ACLs through CloudFormation. Below is an example of a simple block-IP rule deployed through CloudFormation. The code should get you started, but it only includes a sample load balancer, a web ACL, an IPSet, a rule, and an association. Be aware that WAF objects used with a load balancer are denoted by resource types of the form "AWS::WAFRegional::SOMETHING", whereas WAF objects for CloudFront use "AWS::WAF::SOMETHING".

"Resources": {

      "HollowWebLB1": {
          "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
          "Properties" : {
              "Name" : "HollowWebLB1",
              "Scheme" : "internet-facing",
              "SecurityGroups" : [{"Ref": "InstanceSecurityGroups"}],
              "Subnets" : [{"Ref":"Web1Subnet"}, {"Ref":"Web2Subnet"}],
              "Type" : "application"
          }
      },

      "HollowACL": {
        "Type": "AWS::WAFRegional::WebACL",
        "Properties": {
          "Name": "HollowACL",
          "DefaultAction": {
            "Type": "ALLOW"
          },
          "MetricName" : "HollowACL",
          "Rules": [
            {
              "Action" : {
                "Type" : "BLOCK"
              },
              "Priority" : 1,
              "RuleId" : { "Ref": "HollowRule" }
            }
          ]
        }
      },

      "WAFBlacklistSet": {
        "Type": "AWS::WAFRegional::IPSet",
        "Properties": {
          "Name": {
            "Fn::Join": [" - ", [{
              "Ref": "AWS::StackName"
            }, "Blacklist Set"]]
          },
          "IPSetDescriptors": [
            {
              "Type": "IPV4",
              "Value" : { "Ref" : "MyIPSetBlacklist" }
            }
          ]
        }
      },

      "HollowRule": {
        "Type" : "AWS::WAFRegional::Rule",
        "Properties": {
          "Name" : "HollowRule",
          "MetricName" : "MyIPRule",
          "Predicates" : [
            {
              "DataId" : { "Ref" : "WAFBlacklistSet" },
              "Negated" : false,
              "Type" : "IPMatch"
            }
          ]
        }
      },

      "ACLAssociation": {
        "Type": "AWS::WAFRegional::WebACLAssociation",
        "Properties": {
          "ResourceArn": {"Ref": "HollowWebLB1"},
          "WebACLId": {"Ref" : "HollowACL"}
        }
      }



    }
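If you want to try a template like this, it can be launched from the CLI once you've added Parameters entries for the values the snippet references (MyIPSetBlacklist, InstanceSecurityGroups, Web1Subnet, Web2Subnet). The stack name, file name, and parameter values below are placeholders, so swap in your own:

aws cloudformation create-stack \
  --stack-name hollow-waf-demo \
  --template-body file://waf-demo.json \
  --parameters ParameterKey=MyIPSetBlacklist,ParameterValue=203.0.113.0/24 \
               ParameterKey=InstanceSecurityGroups,ParameterValue=sg-0123456789abcdef0 \
               ParameterKey=Web1Subnet,ParameterValue=subnet-aaaa1111 \
               ParameterKey=Web2Subnet,ParameterValue=subnet-bbbb2222

# Check on the stack until it reports CREATE_COMPLETE
aws cloudformation describe-stacks --stack-name hollow-waf-demo --query "Stacks[0].StackStatus"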

Summary

The AWS WAF should likely be part of your perimeter security strategy. Applications built for the cloud should sit behind a load balancer and span multiple AZs for availability, and if you're using native AWS services that probably means an ALB. Why not add some additional protection with a WAF as well? If you aren't sure how to create the rules you need, check out the AWS Marketplace, where there are pre-defined rule sets for many applications. Try them yourself and see what you come up with.

Using AWS CodeDeploy to Push New Versions of your Application

Getting new code onto our servers can be done in a myriad of ways these days. Configuration management tools can pull down new code, pipelines can run scripts across our fleets, or we could run around with a USB stick for the rest of our lives. With container based apps, serverless functions, and immutable infrastructure, we’ve changed this conversation quite a bit as well. But what about a plain old server that needs a new version of code deployed on it? AWS CodeDeploy can help us to manage our software versions and rollbacks so that we have a consistent method to update our apps across multiple instances. This post will demonstrate how to get started with AWS CodeDeploy so that you can manage the deployment of new versions of your apps.

Setup IAM Roles

Before we start, I’ll assume that you’ve got a user account with administrator permissions so that you can deploy the necessary roles, servers and tools. After this, we need to start by setting up some permissions for CodeDeploy. First, we need to create a service role for CodeDeploy so that it can read tags applied to instances and take some actions for us. Go to the IAM console and click on the Roles tab. Then click “Create role”.

Choose AWS service for the trusted entity and then choose CodeDeploy.

After this, select the use case. For this post we’re deploying code on EC2 instances and not Lambda code, so select the “CodeDeploy” use case.

 

On the next screen choose the AWSCodeDeployRole policy.

On the last screen give it a descriptive name.

Now that we have a role, we need to add a new policy. While still in the IAM console, choose the policies tab and then click the “Create policy” button.

In the create policy screen, choose the JSON tab and enter the JSON seen below. This policy allows the assumed role to read from all S3 buckets. We’ll be attaching this policy to an instance profile and eventually our servers.
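A minimal version of that policy, granting read-only access to all S3 buckets, is sketched below; you could tighten the Resource down to just your deployment bucket if you'd like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}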

On the last screen, enter a name for the policy and then click the “Create policy” button.

 

 

Let’s create a second role now for this new policy.

This time select the “EC2” service so that our servers can access the S3 buckets.

 

On the attach permissions policies screen, select the policy we just created.

On the last screen give the role a name and click the “Create role” button.

 

Deploy Application Servers

Now that we’ve got that pesky permissions thing taken care of, it’s time to deploy our servers. You can deploy some EC2 instances any way you want, (I prefer CloudFormation personally) but for this post I’ll show you the important pieces in the AWS Console. Be sure to deploy your resources with the IAM role we created in the section prior to this. This instance profile gives the EC2 instance permissions to read from your S3 bucket where your code is stored.

As part of your server build, you’ll need to install the CodeDeploy agent. You can do this manually, or a better way might be to add the code below to the EC2 UserData field during deployment. NOTE: the [bucket-name] field comes from a pre-set list of buckets from AWS and is based on your region. See the list from here: https://docs.aws.amazon.com/codedeploy/latest/userguide/resource-kit.html#resource-kit-bucket-names

#!/bin/bash
yum -y update
yum install -y ruby
cd /home/ec2-user
# replace bucket-name with the region-specific CodeDeploy bucket from the list above (e.g., aws-codedeploy-us-east-1)
curl -O https://bucket-name.s3.amazonaws.com/latest/install
chmod +x ./install
./install auto

 

Also, when you're deploying your servers, you'll want to add a tag that the CodeDeploy service can reference. This tag identifies which servers should receive the updates we'll push later. For this example, I'm using a tag named "App" with a value of "HollowWeb". I'm deploying a pair of servers in different AZs behind a load balancer to ensure I've got excellent availability. Each of the servers will have this tag.

Once the servers are deployed, you'll want to deploy an app to make sure it's all up and running correctly. NOTE: you could deploy the app for the first time through CodeDeploy if you'd like. I'm purposely not doing that here; the first deployment isn't as interesting as an update, so I've omitted it to keep this blog post to a reasonable length.

You can see my application is deployed by hitting the instance from a web browser. (Try not to be too impressed by version 0 here)

You can see from the EC2 console that I've created a target group for my load balancer, and my two EC2 instances are associated with it and in a healthy state.

 

Create the App in CodeDeploy

Now it’s finally time to get to the meat of this post and talk through CodeDeploy. The first thing we’ll do is create an application within the CodeDeploy console. When you first open the CodeDeploy console from the AWS portal, you’ll probably see the familiar getting started page. Click that get started button and let’s get down to business.

You can do a sample deployment if you want to, but that’d hide some of the good bits, so we’ll choose a custom deployment instead. Click the “Skip Walkthrough” button.

Give your application a name that you’ll recognize. Then in the dropdown, select EC2/On-premises. This tells CodeDeploy that we’ll be updating servers, but we could also use this for Lambda functions if we wished. Then give the deployment group a name. This field will identify the group of servers that are part of the deployment.

Next up, you’ll select your deployment type. I’ve chosen an in-place deployment meaning that my servers will stay put, but my code will be copied on top of the existing server. Blue/green deployments are also available and would redeploy new instances during the deployment.

Next, we configure our environment. I’ve selected the Amazon EC2 instances tab and then entered that key/value pair from earlier in this post that identifies my apps. Remember the “App:HollowWeb” tag from earlier? Once you enter this, the console should show you the instances associated with this tag.

I’ve also checked the box to “Enable load balancing.” This is an optional setting for In-Place upgrades but mandatory for Blue/Green deployments. With this checked, CodeDeploy will block traffic to the instances currently being deployed until they are done updating and then they’ll be re-added to the load balancer.

Now you must select a deployment configuration. This tells CodeDeploy how to update your servers. Out of the box you can have it do:

  • One at a time
  • Half at a time
  • All at once

You can also create your own configuration if you have custom requirements not met by the defaults. For this example, I’m doing one at a time. You’ll then need to select a service role that has access to the instances, which we created early on during this blog post. Click the “Create Application” button to move on.
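As an example of that, a custom configuration can be created ahead of time from the CLI; the config name and the 75 percent threshold below are made up for illustration:

aws deploy create-deployment-config \
  --deployment-config-name ThreeQuartersHealthy \
  --minimum-healthy-hosts type=FLEET_PERCENT,value=75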

You should get a nice green “Congratulations!” message when you’re done. This message is pretty helpful and shows you the next steps to pushing your application.

Push your Code to S3

OK, now I'm going to push my code to S3 so that I can store it as a ready-to-go package. To do this, I'm opening up my development machine (my Mac laptop) and updating my code. I've got a few changes to my website, including a new graphic and a new index.html page. Also, within this repo, I'm going to create an appspec.yml file, which is how we tell CodeDeploy what to do with our new code. Take a look at the repo with my files and the appspec.yml file.

On my mac, I’ve got a directory with my appspec.yml in the root and two folders, content and scripts. I’ve placed my images and html files in the content directory, and I’ve put two bash scripts in the scripts directory. The scripts are very simple and tell the apache server to start or stop, depending on which script is called.
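For reference, on an Amazon Linux box running Apache those two scripts can be as simple as the sketches below (the httpd service name is an assumption about the web server in use):

scripts/start_server.sh:

#!/bin/bash
# start the Apache web server
service httpd start

scripts/stop_server.sh:

#!/bin/bash
# stop the Apache web server
service httpd stop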

Now take a look at the appspec.yml details. The file is broken down into sections. There is a "files" section that describes where the files should be placed on our web servers. For example, you can see that the content/image001.png file from my repo should be placed in the /var/www/html directory on the web server.

version: 0.0
os: linux 
files:
  - source: content/image001.png
    destination: /var/www/html/
  - source: content/index.html
    destination: /var/www/html/
hooks:
  ApplicationStop: 
    - location: scripts/stop_server.sh
      timeout: 30
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 30
      runas: root

Below this, you’ll see a “hooks” section. The hooks tell CodeDeploy what to do during each of the lifecycle events that occur during an update. There are a bunch of them as shown below.
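For an in-place deployment like this one, the main lifecycle events run in roughly this order (events marked reserved are handled by the agent and can't be hooked):

  • ApplicationStop
  • DownloadBundle (reserved)
  • BeforeInstall
  • Install (reserved)
  • AfterInstall
  • ApplicationStart
  • ValidateService

Deployments that use a load balancer also get BlockTraffic and AllowTraffic events (with Before/After hooks) wrapped around those steps.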

I don’t need to use each of the lifecycle events for this demonstration, so I’m only using ApplicationStop and ApplicationStart. In the appspec.yml file I’ve defined the user who should execute the scripts and the location of the script to run.

TIP: You may find that the very first time you deploy your code, the ApplicationStop script won't run. This is because the code has never been downloaded before, so the script isn't on the instance yet. Subsequent runs use the previously downloaded script, so if you change this script it may take one more deployment before your change actually takes effect.

Since our new application looks tip top, it’s time to push it to our S3 bucket in AWS. We’ll run the command shown to us in the console earlier and specify the source location of our files and a destination for our archive file.

aws deploy push \
  --application-name HollowWebApp \
  --description "Version1" \
  --ignore-hidden-files \
  --s3-location s3://mybucketnamehere/AppDeploy1.zip \
  --source .

My exact command is shown below, along with the returned output. The output tells you how to push the deployment to your servers, and I recommend using it. In this post, though, we'll push the code from the console so we can easily see what happens.

Once the code has been successfully pushed we’ll do a quick check to show that the zip file is in our S3 bucket, and it is!

Deploy the New Version

As mentioned, you can deploy your code to your servers using the command line response you got when pushing your code to S3. To make it more obvious what's happening, let's take a look at the CodeDeploy console instead. You'll notice that if you open up your application there is a "Revision" listed. As you push more versions to your S3 bucket, this list will grow.

We’re ready to deploy, so click the arrow next to your revision to expand the properties. Click the “Deploy revision” button to kick things off.

Most of the information should be filled out for you on the next page, but it gives you a nice opportunity to tweak something before the code gets pushed. I, for example, selected the "Overwrite Files" option so that when I push out a new index.html it overwrites the existing file instead of failing the deployment with an error.

As your deployment is kicked off you can watch the progress in the console. To get more information, click the Deployment ID to dig deeper.

When we drill down into the deployment, we can see that one of my servers is “In progress” while the other is pending. Since I’m doing one at a time, only one of the instances will update for now. To see even more information about this specific instance deploy, click the “View events” link.

When we look at the events, we can see each of the lifecycle events that the deployment goes through. I’ve waited for a bit to show you that each event was successful.

When we go back to the deployment screen, we see that one server is done and the next server has started its progression.

When both servers have completed, I check my app again, and I can see that a new version has been deployed. (A slightly better, yet still awful version)

How to Setup Amazon EKS with Mac Client

We love Kubernetes. It’s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster for you to deploy your applications.

EKS Environment Setup

To get started, we’ll need to deploy an IAM Role in our AWS account that has permissions to manage Kubernetes clusters on our behalf. Once that’s done, we’ll deploy a new VPC in our account to house our EKS cluster. To speed things up, I’ve created a CloudFormation template to deploy the IAM role for us, and to call the sample Amazon VPC template. You’ll need to fill in the parameters for your environment.

NOTE: Be sure you’re in a region that supports EKS. As of the time of this writing the US regions that can use EKS are us-west-2 (Oregon) and us-east-1 (N. Virginia).

AWSTemplateFormatVersion: 2010-09-09
Description: 'EKS Setup - IAM Roles and Control Plane Cluster'

Metadata:

  "AWS::CloudFormation::Interface":
    ParameterGroups:
      - Label:
          default: VPC
        Parameters:
          - VPCCIDR
      - Label:
          default: Subnets
        Parameters:
          - Subnet01Block
          - Subnet02Block
          - Subnet03Block

Parameters:

  VPCCIDR:
    Type: String
    Description: VPC CIDR Address
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."

  Subnet01Block:
    Type: String
    Description: Subnet01
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."

  Subnet02Block:
    Type: String
    Description: Subnet02
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."

  Subnet03Block:
    Type: String
    Description: Subnet03
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."


Resources:

  EKSRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          -
            Effect: "Allow"
            Principal:
              Service:
                - "eks.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
        - arn:aws:iam::aws:policy/AmazonEKSServicePolicy
      Path: "/"
      RoleName: "EKSRole"

  EKSVPC:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      Parameters:
        VpcBlock: !Ref VPCCIDR
        Subnet01Block: !Ref Subnet01Block
        Subnet02Block: !Ref Subnet02Block
        Subnet03Block: !Ref Subnet03Block
      TemplateURL: https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-vpc-sample.yaml
      TimeoutInMinutes: 10


Outputs:

  EKSRole:
    Value: !Ref EKSRole

  StackRef:
    Value: !Ref EKSVPC

  EKSSecurityGroup:
    Value: !GetAtt EKSVPC.Outputs.SecurityGroups

  EKSVPC:
    Value: !GetAtt EKSVPC.Outputs.VpcId

  EKSSubnets:
    Value: !GetAtt EKSVPC.Outputs.SubnetIds

Use the template above and go to the AWS CloudFormation console within your account. Deploy the template and fill in your VPC addressing for a new VPC and subnets.

After you’ve deployed the template, you’ll see two stacks that have been created. An IAM role and the EKS VPC/subnets.

Once the stack has been completed take note of the outputs which will be used for creating the cluster.

 

Create the Amazon EKS Control Cluster

Now that we've got an environment to work with, it's time to deploy the Amazon EKS control cluster. To do this, go to the AWS Console and open the EKS service.

 

When you open the EKS console, you’ll notice that you don’t have any clusters created yet. We’re about to change that. Click the “Create Cluster” button.

Fill out the information about your new cluster. Give it a name and select the version. Next select the IAM Role, VPC, Subnets and Security Groups for your Kubernetes Control Plane cluster. This info can be found in the outputs from your CloudFormation Template used to create the environment.

 

You will see that your cluster is being created. This may take some time, so you can continue with this post to make sure you’ve installed some of the other tools that you’ll need to manage the cluster.

Eventually your cluster will be created and you’ll see a screen like this:

Setup the Tools

You’ll need a few client tools installed in order to manage the Kubernetes cluster on EKS. You’ll need to have the following tools installed:

  • AWS CLI v1.15.32 or higher
  • Kubectl
  • Heptio-authenticator-aws

The instructions below are to install the tools on a Mac OS client.

  • AWS CLI – The easiest way to install the AWS CLI on a Mac is to use Homebrew. If you've already got Homebrew installed on your Mac, skip this step. Otherwise, run the following from a terminal to install it.

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Once Homebrew is installed, you can use it to install the AWS CLI by running:

brew install awscli

After the AWS CLI has been installed, you'll need to configure it with your access keys, secret keys, region, and output format. You can start this process by running aws configure.

  • kubectl – To install the Amazon EKS-vended kubectl binary, download the executable for Mac from your terminal.

curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/darwin/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin  # make sure $HOME/bin exists before copying the binary into it
cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile

We can make sure kubectl is working properly by checking the version from the terminal:

kubectl version -o yaml

The result should look something like this:

  • Heptio-authenticator-aws – The Heptio Authenticator is used to integrate your AWS IAM settings with your Kubernetes RBAC permissions. To install this, run the following from your terminal

curl -o heptio-authenticator-aws https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/darwin/amd64/heptio-authenticator-aws
chmod +x ./heptio-authenticator-aws
mkdir -p $HOME/bin  # make sure $HOME/bin exists before copying the binary into it
cp ./heptio-authenticator-aws $HOME/bin/heptio-authenticator-aws && export PATH=$HOME/bin:$PATH
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile

Configure kubectl for EKS

Now that the tools are setup and the control cluster is deployed, we need to configure our kubeconfig file to use with EKS. To do this, the first thing to do is determine the cluster endpoint. You can do this by clicking on your cluster in the GUI and noting the server endpoint and the certificate information.

Note: you could also find this through the aws cli by:

aws eks describe-cluster --name [clustername] --query cluster.endpoint

aws eks describe-cluster --name [clustername] --query cluster.certificateAuthority.data

Next, create a new directory called .kube if it doesn't already exist. Once that's done, create a new file in that directory named "config-[clustername]"; in my case I'll create a file called "config-theithollowk8s". Copy and paste the text below into the config file and modify the endpoint-url, base64-encoded-ca-cert, and cluster-name fields with the information we collected above. You may also un-comment (remove the "#" signs) the settings for an IAM role and AWS profile if you're using named profiles for your AWS CLI configuration. Those fields are optional.

apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        # - "-r"
        # - "<role-arn>"
      # env:
        # - name: AWS_PROFILE
        #   value: "<aws-profile>"

After you've created the config file, you'll want to add an environment variable so that kubectl knows where to find the cluster configuration. On a Mac this can be done by running the command below. Substitute your own file path for the config file that you created.

export KUBECONFIG=$KUBECONFIG:~/.kube/config-[clustername]

If things are working correctly, we can run kubectl config get-contexts so we can see the AWS authentication is working. I’ve also run kubectl get svc to show that we can read from the EKS cluster.

Deploy EKS Worker Nodes

Your control cluster is up and running, and we've got our clients connected through the Heptio authenticator. Now it's time to deploy some worker nodes for our containers to run on. To do this, go back to your CloudFormation console and deploy the following CFn template that is provided by AWS.

https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-nodegroup.yaml

 

Fill out the deployment information. You’ll need to enter a stack name, number of min/max nodes to scale within, an SSH Key to use, the VPC we deployed earlier, and the subnets. You will also need to enter a ClusterName which must be exactly the same as our control plane cluster we deployed earlier. Also, the NodeImageID must be one of the following, depending on your region:

  • US-East-1 (N. Virginia) – ami-dea4d5a1
  • US-West-2 (Oregon) – ami-73a6e20b

Deploy your CloudFormation template and wait for it to complete. Once the stack completes, you’ll need to look at the outputs to get the NodeInstanceRole.
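If you'd rather pull that output from the CLI, something like this works (substitute whatever you named your worker node stack):

aws cloudformation describe-stacks \
  --stack-name eks-worker-nodes \
  --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" \
  --output text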

The last step is to ensure that our nodes have permissions and can join the cluster. Use the file format below and save it as aws-auth-cm.yaml.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Replace ONLY the <ARN of instance role (not instance profile)> section with the NodeInstanceRole we got from the outputs of our CloudFormation Stack. Save the file and then apply the configmap to your EKS cluster by running:

kubectl apply -f aws-auth-cm.yaml

After we run the command, our cluster should be fully working. We can run "kubectl get nodes" to see the worker nodes listed in the cluster.

NOTE: the status of the nodes will initially show "NotReady". Re-running the command or using the --watch switch will let you see when the nodes are fully provisioned.

Deploy Your Apps

Congratulations, you've built your Kubernetes cluster on Amazon EKS. It's time to deploy your apps, which is outside the scope of this blog post. If you want to try an app to prove that it's working, try one of the deployments from the kubernetes.io tutorials, such as their guestbook app.

kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml

From here on out, it’s up to you. Start deploying your replication controllers, pods, services, etc as you’d like.

 

How to Setup Amazon EKS with Windows Client

We love Kubernetes. It’s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster for you to deploy your applications.

EKS Environment Setup

To get started, we’ll need to deploy an IAM Role in our AWS account that has permissions to manage Kubernetes clusters on our behalf. Once that’s done, we’ll deploy a new VPC in our account to house our EKS cluster. To speed things up, I’ve created a CloudFormation template to deploy the IAM role for us, and to call the sample Amazon VPC template to deploy a VPC. You’ll need to fill in the parameters for your environment.

NOTE: Be sure that you’re in a region that supports EKS. As of the time of this writing the US regions that can use EKS are us-west-2 (Oregon) and us-east-1 (N. Virginia).

AWSTemplateFormatVersion: 2010-09-09
Description: 'EKS Setup - IAM Roles and Control Plane Cluster'

Metadata:

  "AWS::CloudFormation::Interface":
    ParameterGroups:
      - Label:
          default: VPC
        Parameters:
          - VPCCIDR
      - Label:
          default: Subnets
        Parameters:
          - Subnet01Block
          - Subnet02Block
          - Subnet03Block

Parameters:

  VPCCIDR:
    Type: String
    Description: VPC CIDR Address
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."

  Subnet01Block:
    Type: String
    Description: Subnet01
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."

  Subnet02Block:
    Type: String
    Description: Subnet02
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."

  Subnet03Block:
    Type: String
    Description: Subnet03
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."


Resources:

  EKSRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          -
            Effect: "Allow"
            Principal:
              Service:
                - "eks.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
        - arn:aws:iam::aws:policy/AmazonEKSServicePolicy
      Path: "/"
      RoleName: "EKSRole"

  EKSVPC:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      Parameters:
        VpcBlock: !Ref VPCCIDR
        Subnet01Block: !Ref Subnet01Block
        Subnet02Block: !Ref Subnet02Block
        Subnet03Block: !Ref Subnet03Block
      TemplateURL: https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-vpc-sample.yaml
      TimeoutInMinutes: 10


Outputs:

  EKSRole:
    Value: !Ref EKSRole

  StackRef:
    Value: !Ref EKSVPC

  EKSSecurityGroup:
    Value: !GetAtt EKSVPC.Outputs.SecurityGroups

  EKSVPC:
    Value: !GetAtt EKSVPC.Outputs.VpcId

  EKSSubnets:
    Value: !GetAtt EKSVPC.Outputs.SubnetIds

Use the template above and go to the AWS CloudFormation console within your account. Deploy the template and fill in your VPC addressing for a new VPC and subnets.

After you’ve deployed the template, you’ll see two stacks that have been created. An IAM role and the EKS VPC/subnets.

Once the stack has been completed take note of the outputs which will be used for creating the cluster.

 

Create the Amazon EKS Control Cluster

Now that we've got an environment to work with, it's time to deploy the Amazon EKS control cluster. To do this, go to the AWS Console and open the EKS service.

 

When you open the EKS console, you’ll notice that you don’t have any clusters created yet. We’re about to change that. Click the “Create Cluster” button.

Fill out the information about your new cluster. Give it a name and select the version. Next select the IAM Role, VPC, Subnets and Security Groups for your Kubernetes Control Plane cluster. This info can be found in the outputs from your CloudFormation Template used to create the environment.

 

You will see that your cluster is being created. This may take some time, so you can continue with this post to make sure you’ve installed some of the other tools that you’ll need to manage the cluster.

Eventually your cluster will be created and you’ll see a screen like this:

Setup the Tools

You’ll need a few client tools installed in order to manage the Kubernetes cluster on EKS. You’ll need to have the following tools installed:

  • AWS CLI v1.15.32 or higher
  • Kubectl
  • Heptio-authenticator-aws

The instructions below are to install the tools on a Windows client.

  • AWS CLI – To install the AWS CLI, download and run the installer for your version of Windows (64-bit version, 32-bit version). Once you've completed the installer, you'll need to configure your client with the appropriate settings such as region, access keys, secret keys, and an output format. This can be accomplished by opening a cmd prompt and running aws configure. Enter access keys and secret keys with permissions to your AWS resources.
  • kubectl – Install the Amazon EKS-vended kubectl binary for Windows and place it in a directory that's on your PATH. You can verify that it's working by checking the version from a cmd prompt:

kubectl version -o yaml

The result should look something like this:

  • Heptio-authenticator-aws – The Heptio Authenticator is used to integrate your AWS IAM settings with your Kubernetes RBAC permissions. Install the Windows binary, add it to your PATH, and verify it by running:

heptio-authenticator-aws --help

The result of that command should return info about your options.

 

Configure kubectl for EKS

Now that the tools are setup and the control cluster is deployed, we need to configure our kubeconfig file to use with EKS. To do this, the first thing to do is determine the cluster endpoint. You can do this by clicking on your cluster in the GUI and noting the server endpoint and the certificate information.

Note: you could also find this through the aws cli by:

aws eks describe-cluster --name [clustername] --query cluster.endpoint

aws eks describe-cluster --name [clustername] --query cluster.certificateAuthority.data

Next, create a new directory called .kube if it doesn't already exist. Once that's done, create a new file in that directory named "config-[clustername]"; in my case I'll create a file called "config-theithollowk8s". Copy and paste the text below into the config file and modify the endpoint-url, base64-encoded-ca-cert, and cluster-name fields with the information we collected above. You may also un-comment (remove the "#" signs) the settings for an IAM role and AWS profile if you're using named profiles for your AWS CLI configuration. Those fields are optional.

apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        # - "-r"
        # - "<role-arn>"
      # env:
        # - name: AWS_PROFILE
        #   value: "<aws-profile>"

After you've created the config file, you'll want to add an environment variable so that kubectl knows where to find the cluster configuration. On Windows this can be done by running the command below. Substitute your own file path for the config file that you created.

set KUBECONFIG=C:\FilePATH\.kube\config-theithollowk8s.txt
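Note that set only lasts for the current command prompt session. If you want the variable to stick around for future sessions, setx writes it to your user environment (it takes effect in newly opened prompts):

setx KUBECONFIG C:\FilePATH\.kube\config-theithollowk8s.txt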

If things are working correctly, we can run kubectl config get-contexts so we can see the AWS authentication is working. I’ve also run kubectl get svc to show that we can read from the EKS cluster.

Deploy EKS Worker Nodes

Our control cluster is up and running, and we've got our clients connected through the Heptio authenticator. Now it's time to deploy some worker nodes for our containers to run on. To do this, go back to your CloudFormation console and deploy the following CFn template that is provided by AWS.

https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-nodegroup.yaml

 

Fill out the deployment information. You’ll need to enter a stack name, number of min/max nodes to scale within, an SSH Key to use, the VPC we deployed earlier, and the subnets. You will also need to enter a ClusterName which must be exactly the same as our control plane cluster we deployed earlier. Lastly, the NodeImageID must be one of the following, depending on your region:

  • US-East-1 (N. Virginia) – ami-dea4d5a1
  • US-West-2 (Oregon) – ami-73a6e20b

Deploy your CloudFormation template and wait for it to complete. Once the stack completes, you’ll need to look at the outputs to get the NodeInstanceRole.

The last step is to ensure that our nodes have permissions and can join the cluster. Use the file format below and save it as aws-auth-cm.yaml.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Replace ONLY the <ARN of instance role (not instance profile)> section with the NodeInstanceRole we got from the outputs of our CloudFormation stack. Save the file and then apply the configmap to your EKS cluster by running:

kubectl apply -f aws-auth-cm.yaml

After we run the command, our cluster should be fully working. We can run "kubectl get nodes" to see the worker nodes listed in the cluster.

NOTE: the status of the nodes will initially show "NotReady". Re-running the command or using the --watch switch will let you see when the nodes are fully provisioned.

Deploy Your Apps

Congratulations, you've built your Kubernetes cluster on Amazon EKS. It's time to deploy your apps, which is outside the scope of this blog post. If you want to try an app to prove that it's working, try one of the deployments from the kubernetes.io tutorials, such as their guestbook app.

kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml

From here on out, it’s up to you. Start deploying your replication controllers, pods, services, etc as you’d like.

 

Easy Snapshot Automation with Amazon Data Lifecycle Manager

Amazon has announced a new service that will help customers manage their EBS volume snapshots in a very simple manner. The Data Lifecycle Manager service lets you setup a schedule to snapshot any of your EBS volumes during a specified time window.

In the past, AWS customers might need to come up with their own solution for snapshots or backups. Some apps moving to the cloud might not even need backups based on their deployment method and architectures. For everything else, we assume we’ll need to at least snapshot the EBS volumes that the EC2 instances are running on. Prior to the Data Lifecycle Manager, this could be accomplished through some fairly simple Lambda functions to snapshot volumes on a schedule. Now with the new service, there is a solution right in the EC2 console.

Using the Data Lifecycle Manager

To begin using the new service, open the EC2 console in your AWS account. If this is the first time using it, you’ll click the “Create Snapshot Lifecycle Policy” button to get started.

We'll create a new policy, which defines what volumes should be snapshotted and when to take those snapshots. First, give the policy a description so you'll be able to recognize it later. The next piece is to identify which volumes should be snapshotted. This is done using a tag on the volume (not on the EC2 instance it's attached to). I've used a method that snapshots EBS volumes with a tag key of "snap" and a tag value of "true".

Next, we'll need to define the schedule on which the volumes will be snapshotted. Give the schedule a name and then specify how often the snapshots will be taken; in this example, I'm taking a snapshot every 12 hours. You also need to specify when the first snapshots should be initiated. Be sure to note that this time is in UTC, so do your conversion before you start. After this, you'll need to specify how many snapshots to keep. It's a bad idea to take lots of snapshots and never delete them, especially in the cloud, where you can keep as many as you'd like if you can stomach the bill.

Note: The snapshot start time is a general start time. The snapshots will be taken sometime within the hour you specify, but don’t expect that it will be immediately at this time.

 

You'll also have the option to tag your snapshots. It probably makes sense to tag them so that you know which ones were taken manually and which were automated through the Data Lifecycle Manager. I've tagged mine with a key named LifeCycleManager and a value of true.

 

Lastly, you’ll need a role created that has permissions to create and delete these snapshots. Luckily, there is a “Default role” option in the console that will create this for you. Otherwise you can specify the role yourself.

After you create the policy, you'll see it listed in your console. It's also worth noting that you could have multiple policies affecting the same volumes. For instance, if you wanted to take snapshots every 6 hours, you could create a pair of policies, since the highest snapshot frequency currently available is every twelve hours.
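If you'd rather script it, a policy like the one above can also be created through the CLI. The sketch below mirrors the console settings; the account ID, role name, start time, and retention count of 5 are placeholders you'd swap for your own:

aws dlm create-lifecycle-policy \
  --description "Snapshot volumes tagged snap=true every 12 hours" \
  --state ENABLED \
  --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
  --policy-details '{
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "snap", "Value": "true"}],
    "Schedules": [{
      "Name": "Every12Hours",
      "CreateRule": {"Interval": 12, "IntervalUnit": "HOURS", "Times": ["13:00"]},
      "RetainRule": {"Count": 5},
      "TagsToAdd": [{"Key": "LifeCycleManager", "Value": "true"}]
    }]
  }'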

The Results

If you wait a bit, your snapshots should be taken, and you'll notice that any of your EBS volumes that were properly tagged have been snapshotted. You can also see in the screenshot below that the snapshot has the tag I specified, along with a few others that identify which policy created the snapshot.

Summary

The Data Lifecycle Manager service from AWS might not seem like a big deal, but it's a lot nicer than having to write your own Lambda functions to take and delete snapshots on a schedule. Don't worry though, you might still get some use out of your old Lambda code if you want to customize your snapshot methods or do something like create an AMI. If you're just looking for the basics, try out the Data Lifecycle Manager. Right now you can test it yourself in the N. Virginia, Oregon, and Ireland regions through the AWS console or the CLI. I expect it will be available in other regions and through CloudFormation shortly as well.

Should I use a Transit VPC in AWS?

A common question that comes up during AWS designs is, “Should I use a transit VPC?” The answer, like all good IT riddles is, “it depends.” There are a series of questions that you must ask yourself before deciding whether to use a Transit VPC or not. In this post, I’ll try to help formulate those questions so you can answer this question yourself.

The Basics

Before we can ask those tough questions, we first should answer the question, “What is a Transit VPC?” Well, a transit VPC acts as an intermediary for routing between two places. Just like a transit network bridges traffic between two networks, a transit VPC ferries traffic between two VPCs or perhaps your data center.

There isn’t a product that you buy called a transit VPC, but rather a transit VPC is a reference architecture. Multiple products can be used to build this transit VPC, but the really good ones have a method to add some automation to the process. AWS’s website highlights Aviatrix and Cisco solutions, but I’ve also seen Palo Alto firewalls used as well. Really any virtual router should be able to be used with this process, so you pick your favorite solution.

The reference architecture uses a pair of virtual routers split between Availability Zones. Each spoke VPC builds a VPN tunnel to the transit routers so that routing between VPCs (or back to your data center) can be controlled through these routers, which run on EC2 instances.

 

Why Might I want a Transit VPC?

Now that we know what a Transit VPC is, what use cases might warrant me using a transit VPC?

Simplify Networking

If you’ve got a multi-account, multi-VPC strategy for your deployments, connecting all of those VPCs together can be a real mess. If you’re implementing peering connections for a full mesh, the formula to calculate that is: [(n-1)*n]/2. Setting this up and managing it can be a real chore. Take a look at the below example to see how you can quickly get overwhelmed by the number of connections to maintain. Think how this changes every time you add a new VPC.
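To put a number on that: 10 VPCs in a full mesh works out to (10*9)/2 = 45 peering connections, and VPC number eleven adds 10 more to configure.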

Now we can avoid some of this complexity by moving to a hub and spoke model, where the Transit VPC is the hub. We still need to setup connections, but we can at least manage our connections to on-prem through a centralized location. Also, what would happen if we added a new VPC to the hub and spoke model? That’s right, one new connection back to transit instead of modifying all connections across every VPC.

 

 

Funnel Traffic

How do you want to manage your traffic out to the Internet, or in from the Internet? Do you allow traffic in and out of each of your VPCs? You can certainly do just that, but how comfortable are you with having multiple egress or ingress points to your environment? The diagram below shows a full mesh of VPCs again, this time adding a red line for an Internet connection and a connection back to your on-premises data center.

NOTE: refusing to draw diagrams like the one below is another valid use case for moving to a hub/spoke model instead of mesh.

Many times I hear the need to place restrictions on Internet ingress traffic such as packet inspection, IDS/IPS, etc. AWS provides a Web Application Firewall, which is nice, but some corporations will require something more than that. On the flip side, some companies require things like content filtering for all outbound Internet traffic. Does it make sense to deploy content filtering solutions in each of your VPCs, or should you centralize it in a single place, like a Transit VPC? The hub spoke model allows us to funnel all of our traffic through the Transit where a firewall or other device might be able to take action on the traffic. Then a single ingress/egress point can be managed in/out from the Internet and to the corporate data center.

 

Why Shouldn’t I use a Transit VPC?

Costs

A drawback to using a Transit VPC is cost. A Transit VPC will affect your AWS bill in a couple of different ways. The first is obviously the need to run a pair of EC2 instances in the transit VPC, either On-Demand or as Reserved Instances to save on costs. In addition to the EC2 costs, you might also need to purchase a license from Cisco, Aviatrix, etc. to run their software. These costs are pretty easy to calculate once you size your instances appropriately.

A more difficult cost consideration is your network traffic. AWS charges for any network traffic exiting (egressing) a VPC. In the diagram below (left), you can see how this works with a single VPC directly accessing the Internet. On the right side, you can see what happens to egress costs when you have a transit VPC instead: we get billed twice for the traffic because it exits two VPCs. Keep this in mind for inter-VPC traffic as well, between a Shared Services and a Prod environment for example.

 

One other cost consideration is your VPN tunnels. AWS also charges for VPN tunnel connection hours. The transit VPC relies on VPN tunnels to spoke VPCs to provide the overlay networking. These VPN tunnels come with a small charge per month to have them up and running.

 

It's Traditional Data Center Methodology

One of the bigger considerations to contemplate is how you want to operate your cloud. Some of the benefits of cloud are things like Infrastructure as Code (IaC). Developers may be looking to spin up their resources and have them access the Internet through their own means; application code and infrastructure code are coupled together to provide a full solution on their own. If you add a transit VPC to the mix to provide service chaining, for example, the application developers may need to open a ticket to have a network engineer set up NAT on the transit routers, or open up some ACLs within the router so the app can talk to the outside world. This is a common scenario in on-premises environments, so it's nothing new, but do we need to change how we think when moving to the cloud? This is a conversation that should happen early on when developing a cloud strategy.

 

Summary

Transit VPCs are a pretty nifty solution to provide some controls to your AWS cloud. There are pros and cons for using a transit VPC but hopefully this post has shown you what sort of things should be discussed and considered before jumping in to your architecture designs. The table below should help formulate your decisions.

Consideration | No Transit VPC Used | Transit VPC Used
Connections to manage | n(n-1)/2 | n-1
Additional EC2 costs | None | 2 transit routers
Additional egress costs | Pay for egress costs once | Pay for egress costs twice
How to add packet inspection | Deploy a pair of IDS/IPS within each VPC | Deploy a pair of IDS/IPS into the Transit VPC only
Providing access to applications | Configure infrastructure through IaC only | Deploy app and request ACLs/NAT to be configured
Data center connectivity | VPN or Direct Connect setup to each deployed VPC | VPN or Direct Connect setup to the transit VPC only

A transit VPC should be considered early on so that a retrofit isn’t required for your cloud environment.

The post Should I use a Transit VPC in AWS? appeared first on The IT Hollow.

]]>
https://theithollow.com/2018/07/16/should-i-use-a-transit-vpc-in-aws/feed/ 1 8942
Who is Heptio? https://theithollow.com/2018/07/09/who-is-heptio/ https://theithollow.com/2018/07/09/who-is-heptio/#respond Mon, 09 Jul 2018 14:00:53 +0000 https://theithollow.com/?p=8918 There are a dozen new technologies being introduced every day that never amount to anything, while others move on to create completely new methodologies for how we interact with IT. Just like virtualization changed the way data centers operate, containers are changing how we interact with our applications and Kubernetes (K8s in short hand) seems […]

The post Who is Heptio? appeared first on The IT Hollow.

]]>
There are a dozen new technologies being introduced every day that never amount to anything, while others move on to create completely new methodologies for how we interact with IT. Just like virtualization changed the way data centers operate, containers are changing how we interact with our applications, and Kubernetes (K8s in shorthand) seems to be a front runner in this space. However, with any new technology hitting the market, there is a bit of a lag before it takes off. People have to understand why it's needed, who's got the best solution, and how you can make it work with your own environment. Heptio is a startup company focusing on helping enterprises embrace Kubernetes through their open source tools and professional services. I'd been hearing great things about Heptio, but when my good friend, Tim Carr, decided to go work there, I decided that I'd better look into who they are and figure out what he sees in their little startup.

Heptio was co-founded by Craig McLuckie and Joe Beda, who were founding engineers on the Kubernetes project at Google and probably understand Kubernetes better than most (I would assume). They're taking that knowledge and building tools to help customers adopt Kubernetes in a stable, secure, reliable way across platforms.

What Exactly Does Heptio Sell?

Heptio currently sells services to customers who need help with their Kubernetes deployments. This might come in the form of consulting services or their Heptio Kubernetes Subscription (HKS). In the grand scheme of things, Kubernetes is still fairly new. Companies have started developing code on the K8s platform and love how easy it is to run containers on it, but management is still difficult. There are a lot more questions that enterprises need to answer than just, "How do I spin up an application on Kubernetes?" For the enterprise, there are additional considerations, such as: How do we ensure our cluster is backed up to meet our RTO and RPO requirements? How do we ensure that we've got proper role-based access controls in place and separation of duties within our cluster? How do we patch our K8s cluster and get visibility into issues that arise?

Heptio's main product is their Heptio Kubernetes Subscription, or HKS. You pick where you want your Kubernetes cluster to live, such as AWS, GCE, Azure, on-prem, or wherever you'd like. Heptio will manage that cluster using many of the open source tools that they helped create. You make the choices that make sense for your organization for platform portability, and Heptio will make sure it's backed up, conformant with CNCF standards, patched, and updated.

The HKS solution also comes with advisory support so that you can ask questions as you are building your environment and of course, break-fix for when issues arise. If you’re new to managing an enterprise Kubernetes deployment, this is a great way to make sure you’ve got the basics covered.

Where have I heard of Heptio before?

If you’ve worked with Kubernetes very much, you may have used some of their open sourced products, and there are a number of them.

Heptio Ark – Ark is a tool to manage disaster recovery of K8s cluster resources. This tool creates a simple way to recover resources if something happens to your K8s clusters, or if you need a simple way to migrate your resources to a different cluster.
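As a rough illustration of how Ark gets used (the exact commands and flags vary by Ark version, so treat this as a sketch rather than an example from the project docs; the namespace and backup names are hypothetical), backing up and later restoring a namespace looks something like this:

# Back up everything in the "hollowapp" namespace
ark backup create hollowapp-backup --include-namespaces hollowapp

# Later, restore those resources from that backup
ark restore create --from-backup hollowapp-backup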

 

Heptio Contour – Contour helps manage Envoy (an open source proxy created and used by Lyft) as a load balancer for your K8s deployments. It helps deploy Envoy into your environment and keeps it updated as downstream containers change state.

 

Heptio Gimbal – Gimbal is a neat tool that lets you route traffic to one or more of your K8s clusters from the internet, or to legacy systems such as an OpenStack cluster, or both.

 

Heptio Sonobuoy – Sonobuoy might be the best-known project. This tool makes sure that your K8s installation is conformant with the official Kubernetes specifications. The Cloud Native Computing Foundation (CNCF) has set standards for what each K8s deployment must provide to consumers. If two vendors both have Kubernetes distributions, you'd want to ensure that both allow the same set of minimum capabilities so that you can move your containers between the platforms. Sonobuoy is the de facto standard tool for reporting on these conformance results.
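If you want to try it, the typical flow (sketched from the Sonobuoy CLI; check the project docs for the flags in your version) looks roughly like this, run against whatever cluster your current kubeconfig context points at:

# Kick off the conformance tests against the current cluster
sonobuoy run

# Check progress, then pull down the results tarball when the run finishes
sonobuoy status
sonobuoy retrieve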

 

ksonnet – Ksonnet is a neat way to manage your K8s manifests. It provides a quick way to start with your templates by generating much of the code for you. It also has a library so you can use tools like VS Code with auto-complete turned on. This is typically an easier way to manage code than raw YAML or JSON files.

 

Heptio Authenticator for AWS – This was a project completed by both Heptio and AWS to allow Amazon’s IAM service to provide authentication to a Kubernetes cluster. If you’re an AWS customer running K8s, this is a big deal for you.

 

What’s up with the Name?

If you're curious about the company name, you're not alone. Geekwire interviewed Mr. Beda in 2016 and found out that the name is a reference to the original Kubernetes build inside Google before it was open sourced. Before Kubernetes was a thing, Google called it the Borg. When the new project was being named, it was pitched as "Seven of Nine," a reference to a character from Star Trek: Voyager. "Hept" is a Greek prefix for "seven," and thus Heptio is a nod to the original project that McLuckie and Beda helped to create during their time with Google.

 

 

Summary

In a nutshell, there seems to be a bunch of talent behind this little startup, and there has been enough financial backing to make the company take off. Last September Heptio closed a $25M Series B funding round, and that's nothing to shake a stick at. With their propensity to work with the open source community to build new tools and provide expert knowledge to companies on their Kubernetes deployments, there's no telling how far this little startup could go. Good luck to them, and we'll be watching to see where things lead.

 

The post Who is Heptio? appeared first on The IT Hollow.

]]>
https://theithollow.com/2018/07/09/who-is-heptio/feed/ 0 8918
The Dark Side of Stress https://theithollow.com/2018/06/18/the-dark-side-of-stress/ https://theithollow.com/2018/06/18/the-dark-side-of-stress/#comments Mon, 18 Jun 2018 14:06:40 +0000 https://theithollow.com/?p=8902 I took last week off from work to spend some time with my family and just relax. I’d never been to Disney World and have a six year old who is seriously into Star Wars, so this sounded like a great way to take a relaxing week off. During this vacation I found that it […]

The post The Dark Side of Stress appeared first on The IT Hollow.

]]>
I took last week off from work to spend some time with my family and just relax. I'd never been to Disney World and have a six year old who is seriously into Star Wars, so this sounded like a great way to take a relaxing week off. During this vacation I found that it took several days before I even started to unwind. I ended the work week on a Friday and still felt the work stress through the weekend and into Monday. Maybe it's normal to still feel the stress through the weekend, but I had expected an immediate release of tension when I finished work on Friday and my vacation began. Instead, all weekend I kept noticing that I couldn't forget about work. In fact, I felt pretty sick one day and believe it was stress related. After a few days I started to pay attention to the activities of the day and stopped dwelling on work as much, but it made me realize that those two-day weekends certainly weren't recharging me for the next week of stress.

Zeigarnik Effect

I did what any geek would do and googled why I felt like I did, and I found that there is a psychological phenomenon called the Zeigarnik Effect. This effect is a person's tendency to remember unfinished tasks much more easily than completed tasks. A Russian psychologist, Bluma Zeigarnik, started studying the phenomenon after seeing that waiters/waitresses could pretty accurately remember open orders, but couldn't answer questions about orders that had already been completed. The study concluded with Zeigarnik and her professor Kurt Lewin showing that unfinished tasks create a task-specific tension that keeps your brain focused on them, with the tension relieved only when the task is complete.

Coping with Zeigarnik

For me, knowing why my brain was still clinging to work for a few days was helpful, since not knowing why I couldn’t de-stress was also stressing me out. Was I broken? Is there something wrong with me that I obsess over things from work even when I’m taking a break? I found myself actively trying to let go and relax, and the fact that I had to keep forcing myself to do this was also stressing me out. Knowing that there was a reason for my obsessiveness was really helpful. It wasn’t my fault exactly.

Now that we know why we sometimes feel this way, maybe we can counteract the effect or hack our brains to short-circuit it. It seems as though our brains can only hold so many tasks in active memory at a time. So when you're on vacation or at home on the weekend, try doing something that takes some focus. I've found that when I think I'm trying to relax, like watching the Cubs (the best baseball team on the planet) on TV, I still feel stress, but if I'm actually playing baseball or video games, I forget about work for a brief time. A focused activity might be the thing you need to stop thinking about all of those open tasks you've still got to get back to. During my vacation, I got busy doing Disney stuff, like meeting Kylo Ren, and I eventually forgot about work for a little while.

What about when we're not on vacation but need a break, like after a long day of work? Try keeping a list of tasks in a journal or in Jira, Trello, Todoist, or whatever. The act of writing down your list of to-dos has always helped me feel less stressed. Perhaps the act of keeping all of my tasks straight is itself an unresolved task that my brain can't stop thinking about; write them down and that's one task your brain doesn't have to keep obsessing over. Also, if you break those bigger tasks into a bunch of smaller tasks, you'll start crossing them off much faster and hopefully relieving some of your stress. Remember, your list of tasks will still be there for you tomorrow, so stop thinking about them for the evening.

Use Zeigarnik to your Advantage

As an alternative to letting go of your open tasks, you can use those open tasks to get them done. If you are a person who leaves things until the last minute before a deadline and want to break that routine, try starting the task right away, even if you don’t finish it. According to the Zeigarnik effect, you should keep thinking about it until you complete it so you’ll be incentivized to get it done and relieve that task tension.

Schedule breaks in your studying. Remember that if you're in the middle of studying and it isn't finished, taking a break doesn't mean you've stopped thinking about the material. Breaks have been shown to help the learning process and organize thoughts. Have you ever been working very hard on something and finally had to stop, only to have the answer magically pop into your head while you're doing something else? I know this happens to me all the time, so breaks don't mean you're weak; they're good for you and the process.

A Note to Employers/Managers

I know that we can't always help it, but you should know how this psychological phenomenon affects your employees. We learned from The Phoenix Project that work in progress (WIP) is bad for production lines, IT development, and flow in general. But knowing about the Zeigarnik effect, we also know that having too much work in progress affects your employees' stress levels. Difficult tasks are one thing, but too many tasks at once may make your employees burn out or even quit to get rid of the task-related tension. If you're trying to retain employees and improve employee satisfaction, limit your work in progress and you should see faster turnaround times and happier employees.

Summary

Decrease your stress levels by managing your open task items. Complete them quickly, write them down, cross them off, or eliminate them to begin with to reduce your task-based tension. Remember to also add tasks for things you plan to study. Want to learn Cloud, DevOps, Product A, virtualization, containers, etc.? Pick what you plan to learn and add those to your task list; don't leave them in your head.

When you aren’t able to manage your own work in progress, try tricking your brain by engaging in activities that make you focus on another task for a while.

I know I'm not the first person to write about this effect, but I really hope to help someone else in our field cut back on their stress levels. This post from Eric Lee got my attention; there's plenty of stress in our industry, and I hope this helps you reduce yours. Thanks for reading.

 

The post The Dark Side of Stress appeared first on The IT Hollow.

]]>
https://theithollow.com/2018/06/18/the-dark-side-of-stress/feed/ 2 8902
Use Hashicorp’s Vault to Dynamically Create Mysql Credentials https://theithollow.com/2018/06/04/use-hashicorps-vault-to-dynamically-create-mysql-credentials/ https://theithollow.com/2018/06/04/use-hashicorps-vault-to-dynamically-create-mysql-credentials/#respond Mon, 04 Jun 2018 15:26:01 +0000 https://theithollow.com/?p=8881 Passwords are a necessary evil to keep bandits from running away with your confidential data. We’ve come up with various strategies to manage these secrets, such as: Using one password for all of your stuff so you don’t forget it. Use a password vault to store a unique password for each of your logins. Use […]

The post Use Hashicorp’s Vault to Dynamically Create Mysql Credentials appeared first on The IT Hollow.

]]>
Passwords are a necessary evil to keep bandits from running away with your confidential data. We’ve come up with various strategies to manage these secrets, such as:

  • Using one password for all of your stuff so you don’t forget it.
  • Use a password vault to store a unique password for each of your logins.
  • Use a few passwords in a pattern you can remember.
  • Write down your password on a sticky note and attach it to your monitor.

Now, not all of these practices are good but you get the idea.

What do we do about passwords in an enterprise? We should be using unique passwords for every login, and also for every service account. This usually leads to a password vault of some sort, but wouldn't it be more secure if you generated a new password with a set lifetime, and only generated it when it was needed? Hashicorp has a tool called "Vault" that lets us build these dynamic secrets at will so that we can use them with our applications or for temporary user access. For this post, we'll create dynamic logins to a MySQL database so that a Flask app can use them for its database backend. In your lab, you could use this for anything that needs access to a MySQL database, including a user that just needs temporary access.

Vault Prerequisites

Before we can get started, a few resources have already been deployed. First, I've deployed a Vault server and I'm using a Hashicorp Consul server as a backend for Vault. To be totally honest, I've deployed three Vault servers and have Consul installed on those same servers, but your environment may vary depending on your availability and performance requirements. I've also unsealed Vault and logged in with a user that has permissions. Next, I've deployed a MySQL server with an admin user named "vault". Lastly, I've deployed a Flask app on a server and connected it to the MySQL server for its database instance. You can see the basic Flask app below, which accepted a login and a single "task" entry stored in the database.

Configuring Vault for Dynamic Secrets

The boring infrastructure setup stuff is done and we’re ready to configure Vault to dynamically create mysql logins when we need them.

The first thing we want to do is enable the database secrets engine. We can do that by running the following command:

vault secrets enable database

If you’ve got the console open, you’ll notice that you can see this in your web browser:
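If you'd rather verify from the CLI instead of the web console, listing the enabled secrets engines should now show a database/ mount. This is just a quick sanity check, not a step from the original walkthrough:

vault secrets list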

Now that we've enabled the database secrets engine, we need to configure Vault to talk to our MySQL database. To do that we tell the database engine which plugin to use and give it the connection information. Remember, I already created a "vault" user on the MySQL database so that Vault can log in for us.

To setup the configuration we’ll run this from the Vault command line:

vault write database/config/hollowdb \
	plugin_name=mysql-database-plugin \
	connection_url="{{username}}:{{password}}@tcp(mysql.hollow.local:3306)/" \
	allowed_roles="mysqlrole" \
	username="vault" \
	password="QAZxswedc"

The "write database/config/hollowdb" portion is where we store the config within the Vault server. The name of my database is hollowdb, so that's the name I used; what's important is that it's stored under database/config. You'll also notice that there is a connection URL to the server/database, and we've supplied a username and password to fill in there. Don't worry, that password is garbage and has since been deleted. The allowed roles we'll configure in a moment; for now just give it a name.

Now, as you might have guessed, we’re going to configure the role. The role maps a name within Vault to a SQL statement to create the user within the mysql database. The code below is the role that I’ve created.

vault write database/roles/mysqlrole \
    db_name=hollowdb \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT ALL PRIVILEGES ON hollowapp.* TO '{{name}}'@'%';" \
    default_ttl="1h" \
    max_ttl="24h"

In this command, we create a new role that matches our earlier database configuration. Then we add a SQL statement that takes the generated username and [in this case] creates that user in the MySQL database and grants it all privileges on the hollowapp database. Also take note that we have a default time-to-live of 1 hour, which can be extended up to 24 hours. After that, Vault will revoke the credentials.
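As an optional sanity check (not part of the original walkthrough), you can read the role definition back out of Vault to confirm the creation statement and TTLs were stored the way you expect:

vault read database/roles/mysqlrole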

At this point we’re ready to see some of the magic happen.

Testing our Dynamic Secrets

So to test out what we've built, we'll first take a look at the database users that currently exist on my MySQL server. I've got a few users that I'm using for my Flask app, and some other admin-type users in here.

Now, we can tell Vault to give us a new login to our mysql database. This can be done from a vault client, or through the API of course. From the vault client, we would run:

vault read database/creds/mysqlrole

We call the vault read command against database/creds/[role configured]. You can see that when we do that, we're returned some data that includes a username and a password along with some other useful information.

We could run the same command through the API, which I've demonstrated with a curl command.
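That call takes the same shape as the one used later in this post: a GET against the database/creds path with the Vault token passed in a header. The token below is a placeholder, and the hostname matches the lab setup:

curl --header "X-Vault-Token: <your-vault-token>" \
  http://hashi1.hollow.local:8200/v1/database/creds/mysqlrole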

When we look at the list of MySQL users, we can see that a new user has been created that our application can use to log in.

Applying this Capability

OK, we've demonstrated that we can use Vault to create these temporary passwords for us, but how do we integrate it with something more useful? Let's go back to the Flask app we discussed at the beginning of this post. We'll leave that app alone, but this time we'll deploy a Docker container and attach it to the same MySQL database. The difference is that we'll build the Docker container to use one of our Vault-generated passwords.

When I deploy the Docker container, I'll generate a new MySQL login and pass it as an environment variable to the container to specify the Flask database connection.

# Request a fresh set of MySQL credentials from Vault (this token is from my lab and long since revoked)
response=$(curl --header "X-Vault-Token:6244742c-0f04-YyYy-XxXx-cf0fe3d813c7" http://hashi1.hollow.local:8200/v1/database/creds/mysqlrole)

# Pull the generated username and password out of the JSON response
export DBPASSWORD=$(echo $response | jq -r .data.password)
export DBUSERNAME=$(echo $response | jq -r .data.username)

# Start the container, passing the Vault-generated credentials in the connection string
docker run --name hollowapp -d -p 8000:5000 --rm \
  -e DATABASE_URL=mysql+pymysql://${DBUSERNAME}:${DBPASSWORD}@mysql.hollow.local/hollowapp \
  hollowapp:latest

When the Docker container comes up on my local machine, I'm able to log in, and we see the same task entry from the beginning of the post. This means that the container and our web server are both communicating with the same MySQL database.

Well, that's neat! We should remember, though, that this container will only work for one hour, because that's how long our credentials are valid. That might seem bad for a web server, but what if we're dynamically spinning up containers to handle a task and then terminating them? Then these temporary credentials would be pretty great, right?

Also, in this example I've passed the credentials to the container through an environment variable. It would be even more secure if the container itself obtained the credentials when it started; then the container would be the only place the credentials were ever held.
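A minimal sketch of that idea, assuming the container is handed a Vault token (for example through an injected VAULT_TOKEN variable), has curl and jq in the image, and runs the Flask app with gunicorn; the hostnames and role mirror the lab above, the module name is hypothetical, and none of this is from the original post:

#!/bin/sh
# entrypoint.sh - fetch short-lived DB credentials from Vault at startup, then launch the app
set -e

# VAULT_TOKEN is assumed to be injected into the container at run time
creds=$(curl -s --header "X-Vault-Token: ${VAULT_TOKEN}" \
  http://hashi1.hollow.local:8200/v1/database/creds/mysqlrole)

DBUSERNAME=$(echo "$creds" | jq -r .data.username)
DBPASSWORD=$(echo "$creds" | jq -r .data.password)

# The connection string only ever exists inside this container's environment
export DATABASE_URL="mysql+pymysql://${DBUSERNAME}:${DBPASSWORD}@mysql.hollow.local/hollowapp"

# hollowapp:app is a hypothetical module:object name for the Flask application
exec gunicorn -b 0.0.0.0:5000 hollowapp:app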

Summary

This is just the beginning for Vault. There are a lot of ways you could make this solution work for you and I didn’t even discuss how Vault performs encryption or logs the sessions for an audit trail. How would you use Vault in your environment?

The post Use Hashicorp’s Vault to Dynamically Create Mysql Credentials appeared first on The IT Hollow.

]]>
https://theithollow.com/2018/06/04/use-hashicorps-vault-to-dynamically-create-mysql-credentials/feed/ 0 8881
Using Hashicorp Consul to Store Terraform State https://theithollow.com/2018/05/21/using-hashicorp-consul-to-store-terraform-state/ https://theithollow.com/2018/05/21/using-hashicorp-consul-to-store-terraform-state/#comments Mon, 21 May 2018 14:05:16 +0000 https://theithollow.com/?p=8843 Hashicorp’s Terraform product is very popular in describing your infrastructure as code. One thing that you need consider when using Terraform is where you’ll store your state files and how they’ll be locked so that two team members or build servers aren’t stepping on each other. State can be stored in Terraform Enterprise (TFE) or […]

The post Using Hashicorp Consul to Store Terraform State appeared first on The IT Hollow.

]]>
Hashicorp's Terraform product is very popular for describing your infrastructure as code. One thing that you need to consider when using Terraform is where you'll store your state files and how they'll be locked so that two team members or build servers aren't stepping on each other. State can be stored in Terraform Enterprise (TFE) or with some cloud services such as S3. But if you want to store your state within your own data center, perhaps you should check out Hashicorp's Consul product.

Consul might not have been specifically designed to house Terraform state files, but its built-in capabilities lend themselves well to doing just this. Consul can be used as a service discovery tool or a key/value store. It can also perform health checks on services, so the combination of these capabilities can be a great benefit to teams trying to build microservices architectures.

Setup a Consul Cluster

We don't want to risk storing all of our Terraform state files on a single server, so we'll deploy three Consul servers in a cluster. To do this, I've deployed three CentOS servers and opened the appropriate firewall ports. (Yeah, I turned off the host firewall. It's a lab.) Once the basic OS deployments are done, we need to download the latest version of Consul from Hashicorp's website. I've copied the binary to the /opt directory on each of the three hosts and set the permissions so I can execute it. Next we need to make sure the binary is in our PATH; on my CentOS machines I added /opt/ to the PATH. Remember to do this on each of the three servers.

export PATH=$PATH:/opt/

I’ve also added a second environment variable that enables the new Consul UI. This is optional, but I wanted to use the latest UI and Consul looks for the environment variable to enable this at this point. I expect this to change in the future.

export CONSUL_UI_BETA=true

Lastly, I’ve created a new directory that will house my config data for Consul.

sudo mkdir /etc/consul.d

Now we get to the business of setting up the cluster. On the first node, we’ll run the following command:

consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul -node=agent-one -bind=10.10.50.121 -enable-script-checks=true -ui -client 0.0.0.0 -config-dir=/etc/consul.d

Let's explain some of the switches used in that command.

 

-server – This tells Consul that this server will be acting as a server and not a client.

-bootstrap-expect=3 – This switch explains how many servers are expected to be part of the cluster.

-data-dir=/tmp/consul – This switch explains where Consul will store its data.

-node=agent-one – This switch identifies each of the nodes. This should be unique for each of the servers in the cluster.

-bind=10.10.50.121 – The address that should be bound to for internal cluster communications. This will be unique on each node in your cluster.

-enable-script-checks=true – We could omit this for this post, but status checks could be added later where this would be necessary.

-ui – This enables the UI

-client 0.0.0.0 – The address to which Consul will bind client interfaces, including the HTTP and DNS servers.

-config-dir – Determines where the config files will be located.

 

When you run the commands on the first node, you’ll start seeing log messages.

Repeat the command on the other servers that will be part of the cluster, being mindful to change the options that must be unique to each node, such as -bind and -node. At this point the cluster should be up and running. To check, open another terminal session and run consul members. You should see three members listed of type server.
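One detail to watch for: depending on how the agents are started, the nodes may still need to be told about each other before consul members shows all three. The original post doesn't cover that step, but Consul's join command (or a -retry-join flag on the agent command) handles it; for example, from the second and third nodes:

# Point at the first node's bind address
consul join 10.10.50.121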

If you used the -ui switch when you started the nodes, you'll also be able to navigate to http://node.domain.local:8500/ui to see the Consul UI. Notice that you'll land on the Services page, where you can see node health. Again, we see three healthy nodes.

While we’re in the UI, take a second to click on the Key/Value tab. This is where Terraform will be storing its state files. Notice that at this point we don’t see anything listed, which makes sense because we haven’t created any pairs yet.

 

 

Terraform Build

This post won't go into building your Terraform configurations, but there is an important first step to using Consul as a state store: we create a backend.tf file for Terraform that defines Consul as our backend. Create a file like the one below:

terraform {
  backend "consul" {
    address = "consulnode.domain.local:8500"
    path    = "tf/state"
  }
}

Be sure to update the address to point at your Consul cluster. The path is where Terraform will store the state for your build. For this example I've used a single node for the address, and my path is a generic tf/state path.

Once the file is created, run terraform init to initialize the backend. When that's done, go ahead and run your Terraform build as usual.

Once your build completes, you’ll notice in the Consul UI that there is a Key/Value listed now.

 

 

If we drill down into that value, you’ll see our JSON data for our state file.

If we need to programmatically access this state file, we can use the Consul API to return the data. I've done this with a simple curl command.
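The call is essentially a read against Consul's KV HTTP API for the path configured in backend.tf. Something along these lines should return the raw state JSON (the hostname matches the earlier example; ?raw skips the base64-encoded KV wrapper):

curl http://consulnode.domain.local:8500/v1/kv/tf/state?raw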

Summary

Storing your Terraform state files in a local directory is fine as a starting point, but once you start building lots of infrastructure, or many teams are working on the same infrastructure, you need a better way to manage state. Consul provides a solution to take care of this for you, along with plenty of other useful capabilities. Check it out if you're looking to extend your Terraform workflow beyond a simple deployment.

The post Using Hashicorp Consul to Store Terraform State appeared first on The IT Hollow.

]]>
https://theithollow.com/2018/05/21/using-hashicorp-consul-to-store-terraform-state/feed/ 4 8843