The IT Hollow (https://theithollow.com)

Upgrade to vRA 7.5
https://theithollow.com/2018/10/08/upgrade-to-vra-7-5/ (Mon, 08 Oct 2018)

Upgrading your vRealize Automation instance has sometimes been a painful exercise, but that was mostly in the early days after VMware purchased the product from DynamicOps. It's taken a while, but in my opinion the upgrade process has improved with each and every version, and 7.5 is no exception. If you're on a previous version, here is a quick rundown of the upgrade process from 7.4 to 7.5.

Note: As always, please read the official upgrade documentation. It includes prerequisites and steps that should always be followed. https://docs.vmware.com/en/vRealize-Automation/7.5/vrealize-automation-7172732to75upgrading.pdf

Upgrade Prerequisites

There are a few things that should be checked before any of these upgrades. I've seen prerequisites listed for software that mention making sure you have free space available and the right hardware components, blah blah, but you really should check these. I can't count how many times I've gone to do a vRA upgrade only to find out that the disk sizes have changed or I'd used up all the free space, so do yourself a favor and make sure you've covered these.

vRealize Automation requirements:

  • 18 GB RAM
  • 4 CPUs
  • Disk1=50 GB
  • Disk3=25 GB
  • Disk4=50 GB

IaaS Servers and SQL Database must also have at least 5 GB of free space available.

Beyond these requirements, I highly recommend checking the Java version on your IaaS servers. vRA has required a Java upgrade between versions on the past few occasions, and if you don't upgrade it, it can bite you. Be sure that for 7.5 your Java version is 8 update 161 or higher.
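If you're not sure what's installed, a quick way to check is to open a command prompt on each IaaS server and run the command below (this assumes java is on the PATH):

java -version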

It should also go without saying that your vRA 7.4 environment needs to be in good working condition before you upgrade to 7.5. If it isn't, the upgrade is unlikely to magically make everything work again. A good check is to confirm that all of the services are registered in your vRA VAMI console.

Another good thing to check is that your IaaS Management Agent is communicating properly. I found out that changing my vRA root password (because I couldn't remember it) caused my management agent to stop communicating. Confirm this works so that the upgrade process can not only update your vRA appliance but also seamlessly update your IaaS servers.

Lastly, and I can’t stress this enough, make sure that you have proper backups and snapshots. In my lab I prefer to keep my SQL database on my IaaS Server so that snapshotting this server and the vRA appliance is all that I need to do. I’ve frequently had errors during upgrades (almost always because I didn’t thoroughly review the documentation) and the snapshots instantly get me back to my starting point.

Perform the Upgrade

To run the upgrade, log in to the vRA appliance's VAMI console and go to the Update tab. From there, click the button to check for updates (assuming the default repository is set under Settings) and wait until you get a notification that 7.5 is available. Then click the "Install Updates" button.

You’ll be asked a second time if you’re ready to do this upgrade. Consider this the “Do you have valid backups?” message box. Click OK when you’re ready.

You’ll get a dialog box that tells you to please wait. And it may stay this way for quite some time.

If you’d like to get more details on what’s actually happening, I highly recommend SSHing into the vRA appliance and running a tail -f on /opt/vmware/var/log/vami/updatecli.log. Eventually, you’ll see that the upgrade is finished.
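For reference, here's all I run after connecting. The log path comes from the paragraph above; the appliance hostname is a placeholder for your own:

ssh root@vra-appliance.example.com
tail -f /opt/vmware/var/log/vami/updatecli.log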

The VAMI console will also show that the upgrade is complete but you’ll need to reboot the vRA appliance before it’s finished.

After the reboot, you should be able to log back in to your tenant, where you'll see the new HTML5 interface and notice that the "Services" menu is gone.

Good luck on your upgrade, and thanks for reading!

 

AWS Session Manager
https://theithollow.com/2018/10/01/aws-session-manager/ (Mon, 01 Oct 2018)

Amazon has released yet another Simple Systems Manager service to improve the management of EC2 instances. This time, it's AWS Session Manager. Session Manager is a nifty little service that lets you grant users permission to access an instance's shell. Now, you might be thinking, "Why would I need this? I can already add SSH keys to my instances at boot time to access them." You'd be right, of course, but think about how you might use Session Manager instead. Rather than adding SSH keys and managing access to and distribution of the private keys, we can manage access through AWS Identity and Access Management permissions.

Setup Session Manager

As with the other Systems Manager services, you'll need to give the instances the correct permissions by assigning a Systems Manager instance profile role.

If you've been following along with the rest of this series, you may need to add the following policy to your EC2SystemsManagerRole; Session Manager came out much later than some of the other services we've already covered. Add these additional permissions to the EC2SystemsManagerRole before we attach the instance profile to the instance. I'll also mention that my user has full administrator permissions; if yours doesn't, you'll need to add more permissions to your user to use the Session Manager service on your EC2 instances.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetEncryptionConfiguration"
            ],
            "Resource": "*"
        }
    ]
}
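If you'd rather attach this policy from the command line, a rough equivalent is below. The role name matches the one used in this series, while the inline policy name and the local file holding the JSON above are placeholders of my own:

# Attach the JSON above (saved locally) as an inline policy on the instance role
aws iam put-role-policy \
  --role-name EC2SystemsManagerRole \
  --policy-name SessionManagerPermissions \
  --policy-document file://session-manager-policy.json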

Once you've set up the appropriate instance profile permissions, you'll need to spin up an instance. I've spun up one of my Linux instances that has the SSM Agent installed and my EC2SystemsManagerRole instance profile attached.

You can also see that the security group I've attached to my instance only has port 80 open. SSH is not required with Session Manager, which is another benefit to your security posture.

Test A Shell Session

Once your instance has been spun up, you can look in the Systems Manager Service. Many of the EC2 Simple Systems Manager services are available from the EC2 console, but this one is not. To access it you’ll need to go to the Systems Manager service directly.

 

On the Session Manager screen, click the "Start a Session" button. You'll notice from my screenshot that the version of my SSM Agent is not current. You need version 2.3.68.0 or later for it to work with Session Manager. Luckily, my instance profile grants enough permissions that I can use the Run Command service to update the agent, and Session Manager lets you do this directly from its own interface. Click the "Update SSM Agent" button if you see this screen.

You’ll be asked if you’re sure that you want to complete this operation. Click the “update SSM Agent” button again.
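If you'd rather push the agent update from the CLI instead of the console, Run Command can do the same thing with the AWS-managed update document. This is a sketch; the instance ID is a placeholder and it assumes the instance profile from earlier is attached:

# Run the AWS-managed document that updates the SSM Agent
aws ssm send-command \
  --document-name "AWS-UpdateSSMAgent" \
  --instance-ids i-0123456789abcdef0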

Now we can go back to Session Manager and click "Start session". You'll see a shell open in a new web browser window. From there, I ran a pair of commands just to show it working. First, notice that the user you're logged in as on your EC2 instance is "ssm-user" and not ec2-user or root.
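You can also start a session from your own terminal rather than the browser, provided the Session Manager plugin for the AWS CLI is installed. A sketch with a placeholder instance ID:

# Requires the Session Manager plugin for the AWS CLI
aws ssm start-session --target i-0123456789abcdef0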

When you're done with your configurations, click the "Terminate" button in the top right-hand corner. Note: this terminates the session, not the instance.

Just to show that the solution also works with Windows: if you're using a Windows instance, you'll get a PowerShell session instead.

Summary

Simple Systems Manager has a bunch of great tools to manage your EC2 instance fleet. Adding Session Manager can make your instances dramatically easier to manage by removing the need for SSH key management, and it can improve your security posture by removing the need to provide port 22 access. Try it out in your environment.

 

Close an AWS Account Belonging to an Organization
https://theithollow.com/2018/09/17/close-an-aws-account-belonging-to-an-organization/ (Mon, 17 Sep 2018)

Opening an AWS account is very easy to do. AWS makes it possible to create an account with just an email address and a credit card. Even better, if you're setting up a multi-account structure, you can use the Organizations API and really only need an email address as input. Closing an account, however, is slightly more difficult. While closing accounts doesn't happen quite as often as opening new ones, it does happen, especially if you're trying to fail fast and have made some organizational mistakes. When you want to clean those accounts up, you'll need to jump through a couple of small hoops. This post outlines how to remove an account from an AWS Organization and then close it.

Remove a Member Account from Organizations

Log in to the member account that you wish to remove as the root user.

Note: You may need to reset the root password if you haven't done this already; creating a new account through Organizations does not require a password to be set.

Once logged in to the console, select the account name drop down and then select “My Account”.

Go to Payment methods and add a Credit Card. Finish filling out the Credit Card Details and contact information associated with the card.

Next, go back to the account drop down and select “My Organization”.

You can now select “Leave organization” where you’ll likely receive an error message about some steps that aren’t completed. Click the “Leave organization” button.

You’ll get a warning message asking if you’re sure you want to leave. Select the “Leave organization” link at the bottom of the warning message.

 

You’ll likely get a message preventing you from leaving the organization. Luckily the link at the bottom of this warning will show you the steps needed to finish setting up the member account and prep it for removal. Click the “Complete the account sign-up steps” link.

The first step is to verify your phone number. Enter your phone number and the captcha code and then click “Call me now”.

The screen will change and display a four-digit number. You'll also receive a call from AWS at the number you entered, which will ask you to enter that number on your touch-tone phone.

After you enter the code, the screen will change again stating that your identity has been verified. Click Continue.

After you verify the account, you’ll need to select a support plan. You can select the basic plan, which is free, if you plan to close the account.

When you're done, you'll see a message stating that the sign-up steps have been completed.

At this point you can select “Leave organization”.

Again you’ll get a warning. Select “Leave organization”.

You should get a message that the account was removed from your organization.
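For completeness, once the sign-up steps above are finished, the removal can also be done from the organization's management (master) account via the CLI instead of the member account console. A sketch with a placeholder account ID:

# Run from the organization's management account
aws organizations remove-account-from-organization --account-id 111122223333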

Close the Member Account

Go back to the Account drop down and select “My Account”.

Scroll to the bottom of that page and click the check box under “Close Account” stating that you understand the consequence of closing the account. Then click the “Close Account” button.

 

Verify once again that you’re ready to close the account by clicking the “Close Account” button.

You should get a message that the account has been closed. You can then sign out of the account.

 

Summary

Removing an account seems easy enough to accomplish, but I've seen strange issues from time to time with this process. Usually it's something simple, like a support plan or credit card that hasn't been added. Other times you'll see odd messages about a waiting period, such as this one. If you see something similar, or you're having issues closing your account, contact AWS Support. You should get a response within 24 hours, even on the free plan.

Create AWS Accounts with CloudFormation
https://theithollow.com/2018/09/10/create-aws-accounts-with-cloudformation/ (Mon, 10 Sep 2018)

In a previous post, we covered how to use an AWS custom resource in a CloudFormation template to deploy a very basic Lambda function. Let's expand on that knowledge and deploy something more useful than a basic Lambda function. How about we use it to create an AWS account? To my knowledge, the only ways to create a new AWS account are through the CLI/API or manually through the console, so how about we use a custom resource to deploy a new account into our AWS Organization? Once this capability is available in a CloudFormation template, we could even publish it in the AWS Service Catalog and give our users an account vending machine.

Create the Lambda Function

Just as we did in the previous post, we’ll create a Lambda function, zip it up and place it into our S3 bucket. My function is Python 2.7 and can be found below.
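Before we get to the function itself, note that the packaging step is just a couple of commands. This is a sketch: the zip file and bucket names below match the defaults in the CloudFormation template later in this post, and the source file name assumes the module is saved as create-account.py:

# Package the function and copy it to the S3 bucket referenced by the template
zip create-account.zip create-account.py
aws s3 cp create-account.zip s3://hollow-acct/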

#Import modules
import json, boto3, logging
from botocore.vendored import requests

#Define logging properties
log = logging.getLogger()
log.setLevel(logging.INFO)



#Main Lambda function to be excecuted
def lambda_handler(event, context):
    #Initialize the status of the function
    status="SUCCESS"
    responseData = {}
    client = boto3.client('organizations')


    #Read and log the input values
    acctName = event['ResourceProperties']['AccountName']
    ouName = event['ResourceProperties']['OUName']
    emailAddress = event['ResourceProperties']['Email']
    log.info("Account name is: " + acctName)
    log.info("Organizational Unit name is: " + ouName)
    log.info("Email Address is: " + emailAddress)

    #create a new Organizational Unit
    orgResponse = client.create_organizational_unit(
        ParentId="ou-3hvv-jqwq89r0", #My Parent OU. Change for your environment
        Name=ouName
    )

    log.info(orgResponse['OrganizationalUnit']['Id'])
    OUID=str(orgResponse['OrganizationalUnit']['Id'])

    #Create a new Account in the OU Just Created
    acctResponse = client.create_account(
        Email=emailAddress,
        AccountName=acctName 
    )

    #Check Account Status
    acctStatusID = acctResponse['CreateAccountStatus']['Id']
    log.info(acctStatusID)

    while True:
        createStatus = client.describe_create_account_status(
            CreateAccountRequestId=acctStatusID
        )
        log.info(createStatus['CreateAccountStatus']['State'])        
        if str(createStatus['CreateAccountStatus']['State']) != 'IN_PROGRESS':
            newAccountId = str(createStatus['CreateAccountStatus']['AccountId'])
            break

    #Move Account to new OU
    moveResponse = client.move_account(
        AccountId=newAccountId,
        SourceParentId='r-3hvv', #My root OU. Change for your environment
        DestinationParentId=OUID
    )

    #Set Return Data
    responseData = {"Message" : newAccountId} #If you need to return data use this json object

    #return the response back to the S3 URL to notify CloudFormation about the code being run
    response=respond(event,context,status,responseData,None)

    #Function returns the response from the S3 URL
    return {
        "Response" :response
    }

def respond(event, context, responseStatus, responseData, physicalResourceId):
    #Build response payload required by CloudFormation
    responseBody = {}
    responseBody['Status'] = responseStatus
    responseBody['Reason'] = 'Details in: ' + context.log_stream_name
    responseBody['PhysicalResourceId'] = context.log_stream_name
    responseBody['StackId'] = event['StackId']
    responseBody['RequestId'] = event['RequestId']
    responseBody['LogicalResourceId'] = event['LogicalResourceId']
    responseBody['Data'] = responseData

    #Convert json object to string and log it
    json_responseBody = json.dumps(responseBody)
    log.info("Response body: " + str(json_responseBody))

    #Set response URL
    responseUrl = event['ResponseURL']

    #Set headers for preparation for a PUT
    headers = {
    'content-type' : '',
    'content-length' : str(len(json_responseBody))
    }

    #Return the response to the signed S3 URL
    try:
        response = requests.put(responseUrl,
        data=json_responseBody,
        headers=headers)
        log.info("Status code: " + str(response.reason))
        status="SUCCESS"
        return status
    #Defind what happens if the PUT operation fails
    except Exception as e:
        log.error("send(..) failed executing requests.put(..): " + str(e))
        status="FAILED"
        return status

As before, let's break down a few of the relevant sections of the code so you can see what's happening. To begin, let's look at the main lambda_handler. First we initialize some of our variables and set our boto3 client to organizations so that we can create our accounts. After this, we set some variables in our Lambda function that will be passed in from our CloudFormation template (shown later in this post). After we set our variables, we log them so that we can see what CloudFormation actually passed to our function.

#Main Lambda function to be excecuted
def lambda_handler(event, context):
    #Initialize the status of the function
    status="SUCCESS"
    responseData = {}
    client = boto3.client('organizations')


    #Read and log the input values
    acctName = event['ResourceProperties']['AccountName']
    ouName = event['ResourceProperties']['OUName']
    emailAddress = event['ResourceProperties']['Email']
    log.info("Account name is: " + acctName)
    log.info("Organizational Unit name is: " + ouName)
    log.info("Email Address is: " + emailAddress)

Next, we use boto3 to create an Organizational Unit on the fly, passing in the name of the OU from our CloudFormation template. To do this we use the create_organizational_unit method, which needs the parent OU ID and the name of our new OU. When we're done, we log the ID of the new OU and store it in a variable for use later in the function.

#create a new Organizational Unit
    orgResponse = client.create_organizational_unit(
        ParentId="ou-3hvv-jqwq89r0", #My Parent OU. Change for your environment
        Name=ouName
    )

    log.info(orgResponse['OrganizationalUnit']['Id'])
    OUID=str(orgResponse['OrganizationalUnit']['Id'])

Now that we've created an OU, let's create the account. Again, we use boto3 to call the create_account method, passing in an email address to be used for the new account and an account name. When this is done, we log the response, which contains the account creation status. After this, we initiate a loop to check on the status of the account while it's being created. Account creation isn't immediate, so it's good to check on it until it's either successful or failed. The loop checks the status, logs it, and waits until it's no longer IN_PROGRESS.

#Create a new Account in the OU Just Created
    acctResponse = client.create_account(
        Email=emailAddress,
        AccountName=acctName 
    )

    #Check Account Status
    acctStatusID = acctResponse['CreateAccountStatus']['Id']
    log.info(acctStatusID)

    while True:
        createStatus = client.describe_create_account_status(
            CreateAccountRequestId=acctStatusID
        )
        log.info(createStatus['CreateAccountStatus']['State'])        
        if str(createStatus['CreateAccountStatus']['State']) != 'IN_PROGRESS':
            newAccountId = str(createStatus['CreateAccountStatus']['AccountId'])
            break

With any luck, our account has been created and we've got one more thing to do: move the new account into the new OU we created. Again we use boto3, this time with the move_account method. I'm passing in the new AccountId we stored from the create_account call and the OUID we stored from create_organizational_unit. I'm also specifying my root ID, which will be different in your case; fill in your own value where it's referenced in the Lambda function.

#Move Account to new OU
    moveResponse = client.move_account(
        AccountId=newAccountId,
        SourceParentId='r-3hvv', #My root OU. Change for your environment
        DestinationParentId=OUID
    )

The account work is done; now we're just setting some return data to be sent back to CloudFormation. The info I want sent back to CFn as an output is the account ID of the new account we created. After that, we send the data back to our signed S3 URL, as explained in the previous post about custom resources.

#Set Return Data
    responseData = {"Message" : newAccountId} #If you need to return data use this json object

    #return the response back to the S3 URL to notify CloudFormation about the code being run
    response=respond(event,context,status,responseData,None)

    #Function returns the response from the S3 URL
    return {
        "Response" :response
    }

 

Create the CloudFormation Template

If you’re following from the previous post, the only changes to the CloudFormation template are the variables being passed back and forth. For that reason we won’t go into much detail here, but the full template I used is found below.

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Account Creation Stack
Parameters:
  ModuleName: #Name of the Lambda Module
    Description: The name of the Python file
    Type: String
    Default: "create-account"
  S3Bucket: #S3 bucket in which to retrieve the python script with the Lambda handler
    Description: The name of the bucket that contains your packaged source
    Type: String
    Default: "hollow-acct"
  S3Key: #Name of the zip file
    Description: The name of the ZIP package
    Type: String
    Default: "create-account.zip"
  AccountName: #Account Name
    Description: Account Name To Be Created
    Type: String
    Default: "HollowTest1"
  OUName: #Organizational Unit Name
    Description: Organizational Unit Name To Be Created
    Type: String
    Default: "HollowTest1"
  Email: #Email Address
    Description: Email Address used for the Account
    Type: String    
Resources:
  CreateAccount: #Custom Resource
    Type: Custom::CreateAccount
    Properties:
      ServiceToken:
        Fn::GetAtt:
        - TestFunction #Reference to Function to be run
        - Arn #ARN of the function to be run
      AccountName:
        Ref: AccountName
      OUName:
        Ref: OUName
      Email:
        Ref: Email
  TestFunction: #Lambda Function
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket:
          Ref: S3Bucket
        S3Key:
          Ref: S3Key
      Handler:
        Fn::Join:
        - ''
        - - Ref: ModuleName
          - ".lambda_handler"
      Role:
        Fn::GetAtt:
        - LambdaExecutionRole
        - Arn
      Runtime: python2.7
      Timeout: '30'
  LambdaExecutionRole: #IAM Role for Custom Resource
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: root
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
            Resource: arn:aws:logs:*:*:*
      - PolicyName: Acct
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - organizations:*
            Resource: "*"
Outputs: #Return output from the Lambda Function Run
  Message:
    Description: Message returned from Lambda
    Value:
      Fn::GetAtt:
      - CreateAccount #Output from the Custom Resource
      - Message #Return property

 

Results

Now that the coding is done, we can deploy the CloudFormation template. I’ve chosen to do this through the command line but you could do it through the console as well. My command line execution is as follows:

aws cloudformation create-stack --stack-name theITHollowCreateAccount1 --template-body file://create-account-CFn.yml --capabilities CAPABILITY_IAM --parameters ParameterKey=AccountName,ParameterValue=theITHollowAcct1 ParameterKey=OUName,ParameterValue=theITHollowOU1 ParameterKey=Email,ParameterValue=aws-temp4@theithollow.com ParameterKey=ModuleName,ParameterValue=create-account ParameterKey=S3Bucket,ParameterValue=hollow-acct ParameterKey=S3Key,ParameterValue=create-account.zip

After a minute, you’ll see that the CFn stack has been deployed successfully, and that the output for the stack is the account number for the new AWS account.
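If you're working from the CLI, you can pull that output without opening the console. This assumes the stack name used in the create-stack command above:

# Show the stack outputs, including the new account ID returned by the custom resource
aws cloudformation describe-stacks \
  --stack-name theITHollowCreateAccount1 \
  --query "Stacks[0].Outputs"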

If we open up the AWS Organizations console, we’ll see that the new account was created and the account number matches our output from the screenshot above.

As we look through the organizational units, we’ll see that a new OU was created and that our new account lives within that OU.

Summary

I hope that this post has shown you what kind of cool stuff you can do with a CloudFormation custom resource. Now think how neat this might be to put in AWS Service Catalog to deploy new accounts on demand. I will admit that this method does have some drawbacks, such as not being able to delete the stack and have the account deleted, but it is what it is.

AWS Custom Resources
https://theithollow.com/2018/09/04/aws-custom-resources/ (Tue, 04 Sep 2018)

We love to use AWS CloudFormation to deploy our environments. It's like configuration management for our AWS infrastructure, in the sense that we write a desired state as code and apply it to our environment. But sometimes there are tasks that we want to complete that aren't part of CloudFormation. For instance, what if we wanted to use CloudFormation to deploy a new account, which normally needs to be done through the CLI, or we needed to return some information to our CloudFormation stack during deployment? Luckily for us, we can use a custom resource to achieve our goals. This post shows how you can use CloudFormation with a custom resource to execute a very basic Lambda function as part of a deployment.

Solution Overview

To demonstrate our custom resource, we'll need a Lambda function that we can call. CloudFormation will deploy this function from a zip file and, once it's deployed, will execute it and return the outputs to our CloudFormation template. The diagram below demonstrates the process of retrieving this zip file from an existing S3 bucket, deploying it, executing it, and having the Lambda function return data to CloudFormation.

The CloudFormation Template

First, let's take a look at the CloudFormation template that we'll be using to deploy our resources.

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Example of a Lambda Custom Resource that returns a message
Parameters:
  ModuleName: #Name of the Lambda Module
    Description: The name of the Python file
    Type: String
    Default: helloworld
  S3Bucket: #S3 bucket in which to retrieve the python script with the Lambda handler
    Description: The name of the bucket that contains your packaged source
    Type: String
    Default: hollow-lambda1
  S3Key: #Name of the zip file
    Description: The name of the ZIP package
    Type: String
    Default: helloworld.zip
  Message: #Message input for you to enter
    Description: The message to display
    Type: String
    Default: Test
Resources:
  HelloWorld: #Custom Resource
    Type: Custom::HelloWorld
    Properties:
      ServiceToken:
        Fn::GetAtt:
        - TestFunction #Reference to Function to be run
        - Arn #ARN of the function to be run
      Input1:
        Ref: Message
  TestFunction: #Lambda Function
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket:
          Ref: S3Bucket
        S3Key:
          Ref: S3Key
      Handler:
        Fn::Join:
        - ''
        - - Ref: ModuleName
          - ".lambda_handler"
      Role:
        Fn::GetAtt:
        - LambdaExecutionRole
        - Arn
      Runtime: python2.7
      Timeout: '30'
  LambdaExecutionRole: #IAM Role for Custom Resource
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: root
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
            Resource: arn:aws:logs:*:*:*
Outputs: #Return output from the Lambda Function Run
  Message:
    Description: Message returned from Lambda
    Value:
      Fn::GetAtt:
      - HelloWorld #Output from the HelloWorld Custom Resource
      - Message #Return property

In the parameters section you can see we’re looking for the S3 bucket with our module in it, the name of the module and a generic input for the CloudFormation template to pass to Lambda as a string.

Parameters:
  ModuleName: #Name of the Lambda Module
    Description: The name of the Python file
    Type: String
    Default: helloworld
  S3Bucket: #S3 bucket in which to retrieve the python script with the Lambda handler
    Description: The name of the bucket that contains your packaged source
    Type: String
    Default: hollow-lambda1
  S3Key: #Name of the zip file
    Description: The name of the ZIP package
    Type: String
    Default: helloworld.zip
  Message: #Message input for you to enter
    Description: The message to display
    Type: String
    Default: Test

In the resources section we have a HelloWorld object, which is our custom resource of type Custom::HelloWorld (the text after Custom:: is a description of your choosing). We need to pass along a ServiceToken, which tells the stack which Lambda function to execute for this custom resource. We're also adding an input named "Input1" that will be passed to Lambda, referencing the Message parameter seen earlier.

HelloWorld: #Custom Resource
    Type: Custom::HelloWorld
    Properties:
      ServiceToken:
        Fn::GetAtt:
        - TestFunction #Reference to Function to be run
        - Arn #ARN of the function to be run
      Input1:
        Ref: Message

Below this is the Lambda function deployment. This piece of the resources section shows where the Lambda module comes from, the runtime, the timeout, and which role's permissions will be used for it.

TestFunction: #Lambda Function
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket:
          Ref: S3Bucket
        S3Key:
          Ref: S3Key
      Handler:
        Fn::Join:
        - ''
        - - Ref: ModuleName
          - ".lambda_handler"
      Role:
        Fn::GetAtt:
        - LambdaExecutionRole
        - Arn
      Runtime: python2.7
      Timeout: '30'

Next, there is a section that sets up permissions for the Lambda function to write to CloudWatch. Depending on your environment, you may need to provide access to other resources. For example, if your function reads EC2 data, you'd need to ensure it has the appropriate permissions to read those properties.

LambdaExecutionRole: #IAM Role for Custom Resource
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: root
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
            Resource: arn:aws:logs:*:*:*

We won’t look at the next piece until our Lambda function finishes running, but we’re going to get some return information from the function and print it as a CloudFormation output.

Outputs: #Return output from the Lambda Function Run
  Message:
    Description: Message returned from Lambda
    Value:
      Fn::GetAtt:
      - HelloWorld #Output from the HelloWorld Custom Resource
      - Message #Return property

 

Lambda Function

Let's look at the Lambda function we'll be using for this example. This is a Python 2.7 function that takes a basic string input from CloudFormation, concatenates it with another string, and returns it. Nothing too crazy for the example, but there are some important pieces that must be in your Lambda function so that CloudFormation knows the function is done running and whether it executed correctly.

#Import modules
import json, boto3, logging
from botocore.vendored import requests

#Define logging properties
log = logging.getLogger()
log.setLevel(logging.INFO)



#Main Lambda function to be excecuted
def lambda_handler(event, context):
    #Initialize the status of the function
    status="SUCCESS"
    responseData = {}


    #Read and log the input value named "Input1"
    inputValue = event['ResourceProperties']['Input1']
    log.info("Input value is:" + inputValue)

    #transform the input into a new value as an exmaple operation
    data = inputValue + "Thanks to AWS Lambda"
    responseData = {"Message" : data} #If you need to return data use this json object

    #return the response back to the S3 URL to notify CloudFormation about the code being run
    response=respond(event,context,status,responseData,None)

    #Function returns the response from the S3 URL
    return {
        "Response" :response
    }

def respond(event, context, responseStatus, responseData, physicalResourceId):
    #Build response payload required by CloudFormation
    responseBody = {}
    responseBody['Status'] = responseStatus
    responseBody['Reason'] = 'Details in: ' + context.log_stream_name
    responseBody['PhysicalResourceId'] = context.log_stream_name
    responseBody['StackId'] = event['StackId']
    responseBody['RequestId'] = event['RequestId']
    responseBody['LogicalResourceId'] = event['LogicalResourceId']
    responseBody['Data'] = responseData

    #Convert json object to string and log it
    json_responseBody = json.dumps(responseBody)
    log.info("Response body: " + str(json_responseBody))

    #Set response URL
    responseUrl = event['ResponseURL']

    #Set headers for preparation for a PUT
    headers = {
    'content-type' : '',
    'content-length' : str(len(json_responseBody))
    }

    #Return the response to the signed S3 URL
    try:
        response = requests.put(responseUrl,
        data=json_responseBody,
        headers=headers)
        log.info("Status code: " + str(response.reason))
        status="SUCCESS"
        return status
    #Defind what happens if the PUT operation fails
    except Exception as e:
        log.error("send(..) failed executing requests.put(..): " + str(e))
        status="FAILED"
        return status

 

Let's look at the main function that will be executed. First we initialize some of our variables. Next, we retrieve our input parameter (named Input1 and passed from CloudFormation) and log it. After this there is a simple operation to concatenate the input with another string, just to do something simple in our function. The next step is to provide some return data that will end up being our CloudFormation output. This is a JSON object, so if you don't need to return any custom info to CloudFormation, use an empty JSON object {}.

Below this, we call a respond function (also located in our Lambda script), which sends information back to CloudFormation about the state of the Lambda run. After this we return our data from the function.

def lambda_handler(event, context):
    #Initialize the status of the function
    status="SUCCESS"
    responseData = {}


    #Read and log the input value named "Input1"
    inputValue = event['ResourceProperties']['Input1']
    log.info("Input value is:" + inputValue)

    #transform the input into a new value as an exmaple operation
    data = inputValue + "Thanks to AWS Lambda"
    responseData = {"Message" : data} #If you need to return data use this json object

    #return the response back to the S3 URL to notify CloudFormation about the code being run
    response=respond(event,context,status,responseData,None)

    #Function returns the response from the S3 URL
    return {
        "Response" :response
    }

Let's look closer at the respond function. When this function executes, it needs to send certain items back to CloudFormation so that the stack knows whether it worked or not. Specifically, it must return a response of SUCCESS or FAILED to a pre-signed URL. The response object includes the following fields:

  • Status (Required)
  • Reason (Required if FAILED)
  • PhysicalResourceId (Required)
  • StackId (Required)
  • RequestId (Required)
  • LogicalResourceId (Required)
  • NoEcho
  • Data

The first part of this function builds the JSON object so that we can send it back to CloudFormation. We then convert it to a string and log the data for later reference. We set our responseUrl, which is passed to us from CloudFormation in the event parameter. After that we set the headers and then try to do a PUT REST call with our return data. To do all of this, we had to import certain modules for our function, which are seen in the full script, but none of these need to be provided in your zip file. It should be noted that this can also be done if you add your Lambda function inline within your CFn template. If you use that method, there is a "cfn-response" module that can be imported, which eliminates the need to use the requests module.

def respond(event, context, responseStatus, responseData, physicalResourceId):
    #Build response payload required by CloudFormation
    responseBody = {}
    responseBody['Status'] = responseStatus
    responseBody['Reason'] = 'Details in: ' + context.log_stream_name
    responseBody['PhysicalResourceId'] = context.log_stream_name
    responseBody['StackId'] = event['StackId']
    responseBody['RequestId'] = event['RequestId']
    responseBody['LogicalResourceId'] = event['LogicalResourceId']
    responseBody['Data'] = responseData

    #Convert json object to string and log it
    json_responseBody = json.dumps(responseBody)
    log.info("Response body: " + str(json_responseBody))

    #Set response URL
    responseUrl = event['ResponseURL']

    #Set headers for preparation for a PUT
    headers = {
    'content-type' : '',
    'content-length' : str(len(json_responseBody))
    }

    #Return the response to the signed S3 URL
    try:
        response = requests.put(responseUrl,
        data=json_responseBody,
        headers=headers)
        log.info("Status code: " + str(response.reason))
        status="SUCCESS"
        return status
    #Defind what happens if the PUT operation fails
    except Exception as e:
        log.error("send(..) failed executing requests.put(..): " + str(e))
        status="FAILED"
        return status

 

See It In Action

Just so we can show some screenshots of the process, here is the input for my CloudFormation template as I’m deploying it through the AWS Console. See that I’ve got an input message of “Test Message” and I’m specifying information about my Lambda Function’s location.
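If you'd rather launch this one from the command line instead of the console, a rough equivalent is below. The stack and template file names are placeholders of my own; the Message parameter and the CAPABILITY_IAM flag (needed because the template creates an IAM role) come from the template shown earlier:

aws cloudformation create-stack \
  --stack-name HelloWorldCustomResource \
  --template-body file://helloworld-cfn.yml \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=Message,ParameterValue="Test Message"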

 

You can also see my Lambda function neatly zipped up in my S3 bucket below.

Once the Lambda function has been deployed, we can see it in the Lambda Functions console.

If we look at the CloudWatch Logs for our function, we can see the data being returned to CloudFormation.

Lastly, we can see that the CloudFormation stack we deployed has finished creating, and in the Outputs tab we can see the message that was returned to the stack. This output could be used by other stacks or be purely informational.

Summary

Custom Resources might not be necessary very often, but they can let you do virtually anything you want within AWS. Maybe they execute a Lambda function to gather data, or maybe they trigger a Step Function that has tons of logic built into it to do something else magical. The world is your oyster now, what will you build with your new knowledge?

Add AWS Web Application Firewall to Protect your Apps
https://theithollow.com/2018/08/20/add-aws-web-application-firewall-to-protect-your-apps/ (Mon, 20 Aug 2018)

Some things change when you move to the cloud, but other things are very much the same, like protecting your resources from outside threats. There are always no-gooders out there trying to steal data or cause mayhem like in those Allstate commercials. Our first defense should be well-written applications, requiring authentication, and so on, and with AWS we make sure we're setting up security groups to limit access to those resources. How about an extra level of protection from a web application firewall? AWS WAF allows us to add some extra protections at the edge to protect us from those bad guys/girls.

Background on WAF

The AWS Web Application Firewall (WAF) allows us to create custom rules to protect us from things like cross-site scripting, SQL injection, or just traffic from certain geographies. If your site isn't ready for GDPR, maybe you block Europe from accessing your site altogether. Of course, WAF can also do things like block specific IP addresses that have been identified as bots, but we expect all firewalls to be able to do that. WAF is billed at $5 per web ACL per month, plus $1 per rule per ACL per month for the configuration. There are additional usage charges of $0.60 per million web requests.

The AWS WAF can be used with an AWS Application Load Balancer or a CloudFront distribution. If your application is hosted on-prem, you could still leverage the AWS WAF by integrating a CloudFront distribution with your application.

There are several parts to deploying a WAF for your application.

  • Conditions – You'll build a condition to identify the type of traffic or web call that is being made. This could be a source IP address, a regular expression, a SQL filter, etc. The job here is to identify the types of requests that an action will be taken on.
  • Rules – Rules identify whether you plan to allow or block traffic that is identified by a condition, or a catch-all. For example, a rule might block IP addresses identified by Condition 1, block traffic from a specific geography identified by Condition 2, and allow any other traffic by default.
  • Web ACL – A group of rules can be added to a Web ACL, and the Web ACL is attached to a resource such as an Application Load Balancer or CloudFront distribution.

Setup Through the Console

The examples below will use a very basic website behind an AWS application load balancer through the AWS console. To begin, navigate to the AWS WAF and Shield services. A familiar getting started screen will show up where you can add your WAF by clicking the “Go to AWS WAF” button.

When the WAF screen opens, click the “Configure web ACL” button which will start the process of walking us through creating conditions and rules as well as the Web ACL.

The first screen gives you an idea of what will be created and how you might set it up. This screen is informational, so read it and, when you're ready, click the "Next" button.

Let's create a Web ACL. I've named mine HollowACL, and a CloudWatch metric will be created as well that shows the statistics for this ACL in the CloudWatch console. Note: it may be useful to keep these names the same for tracking purposes.

Select the region that this will be available in. If you’re using CloudFront, the region should be “Global”, if you’re using an ALB, select the region your ALB is located. After you select the region, you should be able to select the ALB to associate the WAF ACL with and then click the “Next” button.

Now it’s time to create the conditions. I’m keeping this simple and will geo-block requests coming from the United States for giggles and grins. Under the Geo match conditions type click the “Create condition” button to create a new condition. Depending on your own requirements, you may have to choose a different condition type which ultimately would ask for different things as part of the rule.

Give the condition a name and again select a region. Since I selected a geo match condition I’ll need to identify which country to block. When done, be sure to click the “Add location” button to add it to the condition.

Now that our condition(s) are created, let's move on to rules. Click the "Create rule" button.

When you create the rule, give it a name and again a CloudWatch metric so we can review the activity later. Select either a "regular" or "rate-based" rule depending on whether this should always be active. Note: rate-based rules are good for brute-force attacks, where the first couple of attempts are allowed and then a block is triggered after too many attempts.

Under the conditions, select "does" or "does not" for a matching condition, then the type of condition (in this case it's a geo match condition), and then which condition of that type you're matching. Add further conditions if needed.

 

We’re taken back to the web ACL screen where we will select whether traffic that matches that rule should be allowed, blocked or counted. Counted is used to monitor traffic, but not take actions on it. You should also select a default action of allow or deny. Click the “Review and create” button.

Review the settings and then click the “Confirm and create” button.

 

The Results

First things first, did it work? Let's try a request to access the web application from a US location (my desktop). We'll notice that we get a 403 error, meaning that we found a live service but were denied access.

If we look back at the WAF Console, we can select our ACL and see a graph of the metrics we’re looking for. We can also see some samples of the requests that match the rule.

 

By looking at the CloudWatch portal, we can see even more details and we can create alarms (and subsequently take action on those alarms) if we see fit to do so.

 

WAF Through Code

As with most things AWS, you can deploy your WAF conditions, rules, and ACLs through CloudFormation. Below is an example of a simple block-IP rule deployed through CloudFormation. The code below should get you started, but it only includes a sample load balancer, an ACL, an IPSet, a rule, and an association. Be aware that if you're working with rules for a load balancer, they are denoted by a type of "AWS::WAFRegional::SOMETHING", whereas WAF objects for CloudFront are denoted by a type of "AWS::WAF::SOMETHING".

"Resources": {

      "HollowWebLB1": {
          "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
          "Properties" : {
              "Name" : "HollowWebLB1",
              "Scheme" : "internet-facing",
              "SecurityGroups" : [{"Ref": "InstanceSecurityGroups"}],
              "Subnets" : [{"Ref":"Web1Subnet"}, {"Ref":"Web2Subnet"}],
              "Type" : "application"
          }
      },

      "HollowACL": {
        "Type": "AWS::WAFRegional::WebACL",
        "Properties": {
          "Name": "HollowACL",
          "DefaultAction": {
            "Type": "ALLOW"
          },
          "MetricName" : "HollowACL",
          "Rules": [
            {
              "Action" : {
                "Type" : "BLOCK"
              },
              "Priority" : 1,
              "RuleId" : { "Ref": "HollowRule" }
            }
          ]
        }
      },

      "WAFBlacklistSet": {
        "Type": "AWS::WAFRegional::IPSet",
        "Properties": {
          "Name": {
            "Fn::Join": [" - ", [{
              "Ref": "AWS::StackName"
            }, "Blacklist Set"]]
          },
          "IPSetDescriptors": [
            {
              "Type": "IPV4",
              "Value" : { "Ref" : "MyIPSetBlacklist" }
            }
          ]
        }
      },

      "HollowRule": {
        "Type" : "AWS::WAFRegional::Rule",
        "Properties": {
          "Name" : "HollowRule",
          "MetricName" : "MyIPRule",
          "Predicates" : [
            {
              "DataId" : { "Ref" : "WAFBlacklistSet" },
              "Negated" : false,
              "Type" : "IPMatch"
            }
          ]
        }
      },

      "ACLAssociation": {
        "Type": "AWS::WAFRegional::WebACLAssociation",
        "Properties": {
          "ResourceArn": {"Ref": "HollowWebLB1"},
          "WebACLId": {"Ref" : "HollowACL"}
        }
      }



    }
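Once the stack is deployed, you can sanity-check the result with the WAF Regional CLI. The web ACL ID below is a placeholder you'd copy from the list output, and the region should match the one your ALB lives in:

# List regional web ACLs, then confirm which resources the ACL is associated with
aws waf-regional list-web-acls --region us-east-1
aws waf-regional list-resources-for-web-acl --web-acl-id a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 --region us-east-1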

Summary

The AWS WAF product should likely be part of your perimeter security strategy somehow. Applications built for the cloud should be behind a load balancer and in different AZs for availability purposes, and if you're using native AWS services, that probably means an ALB. Why not add some additional protection by adding a WAF as well? If you aren't sure how to create the rules you need, check out the AWS Marketplace, where there are pre-defined rules for many applications. Try them yourself and see what you come up with.

Using AWS CodeDeploy to Push New Versions of your Application
https://theithollow.com/2018/08/06/using-aws-codedeploy-to-push-new-versions-of-your-application/ (Mon, 06 Aug 2018)

Getting new code onto our servers can be done in a myriad of ways these days. Configuration management tools can pull down new code, pipelines can run scripts across our fleets, or we could run around with a USB stick for the rest of our lives. With container based apps, serverless functions, and immutable infrastructure, we’ve changed this conversation quite a bit as well. But what about a plain old server that needs a new version of code deployed on it? AWS CodeDeploy can help us to manage our software versions and rollbacks so that we have a consistent method to update our apps across multiple instances. This post will demonstrate how to get started with AWS CodeDeploy so that you can manage the deployment of new versions of your apps.

Setup IAM Roles

Before we start, I'll assume that you've got a user account with administrator permissions so that you can deploy the necessary roles, servers, and tools. We'll begin by setting up some permissions for CodeDeploy. First, we need to create a service role for CodeDeploy so that it can read tags applied to instances and take some actions for us. Go to the IAM console, click on the Roles tab, and then click "Create role".

Choose AWS service for the trusted entity and then choose CodeDeploy.

After this, select the use case. For this post we’re deploying code on EC2 instances and not Lambda code, so select the “CodeDeploy” use case.

 

On the next screen choose the AWSCodeDeployRole policy.

On the last screen give it a descriptive name.

Now that we have a role, we need to add a new policy. While still in the IAM console, choose the policies tab and then click the “Create policy” button.

In the create policy screen, choose the JSON tab and enter a policy document that allows the assumed role to read from all S3 buckets. We'll be attaching this policy to an instance profile and eventually our servers.
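A rough equivalent of that policy, created through the CLI rather than the console, is shown below. The policy name is a placeholder, and the broad s3:Get*/s3:List* actions are my assumption based on the description above (read access to all buckets):

aws iam create-policy \
  --policy-name CodeDeployS3ReadOnly \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      { "Effect": "Allow", "Action": ["s3:Get*", "s3:List*"], "Resource": "*" }
    ]
  }'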

On the last screen, enter a name for the policy and then click the “Create policy” button.

Let’s create a second role now for this new policy.

This time select the “EC2” service so that our servers can access the S3 buckets.

 

On the attach permissions policies screen, select the policy we just created.

On the last screen give the role a name and click the “Create role” button.

 

Deploy Application Servers

Now that we've got that pesky permissions thing taken care of, it's time to deploy our servers. You can deploy some EC2 instances any way you want (I prefer CloudFormation, personally), but for this post I'll show you the important pieces in the AWS Console. Be sure to deploy your resources with the IAM role we created in the previous section. This instance profile gives the EC2 instance permission to read from the S3 bucket where your code is stored.

As part of your server build, you'll need to install the CodeDeploy agent. You can do this manually, but a better way is to add the script below to the EC2 UserData field during deployment. NOTE: the bucket-name portion of the URL comes from a pre-set list of region-specific buckets published by AWS. See the list here: https://docs.aws.amazon.com/codedeploy/latest/userguide/resource-kit.html#resource-kit-bucket-names

#!/bin/bash
yum -y update
yum install -y ruby
cd /home/ec2-user
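# Replace "bucket-name" below with the region-specific bucket from the resource kit list linked above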
curl -O https://bucket-name.s3.amazonaws.com/latest/install
chmod +x ./install
./install auto

 

Also, when you're deploying your servers, you'll want to add a tag that can be referenced by the CodeDeploy service. This tag will be used to identify which servers should receive the updates that we'll push later. For this example, I'm using a tag named "App" with a value of "HollowWeb". I'm deploying a pair of servers in different AZs behind a load balancer to ensure that I've got excellent availability. Each of the servers will have this tag.
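If your instances are already running, the tag can also be added from the CLI with something like this (the instance IDs are placeholders):

aws ec2 create-tags \
  --resources i-0123456789abcdef0 i-0fedcba9876543210 \
  --tags Key=App,Value=HollowWeb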

Once the servers are deployed, you'll want to deploy an app to make sure it's all up and running correctly. NOTE: you could deploy the app for the first time through CodeDeploy if you'd like. I'm purposefully not doing that here; the initial deployment isn't as interesting as an update, so I've omitted it to keep this blog post to a reasonable length.

You can see my application is deployed by hitting the instance from a web browser. (Try not to be too impressed by version 0 here)

You can see from the EC2 console that I've created a target group for my load balancer, and that my two EC2 instances are associated with it and in a healthy state.

 

Create the App in CodeDeploy

Now it’s finally time to get to the meat of this post and talk through CodeDeploy. The first thing we’ll do is create an application within the CodeDeploy console. When you first open the CodeDeploy console from the AWS portal, you’ll probably see the familiar getting started page. Click that get started button and let’s get down to business.

You can do a sample deployment if you want to, but that’d hide some of the good bits, so we’ll choose a custom deployment instead. Click the “Skip Walkthrough” button.

Give your application a name that you’ll recognize. Then in the dropdown, select EC2/On-premises. This tells CodeDeploy that we’ll be updating servers, but we could also use this for Lambda functions if we wished. Then give the deployment group a name. This field will identify the group of servers that are part of the deployment.

Next up, you’ll select your deployment type. I’ve chosen an in-place deployment meaning that my servers will stay put, but my code will be copied on top of the existing server. Blue/green deployments are also available and would redeploy new instances during the deployment.

Next, we configure our environment. I’ve selected the Amazon EC2 instances tab and then entered that key/value pair from earlier in this post that identifies my apps. Remember the “App:HollowWeb” tag from earlier? Once you enter this, the console should show you the instances associated with this tag.

I’ve also checked the box to “Enable load balancing.” This is an optional setting for In-Place upgrades but mandatory for Blue/Green deployments. With this checked, CodeDeploy will block traffic to the instances currently being deployed until they are done updating and then they’ll be re-added to the load balancer.

Now you must select a deployment configuration. This tells CodeDeploy how to update your servers. Out of the box you can have it do:

  • One at a time
  • Half at a time
  • All at once

You can also create your own configuration if you have custom requirements not met by the defaults. For this example, I'm doing one at a time. You'll then need to select the service role that we created earlier in this post. Click the "Create Application" button to move on.
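As a side note, a custom deployment configuration can also be created from the CLI. A hypothetical example that requires at least one healthy host at all times would look like this:

aws deploy create-deployment-config \
  --deployment-config-name HollowWeb.AtLeastOneHealthy \
  --minimum-healthy-hosts type=HOST_COUNT,value=1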

You should get a nice green “Congratulations!” message when you’re done. This message is pretty helpful and shows you the next steps to pushing your application.

Push your Code to S3

OK, now I'm going to push my code to S3 so that I can store it as a ready-to-go package. To do this, I'm opening up my development machine (my Mac laptop) and updating my code. I'm making a few changes to my website, including a new graphic and a new index.html page. Also, within this repo, I'm going to create an appspec.yml file, which is how we tell CodeDeploy what to do with our new code. Take a look at the repo with my files and the appspec.yml file.

On my Mac, I've got a directory with my appspec.yml in the root and two folders, content and scripts. I've placed my images and html files in the content directory, and I've put two bash scripts in the scripts directory. The scripts are very simple and tell the Apache server to start or stop, depending on which script is called; a minimal sketch of them is shown below.
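Here's a minimal sketch of the two hook scripts, assuming Apache (httpd) on Amazon Linux; adjust the service name for your distribution:

# scripts/start_server.sh
#!/bin/bash
service httpd start

# scripts/stop_server.sh
#!/bin/bash
service httpd stop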

Now take a look at the appspec.yml details. This file is broken down into sections. There is a "files" section that describes where the files should be placed on our web servers. For example, you can see that the content/image001.png file from my repo should be placed within the /var/www/html directory on the web server.

version: 0.0
os: linux 
files:
  - source: content/image001.png
    destination: /var/www/html/
  - source: content/index.html
    destination: /var/www/html/
hooks:
  ApplicationStop: 
    - location: scripts/stop_server.sh
      timeout: 30
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 30
      runas: root

Below this, you'll see a "hooks" section. The hooks tell CodeDeploy what to do during each of the lifecycle events that occur during an update. There are quite a few of them, including ApplicationStop, DownloadBundle, BeforeInstall, Install, AfterInstall, ApplicationStart, and ValidateService.

I don’t need to use each of the lifecycle events for this demonstration, so I’m only using ApplicationStop and ApplicationStart. In the appspec.yml file I’ve defined the user who should execute the scripts and the location of the script to run.

TIP: You may find that the very first time you deploy your code, the ApplicationStop script won’t run. This is because the code has never been downloaded before so it can’t run yet. Subsequent runs use the previously downloaded script so if you change this code, it may take one run before it actually works again.

Since our new application looks tip top, it’s time to push it to our S3 bucket in AWS. We’ll run the command shown to us in the console earlier and specify the source location of our files and a destination for our archive file.

aws deploy push \
  --application-name HollowWebApp \
  --description "Version1" \
  --ignore-hidden-files \
  --s3-location s3://mybucketnamehere/AppDeploy1.zip \
  --source .

My exact command is shown below, along with the output it returns. The returned information tells you how to push the deployment to your servers, and I recommend using it. However, in this post we'll push the code from the console instead so we can easily see what happens.
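For reference, the deployment command suggested by that output looks roughly like this; the deployment group name, bucket, and key below are placeholders matching the examples in this post:

aws deploy create-deployment \
  --application-name HollowWebApp \
  --deployment-group-name HollowWebGroup \
  --deployment-config-name CodeDeployDefault.OneAtATime \
  --s3-location bucket=mybucketnamehere,key=AppDeploy1.zip,bundleType=zip \
  --description "Version1"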

Once the code has been successfully pushed we’ll do a quick check to show that the zip file is in our S3 bucket, and it is!

Deploy the New Version

As mentioned, you can deploy your code to your servers using the command returned when you pushed your code to S3. To make it more obvious what's happening, though, let's take a look at the CodeDeploy console. You'll notice that if you open up your application, there is now a "Revision" listed. As you push more versions to your S3 bucket, this list will grow.

We’re ready to deploy, so click the arrow next to your revision to expand the properties. Click the “Deploy revision” button to kick things off.

Most of the information should be filled out for you on the next page, but it gives you a nice opportunity to tweak things before the code gets pushed. I, for example, selected the "Overwrite Files" option so that when I push out a new index.html it will overwrite the existing file rather than fail the deployment because of an error.

As your deployment is kicked off you can watch the progress in the console. To get more information, click the Deployment ID to dig deeper.

When we drill down into the deployment, we can see that one of my servers is “In progress” while the other is pending. Since I’m doing one at a time, only one of the instances will update for now. To see even more information about this specific instance deploy, click the “View events” link.

When we look at the events, we can see each of the lifecycle events that the deployment goes through. I’ve waited for a bit to show you that each event was successful.

When we go back to the deployment screen, we see that one server is done and the next server has started its progression.

When both servers have completed, I check my app again, and I can see that a new version has been deployed. (A slightly better, yet still awful version)

The post Using AWS CodeDeploy to Push New Versions of your Application appeared first on The IT Hollow.

]]>
https://theithollow.com/2018/08/06/using-aws-codedeploy-to-push-new-versions-of-your-application/feed/ 0 9053
How to Setup Amazon EKS with Mac Client https://theithollow.com/2018/07/31/how-to-setup-amazon-eks-with-mac-client/ https://theithollow.com/2018/07/31/how-to-setup-amazon-eks-with-mac-client/#respond Tue, 31 Jul 2018 14:06:02 +0000 https://theithollow.com/?p=9022 We love Kubernetes. It’s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows […]

The post How to Setup Amazon EKS with Mac Client appeared first on The IT Hollow.

]]>
We love Kubernetes. It’s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster for you to deploy your applications.

EKS Environment Setup

To get started, we’ll need to deploy an IAM Role in our AWS account that has permissions to manage Kubernetes clusters on our behalf. Once that’s done, we’ll deploy a new VPC in our account to house our EKS cluster. To speed things up, I’ve created a CloudFormation template to deploy the IAM role for us, and to call the sample Amazon VPC template. You’ll need to fill in the parameters for your environment.

NOTE: Be sure you’re in a region that supports EKS. As of the time of this writing the US regions that can use EKS are us-west-2 (Oregon) and us-east-1 (N. Virginia).

AWSTemplateFormatVersion: 2010-09-09
Description: 'EKS Setup - IAM Roles and Control Plane Cluster'

Metadata:

  "AWS::CloudFormation::Interface":
    ParameterGroups:
      - Label:
          default: VPC
        Parameters:
          - VPCCIDR
      - Label:
          default: Subnets
        Parameters:
          - Subnet01Block
          - Subnet02Block
          - Subnet03Block

Parameters:

  VPCCIDR:
    Type: String
    Description: VPC CIDR Address
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."

  Subnet01Block:
    Type: String
    Description: Subnet01
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."

  Subnet02Block:
    Type: String
    Description: Subnet02
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."

  Subnet03Block:
    Type: String
    Description: Subnet03
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."


Resources:

  EKSRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          -
            Effect: "Allow"
            Principal:
              Service:
                - "eks.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
        - arn:aws:iam::aws:policy/AmazonEKSServicePolicy
      Path: "/"
      RoleName: "EKSRole"

  EKSVPC:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      Parameters:
        VpcBlock: !Ref VPCCIDR
        Subnet01Block: !Ref Subnet01Block
        Subnet02Block: !Ref Subnet02Block
        Subnet03Block: !Ref Subnet03Block
      TemplateURL: https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-vpc-sample.yaml
      TimeoutInMinutes: 10


Outputs:

  EKSRole:
    Value: !Ref EKSRole

  StackRef:
    Value: !Ref EKSVPC

  EKSSecurityGroup:
    Value: !GetAtt EKSVPC.Outputs.SecurityGroups

  EKSVPC:
    Value: !GetAtt EKSVPC.Outputs.VpcId

  EKSSubnets:
    Value: !GetAtt EKSVPC.Outputs.SubnetIds

Use the template above and go to the AWS CloudFormation console within your account. Deploy the template and fill in your VPC addressing for a new VPC and subnets.
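If you'd rather deploy the template from the CLI, something along these lines should work; the stack name, file name, and CIDR values are placeholders, and CAPABILITY_NAMED_IAM is required because the template creates a named IAM role:

aws cloudformation create-stack \
  --stack-name eks-environment \
  --template-body file://eks-setup.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters \
      ParameterKey=VPCCIDR,ParameterValue=10.20.0.0/16 \
      ParameterKey=Subnet01Block,ParameterValue=10.20.1.0/24 \
      ParameterKey=Subnet02Block,ParameterValue=10.20.2.0/24 \
      ParameterKey=Subnet03Block,ParameterValue=10.20.3.0/24

# Once the stack reaches CREATE_COMPLETE, pull the outputs used in the next section
aws cloudformation describe-stacks --stack-name eks-environment \
  --query "Stacks[0].Outputs"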

After you've deployed the template, you'll see two stacks that have been created: one containing the IAM role and a nested stack with the EKS VPC and subnets.

Once the stack has been completed take note of the outputs which will be used for creating the cluster.

 

Create the Amazon EKS Control Cluster

Now that we've got an environment to work with, it's time to deploy the Amazon EKS control cluster. To do this, go to the AWS Console and open the EKS service.

 

When you open the EKS console, you’ll notice that you don’t have any clusters created yet. We’re about to change that. Click the “Create Cluster” button.

Fill out the information about your new cluster. Give it a name and select the version. Next select the IAM Role, VPC, Subnets and Security Groups for your Kubernetes Control Plane cluster. This info can be found in the outputs from your CloudFormation Template used to create the environment.
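The same cluster can be created from the AWS CLI if you prefer. This is only a sketch; the role ARN, subnet IDs, and security group ID come from the CloudFormation outputs and are shown here as placeholders:

aws eks create-cluster \
  --name theithollowk8s \
  --role-arn arn:aws:iam::123456789012:role/EKSRole \
  --resources-vpc-config subnetIds=subnet-aaa111,subnet-bbb222,subnet-ccc333,securityGroupIds=sg-0123456789abcdef0

# Check on provisioning; the cluster is ready when this returns "ACTIVE"
aws eks describe-cluster --name theithollowk8s --query cluster.status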

 

You will see that your cluster is being created. This may take some time, so you can continue with this post to make sure you’ve installed some of the other tools that you’ll need to manage the cluster.

Eventually your cluster will be created and you’ll see a screen like this:

Setup the Tools

You’ll need a few client tools installed in order to manage the Kubernetes cluster on EKS. You’ll need to have the following tools installed:

  • AWS CLI v1.15.32 or higher
  • Kubectl
  • Heptio-authenticator-aws

The instructions below are to install the tools on a Mac OS client.

  • AWS CLI – The easiest way to install the AWS CLI on a Mac is to use homebrew. If you've already got homebrew installed on your Mac, then skip over this step. Otherwise, run the following from a terminal to install homebrew.

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Once homebrew is installed, you can use it to install the aws cli by running:

brew install awscli

After the AWS CLI has been installed, you'll need to configure it with your access keys, secret keys, region, and output format. You can start this process by running aws configure.

  • kubectl – To install the Amazon EKS-vended kubectl binary, download the executable for Mac from your terminal:

curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/darwin/amd64/kubectl
chmod +x ./kubectl
cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile

We can make sure kubectl is working properly by checking the version from the terminal:

kubectl version -o yaml

The result should look something like this:

  • Heptio-authenticator-aws – The Heptio Authenticator is used to integrate your AWS IAM settings with your Kubernetes RBAC permissions. To install this, run the following from your terminal

curl -o heptio-authenticator-aws https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/darwin/amd64/heptio-authenticator-aws
chmod +x ./heptio-authenticator-aws
cp ./heptio-authenticator-aws $HOME/bin/heptio-authenticator-aws && export PATH=$HOME/bin:$PATH
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile

Configure kubectl for EKS

Now that the tools are set up and the control cluster is deployed, we need to configure our kubeconfig file to use with EKS. The first thing to do is determine the cluster endpoint. You can do this by clicking on your cluster in the GUI and noting the server endpoint and the certificate information.

Note: you could also find this through the aws cli by:

aws eks describe-cluster --name [clustername] --query cluster.endpoint

aws eks describe-cluster --name [clustername] --query cluster.certificateAuthority.data

Next, create a new directory called .kube if it doesn’t already exist. Once that’s done you’ll need to create a new file in that directory named “config-“[clustername]. So in my case I’ll create a file called “config-theithollowk8s”. Copy and paste the text below into the config file and modify the endpoint-url, base64-encoded-ca-cert and cluster name fields with your own information we’ve collected above. Also, you may un-comment (remove the “#” signs) the settings for an IAM role, and AWS Profile if you’re using named profiles for your AWS CLI configuration. Those fields are optional.

apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        # - "-r"
        # - "<role-arn>"
      # env:
        # - name: AWS_PROFILE
        #   value: "<aws-profile>"

After you've created the config file, you'll want to add an environment variable so that kubectl will know where to find the cluster configuration. On a Mac this can be done by running the command below from your terminal. Substitute your own file path for the config file that you created.

export KUBECONFIG=$KUBECONFIG:~/.kube/config-[clustername]

If things are working correctly, we can run kubectl config get-contexts so we can see the AWS authentication is working. I’ve also run kubectl get svc to show that we can read from the EKS cluster.
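For reference, those two verification commands are simply:

kubectl config get-contexts
kubectl get svc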

Deploy EKS Worker Nodes

Your control cluster is up and running, and we've got our clients connected through the Heptio authenticator. Now it's time to deploy some worker nodes for our containers to run on. To do this, go back to your CloudFormation console and deploy the following CFn template that is provided by AWS.

https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-nodegroup.yaml

 

Fill out the deployment information. You’ll need to enter a stack name, number of min/max nodes to scale within, an SSH Key to use, the VPC we deployed earlier, and the subnets. You will also need to enter a ClusterName which must be exactly the same as our control plane cluster we deployed earlier. Also, the NodeImageID must be one of the following, depending on your region:

  • US-East-1 (N. Virginia) – ami-dea4d5a1
  • US-West-2 (Oregon) – ami-73a6e20b

Deploy your CloudFormation template and wait for it to complete. Once the stack completes, you’ll need to look at the outputs to get the NodeInstanceRole.

The last step is to ensure that our nodes have permissions and can join the cluster. Use the file format below and save it as aws-auth-cm.yaml.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Replace ONLY the <ARN of instance role (not instance profile)> section with the NodeInstanceRole we got from the outputs of our CloudFormation Stack. Save the file and then apply the configmap to your EKS cluster by running:

kubectl apply -f aws-auth-cm.yaml

After we run the command, our cluster should be fully working. We can run "kubectl get nodes" to see the worker nodes listed in the cluster.

NOTE: the status of the nodes will initially show "NotReady". Re-running the command or using the --watch switch will let you see when the nodes are fully provisioned.
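Concretely, the check looks like this:

kubectl get nodes --watch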

Deploy Your Apps

Congratulations, you've built your Kubernetes cluster on Amazon EKS. It's time to deploy your apps, which is outside the scope of this blog post. If you want to try out an app to prove that it's working, try one of the deployments from the kubernetes.io tutorials, such as their guestbook app.

kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml

From here on out, it’s up to you. Start deploying your replication controllers, pods, services, etc as you’d like.

 

The post How to Setup Amazon EKS with Mac Client appeared first on The IT Hollow.

]]>
https://theithollow.com/2018/07/31/how-to-setup-amazon-eks-with-mac-client/feed/ 0 9022
How to Setup Amazon EKS with Windows Client https://theithollow.com/2018/07/30/how-to-setup-amazon-eks-with-windows-client/ https://theithollow.com/2018/07/30/how-to-setup-amazon-eks-with-windows-client/#respond Mon, 30 Jul 2018 16:05:09 +0000 https://theithollow.com/?p=8980 We love Kubernetes. It’s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows […]

The post How to Setup Amazon EKS with Windows Client appeared first on The IT Hollow.

]]>
We love Kubernetes. It’s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster for you to deploy your applications.

EKS Environment Setup

To get started, we’ll need to deploy an IAM Role in our AWS account that has permissions to manage Kubernetes clusters on our behalf. Once that’s done, we’ll deploy a new VPC in our account to house our EKS cluster. To speed things up, I’ve created a CloudFormation template to deploy the IAM role for us, and to call the sample Amazon VPC template to deploy a VPC. You’ll need to fill in the parameters for your environment.

NOTE: Be sure that you’re in a region that supports EKS. As of the time of this writing the US regions that can use EKS are us-west-2 (Oregon) and us-east-1 (N. Virginia).

AWSTemplateFormatVersion: 2010-09-09
Description: 'EKS Setup - IAM Roles and Control Plane Cluster'

Metadata:

  "AWS::CloudFormation::Interface":
    ParameterGroups:
      - Label:
          default: VPC
        Parameters:
          - VPCCIDR
      - Label:
          default: Subnets
        Parameters:
          - Subnet01Block
          - Subnet02Block
          - Subnet03Block

Parameters:

  VPCCIDR:
    Type: String
    Description: VPC CIDR Address
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."

  Subnet01Block:
    Type: String
    Description: Subnet01
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."

  Subnet02Block:
    Type: String
    Description: Subnet02
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."

  Subnet03Block:
    Type: String
    Description: Subnet03
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: "Must be a valid IP CIDR range of the form x.x.x.x/x."


Resources:

  EKSRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          -
            Effect: "Allow"
            Principal:
              Service:
                - "eks.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
        - arn:aws:iam::aws:policy/AmazonEKSServicePolicy
      Path: "/"
      RoleName: "EKSRole"

  EKSVPC:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      Parameters:
        VpcBlock: !Ref VPCCIDR
        Subnet01Block: !Ref Subnet01Block
        Subnet02Block: !Ref Subnet02Block
        Subnet03Block: !Ref Subnet03Block
      TemplateURL: https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-vpc-sample.yaml
      TimeoutInMinutes: 10


Outputs:

  EKSRole:
    Value: !Ref EKSRole

  StackRef:
    Value: !Ref EKSVPC

  EKSSecurityGroup:
    Value: !GetAtt EKSVPC.Outputs.SecurityGroups

  EKSVPC:
    Value: !GetAtt EKSVPC.Outputs.VpcId

  EKSSubnets:
    Value: !GetAtt EKSVPC.Outputs.SubnetIds

Use the template above and go to the AWS CloudFormation console within your account. Deploy the template and fill in your VPC addressing for a new VPC and subnets.

After you've deployed the template, you'll see two stacks that have been created: one containing the IAM role and a nested stack with the EKS VPC and subnets.

Once the stack has been completed take note of the outputs which will be used for creating the cluster.

 

Create the Amazon EKS Control Cluster

Now that we've got an environment to work with, it's time to deploy the Amazon EKS control cluster. To do this, go to the AWS Console and open the EKS service.

 

When you open the EKS console, you’ll notice that you don’t have any clusters created yet. We’re about to change that. Click the “Create Cluster” button.

Fill out the information about your new cluster. Give it a name and select the version. Next select the IAM Role, VPC, Subnets and Security Groups for your Kubernetes Control Plane cluster. This info can be found in the outputs from your CloudFormation Template used to create the environment.

 

You will see that your cluster is being created. This may take some time, so you can continue with this post to make sure you’ve installed some of the other tools that you’ll need to manage the cluster.

Eventually your cluster will be created and you’ll see a screen like this:

Setup the Tools

You’ll need a few client tools installed in order to manage the Kubernetes cluster on EKS. You’ll need to have the following tools installed:

  • AWS CLI v1.15.32 or higher
  • Kubectl
  • Heptio-authenticator-aws

The instructions below are to install the tools on a Windows client.

  • AWS CLI – To install the AWS CLI, download and run the installer for your version of Windows (64-bit or 32-bit). Once you've completed running the installer, you'll need to configure your client with the appropriate settings such as region, access keys, secret keys and an output format. This can be accomplished by opening up a cmd prompt and running aws configure. Enter access keys and secret keys with permissions to your AWS resources.

  • kubectl – Download the Amazon EKS-vended kubectl binary for Windows, place it in a directory on your PATH, and then verify that it's working by checking the version from a command prompt:

kubectl version -o yaml

The result should look something like this:

  • Heptio-authenticator-aws – The Heptio Authenticator is used to integrate your AWS IAM settings with your Kubernetes RBAC permissions. Download the Windows binary, add it to your PATH, and then confirm that it runs:

heptio-authenticator-aws --help

The result of that command should return info about your options.

 

Configure kubectl for EKS

Now that the tools are set up and the control cluster is deployed, we need to configure our kubeconfig file to use with EKS. The first thing to do is determine the cluster endpoint. You can do this by clicking on your cluster in the GUI and noting the server endpoint and the certificate information.

Note: you could also find this through the aws cli by:

aws eks describe-cluster --name [clustername] --query cluster.endpoint

aws eks describe-cluster --name [clustername] --query cluster.certificateAuthority.data

Next, create a new directory called .kube, if it doesn’t already exist. Once that’s done you’ll need to create a new file in that directory named “config-“[clustername]. So in my case I’ll create a file called “config-theithollowk8s”. Copy and paste the text below into the config file and modify the endpoint-url, base64-encoded-ca-cert and cluster name fields with your own information we’ve collected above. Also, you may un-comment (remove the “#” signs) the settings for an IAM role, and AWS Profile if you’re using named profiles for your AWS CLI configuration. Those fields are optional.

apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        # - "-r"
        # - "<role-arn>"
      # env:
        # - name: AWS_PROFILE
        #   value: "<aws-profile>"

After you've created the config file, you'll want to add an environment variable so that kubectl will know where to find the cluster configuration. On Windows this can be done by running the command below. Substitute your own file path for the config file that you created.

set KUBECONFIG=C:\FilePATH\.kube\config-theithollowk8s.txt

If things are working correctly, we can run kubectl config get-contexts so we can see the AWS authentication is working. I’ve also run kubectl get svc to show that we can read from the EKS cluster.

Deploy EKS Worker Nodes

Our control cluster is up and running, and we've got our clients connected through the Heptio authenticator. Now it's time to deploy some worker nodes for our containers to run on. To do this, go back to your CloudFormation console and deploy the following CFn template that is provided by AWS.

https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-nodegroup.yaml

 

Fill out the deployment information. You’ll need to enter a stack name, number of min/max nodes to scale within, an SSH Key to use, the VPC we deployed earlier, and the subnets. You will also need to enter a ClusterName which must be exactly the same as our control plane cluster we deployed earlier. Lastly, the NodeImageID must be one of the following, depending on your region:

  • US-East-1 (N. Virginia) – ami-dea4d5a1
  • US-West-2 (Oregon) – ami-73a6e20b

Deploy your CloudFormation template and wait for it to complete. Once the stack completes, you’ll need to look at the outputs to get the NodeInstanceRole.

The last step is to ensure that our nodes have permissions and can join the cluster. Use the file format below and save it as aws-auth-cm.yaml.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Replace ONLY the <ARN of instance role (not instance profile)> section with the NodeInstanceRole we got from the outputs of our CloudFormation stack. Save the file and then apply the configmap to your EKS cluster by running:

kubectl apply -f aws-auth-cm.yaml

After we run the command, our cluster should be fully working. We can run "kubectl get nodes" to see the worker nodes listed in the cluster.

NOTE: the status of the nodes will initially show "NotReady". Re-running the command or using the --watch switch will let you see when the nodes are fully provisioned.

Deploy Your Apps

Congratulations, you've built your Kubernetes cluster on Amazon EKS. It's time to deploy your apps, which is outside the scope of this blog post. If you want to try out an app to prove that it's working, try one of the deployments from the kubernetes.io tutorials, such as their guestbook app.

kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml

From here on out, it’s up to you. Start deploying your replication controllers, pods, services, etc as you’d like.

 

The post How to Setup Amazon EKS with Windows Client appeared first on The IT Hollow.

]]>
https://theithollow.com/2018/07/30/how-to-setup-amazon-eks-with-windows-client/feed/ 0 8980
Easy Snapshot Automation with Amazon Data Lifecycle Manager https://theithollow.com/2018/07/23/easy-snapshot-automation-with-amazon-data-lifecycle-manager/ https://theithollow.com/2018/07/23/easy-snapshot-automation-with-amazon-data-lifecycle-manager/#respond Mon, 23 Jul 2018 14:05:53 +0000 https://theithollow.com/?p=8964 Amazon has announced a new service that will help customers manage their EBS volume snapshots in a very simple manner. The Data Lifecycle Manager service lets you setup a schedule to snapshot any of your EBS volumes during a specified time window. In the past, AWS customers might need to come up with their own […]

The post Easy Snapshot Automation with Amazon Data Lifecycle Manager appeared first on The IT Hollow.

]]>
Amazon has announced a new service that will help customers manage their EBS volume snapshots in a very simple manner. The Data Lifecycle Manager service lets you setup a schedule to snapshot any of your EBS volumes during a specified time window.

In the past, AWS customers might need to come up with their own solution for snapshots or backups. Some apps moving to the cloud might not even need backups based on their deployment method and architectures. For everything else, we assume we’ll need to at least snapshot the EBS volumes that the EC2 instances are running on. Prior to the Data Lifecycle Manager, this could be accomplished through some fairly simple Lambda functions to snapshot volumes on a schedule. Now with the new service, there is a solution right in the EC2 console.

Using the Data Lifecycle Manager

To begin using the new service, open the EC2 console in your AWS account. If this is the first time using it, you’ll click the “Create Snapshot Lifecycle Policy” button to get started.

We'll create a new policy which defines what volumes should be snapshotted and when to take those snapshots. First, give the policy a description so you'll be able to recognize it later. The next piece is to identify which volumes should be snapshotted. This is done using a tag on the volume (not on the EC2 instance it's attached to). I've used a method that snapshots EBS volumes with a tag key of "snap" and a tag value of "true".

Next, we'll need to define the schedule on which the volumes will be snapshotted. Give the schedule a name and then specify how often the snapshots will be taken; in this example, I'm taking a snapshot every 12 hours. You'll also set the time at which the first snapshot should be initiated. Be sure to note that this time is in UTC, so do your conversion before you start. After this, you'll need to specify how many snapshots to keep. It's a bad idea to start taking lots of snapshots and never deleting them, especially in the cloud, where you can keep as many as you'd like if you can stomach the bill.

Note: The snapshot start time is a general start time. The snapshots will be taken sometime within the hour you specify, but don’t expect that it will be immediately at this time.

 

You'll also have the option to tag your snapshots. It probably makes sense to tag them somehow so that you know which ones were taken manually and which were automated through the Data Lifecycle Manager. I've tagged mine with a key named LifeCycleManager and a value of true.

 

Lastly, you’ll need a role created that has permissions to create and delete these snapshots. Luckily, there is a “Default role” option in the console that will create this for you. Otherwise you can specify the role yourself.

After you create the policy, you'll see it listed in your console. It's also worth noting that you can have multiple policies affecting the same volumes. For instance, if you wanted to take snapshots every 6 hours, you could create a pair of policies with start times offset by six hours, since the most frequent schedule currently available is every twelve hours.
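For reference, roughly the same policy can be created from the CLI with aws dlm create-lifecycle-policy. This is only a sketch; the execution role ARN (shown here as the default role the console can create for you), account ID, tag values, start time, and retention count are placeholders you'd swap for your own:

# Rough CLI equivalent of the console policy above; all identifiers are placeholders.
aws dlm create-lifecycle-policy \
  --description "Snapshot volumes tagged snap=true every 12 hours" \
  --state ENABLED \
  --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
  --policy-details '{
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "snap", "Value": "true"}],
    "Schedules": [{
      "Name": "Every12Hours",
      "CreateRule": {"Interval": 12, "IntervalUnit": "HOURS", "Times": ["03:00"]},
      "RetainRule": {"Count": 5},
      "TagsToAdd": [{"Key": "LifeCycleManager", "Value": "true"}]
    }]
  }'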

The Results

If you wait a bit, your snapshots should be taken, and you'll notice that any of your EBS volumes that were properly tagged have been snapshotted. You can also see in the screenshot below that the snapshot has the tag I specified, along with a few others that identify which policy created the snapshot.

Summary

The Data Lifecycle Manager service from AWS might not seem like a big deal, but it's a lot nicer than having to write your own Lambda functions to snapshot and delete the snapshots on a schedule. Don't worry though, you might still get some use out of your old Lambda code if you want to customize your snapshot methods or do something like create an AMI. If you're just looking for the basics, try out the Data Lifecycle Manager. Right now you can test this out yourself in the N. Virginia, Oregon, and Ireland regions through the AWS console or through the CLI. I expect this will be available in other regions and through CloudFormation shortly as well.

The post Easy Snapshot Automation with Amazon Data Lifecycle Manager appeared first on The IT Hollow.

]]>
https://theithollow.com/2018/07/23/easy-snapshot-automation-with-amazon-data-lifecycle-manager/feed/ 0 8964