The IT Hollow (https://theithollow.com)

AWS Reserved Instance Considerations
Mon, 19 Feb 2018
https://theithollow.com/2018/02/19/aws-reserved-instance-considerations/
Reserved Instances are often used to reduce the cost of Amazon EC2 instances below on-demand pricing. If you're not familiar with Reserved Instances, you're missing out. Reserved Instances, or RIs, are a billing construct used in conjunction with Amazon EC2 instances (virtual machines). The default on the AWS platform is on-demand pricing, in which you're billed by the hour or second with no commitment. Basically, when you decide to terminate an instance, you stop paying for it.

Reserved Instances come into play when you have EC2 instances that run all the time and can't be terminated each night to save money. Reserved Instances let you pay a reduced hourly price for your instances, but you commit to paying for a full one-year or three-year term, depending on the RI you purchase. The price reductions can be significant (up to roughly 75%), but remember that you're committing for a fixed length of time: if you no longer need those instances, you're still paying for the RI.
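As a rough illustration of that trade-off, here is a short Python sketch that computes the utilization at which an RI starts paying for itself. The prices are made-up examples, not AWS quotes:

```python
# Hypothetical break-even sketch: at what utilization does an RI beat on-demand?
# Prices below are illustrative examples, not actual AWS rates.

HOURS_PER_YEAR = 8760

def breakeven_utilization(on_demand_hourly: float, ri_effective_hourly: float) -> float:
    """Fraction of the year an instance must run for the RI to be cheaper.

    The RI is paid for the whole term regardless of usage, so its yearly
    cost is fixed; the on-demand cost scales with hours actually run.
    """
    ri_yearly_cost = ri_effective_hourly * HOURS_PER_YEAR
    return ri_yearly_cost / (on_demand_hourly * HOURS_PER_YEAR)

# Example: on-demand at $0.10/hr vs. an RI with an effective rate of $0.06/hr.
# The RI wins once the instance runs more than about 60% of the year.
print(breakeven_utilization(0.10, 0.06))  # ~0.6
```

If your instance runs less than that fraction of the year, on-demand (terminating it when idle) is the cheaper option.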

Be Aware of How RIs Work

There are a few things you need to know about RIs before you consider using them. Reserved Instances are purchased with a set of attributes: class, instance type, platform, scope, tenancy, and term.

  • Class – Convertible or Standard
  • Instance type – The instance family and instance size, such as an m4.large.
  • Platform – The operating system
  • Scope – RIs can be purchased for a region or for a specific Availability Zone.
  • Tenancy – Shared tenancy (default) or instances running on dedicated hardware
  • Term – A one-year or three-year commitment

Why do we care about these attributes? Because you can't transfer a Reserved Instance between instances with differing attributes. A Reserved Instance is applied to any instance that matches all of its attributes. If you have a single RI that matches two different instances, it will be applied to one of them; if you terminate that instance, the RI is applied to the second instance automatically. However, if you were to terminate an instance in order to change its size, the instance type will no longer match and the RI won't be applied unless another instance with the same attributes exists. There is some help here, though: if you purchase a Convertible RI, you can modify the RI's attributes through the API or console, but note that Convertible RIs provide a smaller discount in exchange for this flexibility.
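The matching behavior described above can be sketched in a few lines of Python. This is a simplified model for illustration only; the real billing engine also handles details like instance-size flexibility within a family, which are ignored here:

```python
# Simplified model of RI attribute matching (illustration only).
# Real AWS billing has additional rules (e.g., size normalization for
# regional Linux RIs) that are deliberately left out of this sketch.

RI_ATTRS = ("instance_type", "platform", "scope", "tenancy")

def ri_applies(ri: dict, instance: dict) -> bool:
    """An RI discounts an instance only if every attribute matches."""
    return all(ri[attr] == instance[attr] for attr in RI_ATTRS)

ri = {"instance_type": "m4.large", "platform": "Linux",
      "scope": "us-east-1", "tenancy": "shared"}

web = {"instance_type": "m4.large", "platform": "Linux",
       "scope": "us-east-1", "tenancy": "shared"}
resized = {"instance_type": "m4.xlarge", "platform": "Linux",
           "scope": "us-east-1", "tenancy": "shared"}

print(ri_applies(ri, web))      # True: all attributes match
print(ri_applies(ri, resized))  # False: a resized instance no longer matches
```

This is why resizing an instance silently drops the discount: the RI keeps looking for an exact attribute match.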

RIs can also provide insurance on capacity. With on-demand instances there is no guarantee that capacity will exist within a region or AZ. For example, if an entire region went down and you needed to spin up your instances in another region, you might be fighting with other customers for the limited capacity there. An RI scoped to a specific Availability Zone includes a capacity reservation, so you're guaranteed to have resources available to spin up your instances.

Considerations

Now that we understand the basics behind RIs, let’s explore some considerations that you should take into account before deciding to purchase an AWS Reserved Instance.

Find the Steady State

First and foremost, if you're just starting to migrate servers to AWS or deploying new workloads, don't purchase an RI right away. You'll likely find that the instance size you picked needs to be scaled up or down. While this is easy to do within AWS, an RI is tied to an instance type, so changing the family or size will have pricing consequences for you. Make sure you've run your workloads for a while and have a good understanding of their steady-state capacity needs before purchasing your RIs.

 

Use Reserved Instance Sharing in Multi-Account Environments

If you have multiple accounts set up under a root-level master billing account, consider whether to purchase RIs in the root account or in one of the subordinate accounts. If you purchase the RI at the root account level, it can be shared across any of the other accounts within the organization. This is incredibly useful for keeping the pool of resources large, so RIs can be allocated to any instance with matching attributes in any account. However, in some situations those individual accounts are owned by their own business units and have their own budgets. In those cases you may want to turn off Reserved Instance sharing so you can specify which account gets the RI and its benefits. Be aware, though, that if that account doesn't use the RI, the other accounts can't use it either.

Determine the Criticality of your Workload

If the workload you're running in AWS is mission critical, you may want to consider an RI to ensure that, no matter what, AWS will have resources available for you to spin that instance up if an AZ or a region fails. Having snapshots, backups, AMIs, etc. replicated to other regions, and infrastructure as code ready to redeploy applications, is great. But don't forget that AWS has finite resources as well, even if you don't see the cloud this way. Don't let an "Insufficient Capacity" error prevent your perfectly architected business continuity strategy from working correctly.

Don’t Purchase an RI for Every Instance

The public cloud provides a lot of options for elasticity. With auto-scaling groups and load balancers we can automatically scale out our applications based on demand and then scale them back in when the demand subsides. When purchasing an RI, consider buying for the steady state and using on-demand (or Spot pricing) for any additional capacity that might only be temporary.

Use One-Year RIs

Reserved Instances come in one-year and three-year options. Three-year RIs give you a bigger cost reduction, but consider that AWS is constantly releasing new instance sizes and families. You might want to take advantage of these, and a three-year RI makes that more difficult. Consider using one-year RIs to give yourself the flexibility to re-evaluate your sizing each year.

If you Goof Up

So what do you do if you accidentally purchase an RI for the wrong type? If you contact AWS Support they might let you return it or swap it for the right type, but you can't count on that. From what I've been told, this is usually granted one time; expect to own any future mistakes. There is also an RI Marketplace where you can sell your RIs if you no longer need them. Think of it as the Craigslist of AWS RIs, where you can buy and sell RIs based on your situation.

The post AWS Reserved Instance Considerations appeared first on The IT Hollow.

Setup MFA for AWS Root Accounts
Mon, 12 Feb 2018
https://theithollow.com/2018/02/12/setup-mfa-aws-root-accounts/
Multi-Factor Authentication, or MFA, is a common security precaution used to prevent someone from gaining access to an account even if an attacker has your username and password. With MFA you must also have a device that generates a time-based one-time password (TOTP) in addition to the standard username/password combination. The extra time it might take to log in is well worth the protection that MFA provides. Having your AWS account hijacked could be a real headache.
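To show what a TOTP app is doing under the hood, here is a minimal sketch of the standard TOTP algorithm (RFC 6238, built on HOTP from RFC 4226) in Python, using only the standard library. Real authenticator apps implement this same scheme with a base32-encoded shared secret:

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30) -> str:
    """TOTP (RFC 6238): HOTP keyed by the current 30-second time window."""
    if timestamp is None:
        timestamp = int(time.time())
    return hotp(secret, timestamp // step)

# RFC 6238 test secret; at t=59s the 6-digit SHA-1 code is 287082.
print(totp(b"12345678901234567890", timestamp=59))  # 287082
```

Because the code depends only on the shared secret and the current time window, AWS and your phone can compute the same six digits independently, with no network connection between them.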

Setting up MFA

Setting it up is pretty easy. First, log in to your AWS account. I'm securing my root user account, so I'll go to the IAM console and look at the dashboard. As you can see, I have a very unsightly orange/yellow exclamation mark in my security status dashboard. If you're like me, we can't have any of those hanging around. Click the dropdown and then select the "Manage MFA" button.

A dialogue window will pop up asking what kind of device to use. I’m using my smartphone which isn’t quite as secure as a hardware MFA device, but still pretty good. Click the “Next Step” button.

The next screen is just letting you know that you'll need to install an AWS MFA-compatible application on your phone. I'm using Okta Verify, which works fine, but there are others you can download, such as Google Authenticator or Authy. For more information on this, see: https://aws.amazon.com/iam/details/mfa/.

Once you’ve installed an MFA app on your phone click the “Next Step” button.

On the next screen you'll be given a QR code to scan with the MFA application you just installed on your device. Scan the QR code that pops up on the screen. After this, you'll need to enter the next two codes that your MFA app provides so that it syncs with AWS. When you're done with this step, click the "Activate Virtual MFA" button.

NOTE: You should screenshot the QR code and store it in a SAFE PLACE! If you lose your phone or need to reset your MFA, you'll need to re-scan this QR code. You can't retrieve it from the AWS console later, so it's up to you to store it somewhere safe. I store mine in my 1Password vault along with my AWS credentials.

If all went well you’ll get a success message.

If we look at the IAM dashboard again, we’ll see our green checkmarks and we’re good to go.

“If the light is green, the trap is clean” -Ray Stantz

Result

When we go to log in to the AWS account the next time, we'll be prompted to enter a one-time password after entering the username and password.
The post Setup MFA for AWS Root Accounts appeared first on The IT Hollow.

Rubrik Acquires Datos IO
Tue, 06 Feb 2018
https://theithollow.com/2018/02/06/rubrik-acquires-datos-io/
There is news in the backup world today. Rubrik has acquired startup company Datos IO.

Who is Datos IO?

Datos IO was founded in 2014 and focuses on copy data management for distributed, scale-out databases purpose-built for the cloud. What makes Datos IO different from the common backup solutions we're accustomed to seeing (Commvault, Data Domain, etc.) is that they built a solution from the ground up to tackle the problems of geo-dispersed, scale-out databases, which are becoming commonplace in the cloud world. Think about databases that span multiple continents, and even multiple clouds.

According to Datos IO’s own website:

Datos IO provides the industry’s first cloud-scale, application-centric, data management platform enabling organizations to protect, mobilize, and monetize all their application data across private cloud, hybrid cloud, and public cloud environments.

Datos IO has several products, including RecoverX, which can take datacenter-aware backups of distributed MongoDB databases that are sharded across continents. RecoverX can back up a sharded database from within a single geographic location, which can be a real struggle, especially under the General Data Protection Regulation (GDPR).

In late November 2017, Datos IO was added to CRN's 2017 list of emerging vendors in the storage category for its RecoverX product. Datos IO also received bronze in the Enterprise Software category of the Best in Biz Awards.

What Does Rubrik Want With Another Backup Company?

Rubrik has had a bit of a meteoric rise since it was (also) founded in 2014. While Datos IO was still in its seed funding round according to Crunchbase, Rubrik had completed several rounds of funding. Rubrik's cash-rich bank account might have been burning a hole in its pocket, but my guess is that CEO Bipul Sinha has a very strategic vision for how this company can help the Rubrik brand in the cloud marketplace.

The backup solution that Datos IO provides seems like a natural fit with the existing Rubrik products. Both companies were founded in 2014 and set out to revolutionize the way traditional backups are done. Both use native cloud storage for archival or backup. Both have cloud-based backup solutions that require no hardware.

The acquisition of Datos IO should extend Rubrik’s platform to include protecting mission-critical cloud applications and NoSQL databases. Rubrik is likely trying to bolster their existing product with proven technology for backing up geo-distributed databases, giving them a good one-two punch in the cloud backup market. Time will tell how this acquisition will pan out for the two companies.

The post Rubrik Acquires Datos IO appeared first on The IT Hollow.

Add a New AWS Account to an Existing Organization from the CLI
Mon, 05 Feb 2018
https://theithollow.com/2018/02/05/add-new-aws-account-existing-organization-cli/
AWS Organizations is a way to organize your accounts into a hierarchy, not only so bills roll up to a single paying account, but also so you can add new accounts programmatically.

For the purposes of this discussion, take a look at my AWS lab account structure.

 

From the AWS Organizations Console we can see the account structure as well.

 

I need to create a new account in a new OU under my master billing account. This can be accomplished through the console, but it can also be done through the AWS CLI, which is what I’ll do here. NOTE: This can be done through the API as well which can be really useful for automating the building of new accounts.

Permissions Prep

Before we start issuing commands, there are some prerequisites to meet. To begin, we'll need a login that has the organizations:CreateAccount permission. Since I'll be doing additional work, such as moving accounts around and creating OUs, I've created an AWS policy for OrganizationalAdmins and given my user full permissions on the Organizations service.

I also want to mention that our CLI connection must be made to the master (root) account of the AWS Organization.

Create A New Account

Now that we’ve got our permissions taken care of, open up a terminal and connect to the Master Billing Account as the user who has permissions to create accounts and modify organizations.

From here we’re going to run our AWS CLI command to create a new account.

aws organizations create-account --email user@domain.com --account-name [name]

Here is a screenshot of what happened when I created my account.

This command starts the account creation process, and as you can see, some return data comes back showing the status "IN_PROGRESS".

If we want to check the status of the account creation we can run the following command and insert the requestID which was returned in the create-account command.

aws organizations describe-create-account-status --create-account-request-id [requestid]

Here is my screenshot of the describe command. We can see that when I checked it again, the status was “SUCCEEDED”
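Both commands return JSON with a CreateAccountStatus object. If you're scripting this, a small Python helper can pull out the state and the new account ID for polling; the sample payload below is illustrative, with placeholder IDs, not real account data:

```python
# Sketch: parse the CreateAccountStatus JSON returned by
# `aws organizations create-account` / `describe-create-account-status`.
# The sample payload is illustrative; IDs are placeholders.
import json

def creation_state(payload: str):
    """Return (state, account_id) from a CreateAccountStatus response.

    account_id stays None until the request reaches SUCCEEDED.
    """
    status = json.loads(payload)["CreateAccountStatus"]
    return status["State"], status.get("AccountId")

sample = '''{
  "CreateAccountStatus": {
    "Id": "car-examplerequestid111111111111",
    "AccountName": "lab-account",
    "State": "SUCCEEDED",
    "AccountId": "111111111111"
  }
}'''

print(creation_state(sample))  # ('SUCCEEDED', '111111111111')
```

You would loop on describe-create-account-status until the state flips from IN_PROGRESS to SUCCEEDED (or FAILED).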

 

At this point the account has been created. You should get an email at the address you specified with further instructions, and the account will be built with a role named "OrganizationAccountAccessRole", which allows you to role-switch into the new account from the master account.
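Switching into the new account means assuming that role by its ARN. A quick helper shows how the ARN is put together (the account ID below is a placeholder):

```python
# Sketch: build the ARN of the default cross-account role that
# Organizations creates in a new member account.

def org_access_role_arn(account_id: str,
                        role_name: str = "OrganizationAccountAccessRole") -> str:
    """ARN of the role you assume to switch into the new account."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"

# Example with a placeholder account ID.
print(org_access_role_arn("111111111111"))
# arn:aws:iam::111111111111:role/OrganizationAccountAccessRole
```

That ARN is what you'd plug into the console's "Switch Role" screen or an STS assume-role call.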

Create a New OU

Now that the account has been created we need to create a new OU. To do this from the AWS CLI we’ll use the “create-organizational-unit” command, but first we need to find out the ID of the root.

To find the root ID run the following:

aws organizations list-roots

Once you run that command, it should return a list of roots. In my case there is only one root, and we're looking for an ID that begins with "r-" followed by at least four characters.
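If you're automating this, picking the root ID out of the list-roots JSON is a one-liner worth wrapping; the sample payload below is illustrative, with placeholder IDs:

```python
# Sketch: pull the root ID out of `aws organizations list-roots` output.
# The sample payload is illustrative; IDs are placeholders.
import json
import re

def find_root_id(list_roots_json: str) -> str:
    """Return the first root ID, which looks like 'r-' plus 4+ characters."""
    for root in json.loads(list_roots_json)["Roots"]:
        if re.fullmatch(r"r-[a-z0-9]{4,}", root["Id"]):
            return root["Id"]
    raise ValueError("no root ID found")

sample = '{"Roots": [{"Id": "r-ab12", "Name": "Root"}]}'
print(find_root_id(sample))  # r-ab12
```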

We will now create the new OU by providing it a new name and also passing in the root as the parent.

aws organizations create-organizational-unit --parent-id [parentID] --name [Name]

Here is a screenshot of my commands for both listing the roots and adding a new OU under the root.

 

Move Account into new OU

Now that the account and OU are created, we can move the account into the appropriate OU. To do this we'll use the "move-account" command. You'll need to pass in the account ID, the parent ID (the root we found earlier, beginning with "r-"), and the OU ID, which was returned by the create-organizational-unit command and begins with "ou-".

aws organizations move-account --account-id [accountID] --source-parent-id [rootID] --destination-parent-id [OU ID]

Here is a screenshot of my commands in the CLI.

 

Results

Now if we look back in our AWS Console we’ll see our new account created and listed under the appropriate OU just as we were hoping for.

 

 

The post Add a New AWS Account to an Existing Organization from the CLI appeared first on The IT Hollow.

Using Change Sets with Nested CloudFormation Stacks
Mon, 29 Jan 2018
https://theithollow.com/2018/01/29/using-change-sets-nested-cloudformation-stacks/
In a previous post, we looked at how to use change sets with CloudFormation. This post covers how to use change sets with a nested CloudFormation Stack.

If you’re not familiar with nested CloudFormation stacks, they are just what they sound like: a root, or top-level, stack calls subordinate (child) stacks as part of the deployment. These child stacks could be deployed standalone, or they can be tied together using the AWS::CloudFormation::Stack resource type. Nested stacks can be used to deploy entire environments from the individual stacks below them; in fact, a root stack may not deploy any resources at all other than what comes from its nested stacks. A common stacking pattern might be a top-level stack that deploys a VPC while a nested stack is responsible for deploying the subnets within it. You could keep chaining this together to deploy EC2 instances, S3 buckets, or whatever you’d like.
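As a rough sketch of that pattern, a root template that nests a child subnet template looks something like this. The S3 URL, bucket name, and parameter names are placeholders for illustration, not a template from the post:

```yaml
# Sketch of a root stack that nests a child stack via
# AWS::CloudFormation::Stack. TemplateURL and parameter names
# are placeholders.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  SubnetStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/example-bucket/subnets.yaml
      Parameters:
        VpcId: !Ref VPC   # pass the parent's VPC into the child stack
```

Updating the root stack cascades into SubnetStack, which is exactly why change sets on nested stacks need the care described below.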

Deploying a Change Set for a Nested Stack

As you can see from the first screenshot below, I have a nested stack that deployed a subnets stack, and the root stack is my VPC-Deploy stack. For this example, let's assume I need to modify my subnets; pretend for a second that I goofed up the IP addressing. Based on our previous post, you might just apply a change set to the subnets stack, but since it is part of a nested stack we want to make sure not to break that chain. Remember, if we later need to update the VPC stack, we'll want to make sure we don't break the nested stack as well.

Here’s the problem: if we attempt to deploy a change set on our VPC-Deploy stack, one of the changes listed will be to the subnets stack, but you won’t see what those changes are. To verify that only the changes you want are staged, let’s perform the first steps of creating a change set on our subnets stack, but we WILL NOT execute it.

So as we’ve done previously, create a new change set for our nested stack.

NOTE: When you attempt to deploy a change set on a nested stack a warning message will pop up reminding you that this could cause your stack to be out of sync since it’s a nested stack. Continue through this process, but remember not to deploy the change.

Select your template.

Add your details.

Add your tags and set the permissions appropriately.

Review and click “Create Change Set” button.

When we review the changes we can see that I’ll be replacing two subnets and two route tables.

At this point we know that the changes that we had wanted to make are reflected correctly in CloudFormation. If we saw changes in the previous screen that we didn’t want to make, we would know it at this point.

Really Deploy a Change Set at the Root Stack

Now that we’ve tested that, let’s actually create the change set on our root stack. Select it from the list and go through the same process with our root template. Remember that the child template is probably the only thing that changed, so the root template should stay the same.

Once we go through all of the screens, notice what the stack update looks like. Here we see a single change, not on the subnets and the routing tables themselves, but rather the subnets CloudFormation stack. This is the reason we pushed the change set to the child stack first, so we could see what those changes would be. The root stack update makes this visibility difficult.

Deploy the change set on the root stack and the nested stacks should be updated accordingly.


Summary

Using change sets on a nested CloudFormation stack isn't much different from using them on a standalone stack, but to get the same visibility, creating them on the nested stack without executing them is an easy way to preview the changes. Just be careful not to execute them there, so the root and child stacks don't get out of sync.

The post Using Change Sets with Nested CloudFormation Stacks appeared first on The IT Hollow.

An Introduction to AWS CloudFormation Change Sets
Mon, 22 Jan 2018
https://theithollow.com/2018/01/22/introduction-aws-cloudformation-change-sets/
If you’ve done any work in Amazon Web Services you probably know the importance of CloudFormation (CFn) as part of your Infrastructure as Code (IaC) strategy. CloudFormation provides a JSON or YAML formatted document which describes the AWS infrastructure that you want to deploy. If you need to re-deploy the same infrastructure across production and development environments, this is pretty easy since the configuration is in a template stored in your source control.

Now that we are deploying our infrastructure from CFn templates, we have to consider what we do when a small part of that infrastructure needs a change. Perhaps we can redeploy the entire environment, but this might not be feasible in all cases. Also, if we’re making a small change, it might take a while to redeploy everything when we really only need to tweak the settings a little.

Change Sets

Thankfully, AWS has “Change Sets” which allows us to modify an existing CloudFormation Stack with a new template. If you’re not familiar with a stack, think of this as the deployed object that comes from a CloudFormation Template. For example, if you had a template that deployed four EC2 instances, when you deploy the template, it will create a stack that represents the four servers as part of a deployment. If you delete the stack, you remove all the resources that it described.

Change Sets are created by building a new CloudFormation Template (or modifying the original) and creating a change on the original stack. You can then view the changes that will be deployed before you decide to execute the change.
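The same flow can be driven from code with boto3. Here is a hedged sketch (the stack name, change set name, and template body are placeholders) that stages a change set without executing it; the request-building helper is pure, so it can be inspected before any API call is made:

```python
# Sketch: stage a CloudFormation change set with boto3.
# Names below are placeholders. build_change_set_request is pure;
# the API call is kept in a separate function so nothing touches
# AWS unless you call it.

def build_change_set_request(stack_name: str, change_set_name: str,
                             template_body: str) -> dict:
    """Assemble the parameters for cloudformation.create_change_set."""
    return {
        "StackName": stack_name,
        "ChangeSetName": change_set_name,
        "TemplateBody": template_body,
        "ChangeSetType": "UPDATE",  # we're modifying an existing stack
    }

def stage_change_set(request: dict):
    """Create (but do not execute) the change set against AWS."""
    import boto3
    cfn = boto3.client("cloudformation")
    return cfn.create_change_set(**request)

req = build_change_set_request("lambda-stack", "fix-iam-policy",
                               template_body="{}")
print(req["ChangeSetType"])  # UPDATE
```

After reviewing the staged changes (describe_change_set), a separate execute_change_set call performs the deployment, mirroring the console's Execute button.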

Creating a Change Set

Let’s take a look at the process from the console. First we need to have a CloudFormation stack that we want to modify. In the example below, I’m going to modify a CFn stack that deployed a Lambda function and an IAM policy document. Assume that I forgot to add a permission to the policy and I want to fix that without re-deploying my whole CFn template.

 

Select the CloudFormation Stack that you want to modify.

then click the Actions drop down. Select “Create Change Set for Current Stack” from the list.

 

From this point forward, it should look a lot like a normal CloudFormation deployment if you do it from the console. The wizard that opens will ask for the template to use as the change set. Select your new CloudFormation Template. NOTE: You can use the same template that was used to deploy the resources if you want, which should re-deploy the exact same settings unless you’ve modified the template. In this case I’ve selected a newly updated template with my new IAM policy permissions.

 

On the next screen, enter a change set name (instead of a stack name, as in a standard CFn deployment) and a description. Also, if your CFn template has input parameters, enter those here as well.

The next screen will ask for additional options such as adding any tags and specifying a role with permissions to deploy the CFn template resources.

 

On the last screen, you have an opportunity to review your settings before clicking the “Create Change Set” button.

Review the Change Set

At this point, nothing in your environment has changed yet. You created a change set, but that doesn’t deploy your code, it just stages it for the upcoming deployment. If we look at our CloudFormation Stacks again, we can select the stack we created the change set for and click the “Change Sets” tab. We’ll notice that our Change Set is listed under this tab. Click the name of the change set to open up the change set window.

If we look at the change set, we can see under the changes section what will actually be modified. In my case the Change Set will modify my Execution Role and my Lambda Function. Also, under the “Replacement” field, you’ll see False, meaning that the object doesn’t need to be replaced, it can just be edited in place by CloudFormation. Pretty neat huh? Now we can stage any changes we need for the environment ahead of time and assess the impact right from this screen. Pretty handy for System Administrators who want to get as much work done as possible before a change window starts. This is also great for figuring out what components might need a change request opened in your change management system.

Execute the Change

Now, from the change set screen, press the “Execute” button to push the code changes. If you watch your CloudFormation Stacks, you’ll notice your stack start to update.

In a moment, you’ll see the stack has been updated successfully and if we look in the change sets tab again we’ll notice that a change set has been applied to our stack. 

Summary

Change is going to happen so any Infrastructure as Code initiative needs to have a plan to handle it when those changes arise. Can you re-deploy? Should you update it manually? There are reasons that you wouldn’t want to do either of those things. Change Sets allow you to still manage your environment through CloudFormation, but make changes if they need to occur.

The post An Introduction to AWS CloudFormation Change Sets appeared first on The IT Hollow.

In the Cloud World, It’s Cheaper to Upgrade
Tue, 16 Jan 2018
https://theithollow.com/2018/01/16/cloud-world-cheaper-upgrade/
If you’ve been in technology for a while, you’ve probably gone through a hardware refresh cycle at some point. These cycles usually meant taking existing hardware, doing some capacity planning exercises, and setting out to buy new, vendor-supported hardware. The process was usually lengthy and made CIOs break into a cold sweat just thinking about paying for more hardware that’s probably only meant to keep the lights on. When I first learned of a hardware refresh cycle, my first thought was, “Boy, this sounds expensive!”

 

I’m sure that hardware vendors loved to hear about hardware refresh cycles. Sales teams knew they would probably make a new sale, and better yet, the customer probably called them up to do it. Very little work on their side. But things are much different now, with so many customers moving to the cloud, where hardware is no longer the customer’s concern.

Cloud Refresh Cycles

So hardware refresh cycles are dead now, right? I mean, you’re in the cloud now and no longer need to worry about this kind of nonsense. Well, sorry to burst your bubble, but they aren’t dead just because you’re in the cloud. Capacity planning becomes much easier and budgeting becomes much simpler, but you still need to review your infrastructure and make sure you’re up to date, now for an entirely different reason: saving money!

 

Oh! I have your attention now huh? I thought so. In the cloud world, upgrades can save you money. Take a look at a few examples here just from the AWS platform.

I used the simple monthly calculator provided by AWS and looked up the price of a Linux instance in us-east-1 for the m3.large, m4.large, and m5.large. These are similarly sized instances across three different hardware generations, M3 being the oldest of the three and M5 the newest. What you’ll find is that the size of the instances is virtually the same, but the underlying hardware gets faster, and the price gets cheaper, with each newer generation.

 

Instance Family and Generation | Instance Size | CPU and Memory | CPU Speed | Monthly Price
M3 | Large | 2 vCPU / 7.5 GB memory | High-frequency Intel Xeon E5-2670 v2 (Ivy Bridge) processors | $97.36
M4 | Large | 2 vCPU / 8 GB memory | 2.3 GHz Intel Xeon E5-2686 v4 (Broadwell) or 2.4 GHz Intel Xeon E5-2676 v3 (Haswell) processors | $73.20
M5 | Large | 2 vCPU / 8 GB memory | 2.5 GHz Intel Xeon Platinum 8175 processors with the Intel Advanced Vector Extensions (AVX-512) instruction set | $70.28

So let me reiterate that point: if you upgrade from older, slower hardware to newer, faster hardware, you’ll not only gain performance, you’ll also save money while you do it. Pretty neat, huh?
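
To put a number on it, here’s the quick arithmetic using the monthly prices from the table above:

```python
# Monthly prices (USD) for Linux large instances in us-east-1, taken from
# the Simple Monthly Calculator figures in the table above.
m3_large = 97.36
m5_large = 70.28

savings = m3_large - m5_large
print(f"${savings:.2f}/month, or {savings / m3_large:.1%} cheaper")
# → $27.08/month, or 27.8% cheaper
```

That’s per instance; across a fleet of always-on instances, it adds up quickly.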

Let’s look at one more example. Here we show the price of S3 Standard storage vs. S3 Reduced Redundancy Storage. This isn’t a generation thing, but Amazon isn’t putting a lot of effort into the Reduced Redundancy Storage option and would prefer to spend time on the Standard S3 storage that is used for most workloads. Notice that, as time has gone on, Reduced Redundancy Storage has actually become more expensive than Standard storage, even though it provides seven nines less durability.

S3 Storage Type | Durability | Size (GB) | Puts | Gets | Price
Standard | 99.999999999% | 1,000 | 1,000 | 10,000 | $22.88
Reduced Redundancy | 99.99% | 1,000 | 1,000 | 10,000 | $24.00

Action Plan

Now that you know this, it’s a good idea to have an action plan and pay attention to new releases that come out in the cloud. Sure, you can keep running your workloads without touching them until the cloud provider discontinues your generation of hardware, but you could be taking advantage of the performance enhancements of newer hardware while saving money along the way. Set a schedule to review your infrastructure and determine if and when you need to upgrade your own cloud infrastructure. Maybe this is a good time to leverage a partner to help educate you on the changes that are constantly happening in the cloud.
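
That periodic review can even be partially automated. The sketch below flags instance types that have a newer generation available; the generation map is my own assumption (roughly right as of early 2018, and it will drift), and in practice you’d feed the list from the EC2 DescribeInstances API rather than hard-coding it:

```python
# Latest generation per instance family. This map is an illustrative
# assumption -- check the current EC2 instance types and keep it updated.
CURRENT_GENERATION = {"m": 5, "c": 5, "r": 4, "t": 2, "i": 3}

def parse_instance_type(instance_type):
    """Split an EC2 type like 'm3.large' into (family, generation, size)."""
    family_gen, size = instance_type.split(".")
    family = family_gen.rstrip("0123456789")
    generation = int(family_gen[len(family):])
    return family, generation, size

def is_previous_generation(instance_type):
    """True when a newer generation of the same family is available."""
    family, generation, _ = parse_instance_type(instance_type)
    latest = CURRENT_GENERATION.get(family)
    return latest is not None and generation < latest

# In a real review you would pull these from describe_instances() via boto3.
fleet = ["m3.large", "m4.large", "m5.large", "c5.xlarge"]
upgrade_candidates = [t for t in fleet if is_previous_generation(t)]
print(upgrade_candidates)  # → ['m3.large', 'm4.large']
```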

How much money are you wasting?

The post In the Cloud World, It’s Cheaper to Upgrade appeared first on The IT Hollow.

Commit to Infrastructure As Code
https://theithollow.com/2018/01/08/commit-infrastructure-code/
Mon, 08 Jan 2018 15:10:30 +0000

Over recent years, Infrastructure as Code (IaC) has become sort of a utopian goal of many organizations looking to modernize their infrastructure. The benefits to IaC have been covered many times so I won’t go into too much detail, but the highlights include:

  • Reproducibility of an environment
  • Reduction in deployment time
  • Linking infrastructure deployments with application deployments
  • Source control for infrastructure items
  • Reduction of misconfiguration

The reasoning behind storing all of your infrastructure as code is valid and a worthy goal. The agility, stability, and deployment speeds achieved through IaC can prove to have substantial benefits to the business as a whole.

IaC is a Commitment to the Process

Now for the bad news. If you’re going to set out on a path toward infrastructure as code, you can’t waver on when you’ll use it. You must commit to using IaC for your infrastructure tasks. Let me give you an example so you understand why it’s important to always use IaC once the environment has been created this way.

Assume you’ve built out a great Amazon Web Services environment through the use of CloudFormation (CFn) templates. These templates are either JSON or YAML documents that describe a desired state for your deployment. Your environment includes VPCs, subnets, encryption keys, logging standards, monitoring configurations, and several workloads. All of these items have been deployed through a CFn template and exist in your environment today. Now suppose that the application owners have identified a problem and need to make a change. Perhaps they found out that they need access to a server located in an ancillary data center someplace to make their app work properly. This is an emergency request because [reasons] (there’s always a list of reasons why these things are emergencies, even when they aren’t). So you deploy a VPN tunnel through the AWS console and get the application working again.
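
For illustration, that emergency VPN change could have been expressed in the template instead of clicked through the console. Here’s a minimal sketch of the relevant CloudFormation resources, built as a Python dict; the gateway IP and ASN are placeholders of my own, not values from the post:

```python
import json

# Hypothetical VPN resources for the emergency change, kept in code so a
# re-deploy of the stack recreates them. Values marked "placeholder" are
# illustrative assumptions.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "CustomerGateway": {
            "Type": "AWS::EC2::CustomerGateway",
            "Properties": {
                "Type": "ipsec.1",
                "BgpAsn": 65000,              # placeholder ASN
                "IpAddress": "203.0.113.10",  # placeholder on-prem endpoint
            },
        },
        "VpnGateway": {
            "Type": "AWS::EC2::VPNGateway",
            "Properties": {"Type": "ipsec.1"},
        },
        "VpnConnection": {
            "Type": "AWS::EC2::VPNConnection",
            "Properties": {
                "Type": "ipsec.1",
                "CustomerGatewayId": {"Ref": "CustomerGateway"},
                "VpnGatewayId": {"Ref": "VpnGateway"},
                "StaticRoutesOnly": True,
            },
        },
    },
}

print(json.dumps(template, indent=2))
# With credentials configured, deploying would be something like:
#   boto3.client("cloudformation").update_stack(
#       StackName="network", TemplateBody=json.dumps(template))
```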

Everything in the environment is running as expected now, but there is a problem. We didn’t commit to IaC, so our environment is no longer completely defined by code. You might think, big deal, but what if we need to re-deploy the environment? Assume that later we need to re-deploy the environment for a lab or a development environment. The new environment won’t have the VPN tunnel created earlier, because it was done manually. This is a manual configuration change that will need to be made again if it wasn’t added to your code. What’s worse, if you apply a change set later on to modify your configuration, you might find that it deletes your VPN tunnel because it wasn’t defined in the code. Note: this is unlikely to affect the VPN tunnel in this particular situation, but in other cases it’s very likely to undo your manual configurations.

Observations

Over the past year I’ve seen companies set out on new cloud initiatives with a desire to do the right things, like employing infrastructure as code for their deployments. But if this is the goal, team members must embrace the process of putting any configuration changes into the code base. This is a tough pill to swallow for some organizations because, just like in the situation described above, it may take longer to make a small configuration change to fix or improve a system. The benefit of having the changes committed to code is that the system can be re-deployed from scratch very easily, and version-controlled if necessary.

Suggestions

First, decide whether infrastructure as code is a worthy objective for you. Maybe your organization just isn’t ready to tackle this. Maybe you have a small infrastructure footprint, don’t deploy workloads very often, or just don’t have the coding skill sets yet. Maybe IaC just isn’t a good fit for you at this time.

Second, if you decide to go forward, commit to doing it for all aspects of that environment. Switching wholeheartedly to IaC for everything is a pretty tough thing to do all at once, but you might have a cloud project coming up where you decide everything in that small environment will be IaC. That way you can slowly but wholeheartedly commit to the concept for a smaller subset of the environment. Once you’re comfortable with that environment, the coding experience you’ve gained will help with future environments and should ease the way for them.

Good luck with your coding! Thanks for reading.

New Opportunities in 2018
https://theithollow.com/2018/01/01/new-opportunities-2018/
Mon, 01 Jan 2018 17:25:01 +0000

It’s the beginning of a whole new year. Hopefully you’ve gotten some time off recently to recharge your batteries a bit, before heading back to the grind. While you’re getting back into the ol’ routine, maybe this is a good time to consider whether or not that routine is still worthwhile?

Are you Happy With Your Job?

It’s easy to get into a funk where you roll out of bed each day to do the same task or face the same challenges over and over again. Maybe there are things in your day to day grind that you hate, but do them anyway, because it’s part of your job. No big deal, everyone has these sorts of chores. I’m sure that nobody loves every single part of their job. But if you’ve gotten a break from work and you can’t bear to think about going back to that routine, maybe that should tell you something about your job. Are you really happy doing what you’re doing, or are you doing it because it’s a steady paycheck? Are you doing it because it’s what you know, and change is hard? Are you doing it because you feel like you have no other choice?

There are a ton of reasons to stay with a company or stay in a position, but really at the most basic level, ask yourself this: “Am I happy doing what I’m doing?” If the answer is “Yes”, then take note of this answer and remember this moment, for when you have those awful days at work when you’re ready to flip over a table and walk out the door.

You’re happy with your job, you just had a bad day, now let’s move on.

Now if your answer was “No”, then start asking yourself why you’re not happy with your job and what could make your work life more enjoyable. Are you away from home too much? Are you too busy to do your job well? Are you not happy with management? Are you grumpy because of a position you’re in and not the company itself? These questions might help you figure out what actions to take to make 2018 a better year for you in your job role. Maybe you can have a discussion about a career path with your manager? Maybe you can talk to your boss about a role change so you can be home more? Maybe that even means a pay decrease, but you’d be in a happier place overall?

What Else Aren’t You Happy With?

Do you feel like others are passing you by? Have you been slacking too much and not working on the certifications needed to move you into a happier place? What things did you neglect to do last year that you can resolve to do this year? When you’ve got that list, put a quick task list or a plan together to figure out how you want to achieve them. And don’t worry if you don’t achieve all of them, either. Give yourself a stretch goal that might be virtually unobtainable this year. You know it’s a long shot, so if you don’t hit the goal it shouldn’t crush your spirit, but you should make a plan to get there regardless. The journey toward reaching that goal will likely give you plenty of value in other ways.

For example, say you set a stretch goal of getting a VCDX this year. Maybe you won’t get it, but along the way, you figure out how to be confident in front of a room full of people, or you figure out how to build better designs because of what you’ve learned. There are tons of benefits in trying things, even if you haven’t completed them.

Remember, It’s OK to Fail

We’re not successful all of the time. It’s important to realize that just because you didn’t reach your goal doesn’t mean something awful is about to happen to you, or that you are a failure. Some of the most important things I’ve learned came from failing to achieve a goal. Usually this comes with a certain amount of aggravation and reflection, but it ultimately ends up being a good experience. Sometimes there are things outside of our control that get in the way of our goals: maybe a sick family member, or maybe you realized that time spent training for a certification is time away from family, and that is the thing that makes you most happy. Just remember that it’s OK to fail, and it’s OK to admit that you’ve failed. Don’t take it from me:

Pass on what you have learned. Strength, mastery. But weakness, folly, failure also. Yes, failure most of all. The greatest teacher, failure is. Luke, we are what they grow beyond. That is the true burden of all masters. – Yoda

Summary

I know what you’re thinking: “I can make life decisions any time of the year, and a new year has no impact on my ability to do that.” Well, if that’s what you’re thinking, then you’re right. But for many of us, the end of the year comes with some time away from our daily routines, out of earshot of the corporate world for even a little bit, where you can see the forest for the trees. You can come up for air for a second, take a look around, and maybe see what things are really important to you.

Take a second to figure out what makes you happy, and what doesn’t. When you figure that out, then you can make your plan on what changes you want to make this year. Maybe they are just small tweaks, or maybe you’re looking for a new career. Either way, it’s nice to be able to identify a path forward even if there is a chance of failure along the way. Good luck on your 2018 adventure.

Use Amazon CloudWatch Logs Metric Filters to Send Alerts
https://theithollow.com/2017/12/11/use-amazon-cloudwatch-logs-metric-filters-send-alerts/
Mon, 11 Dec 2017 16:14:47 +0000

With all of the services that Amazon has to offer, it can sometimes be difficult to manage your cloud environment. Face it, you need to manage multiple regions, users, storage buckets, accounts, instances, and the list just keeps going. The fact that the environment can be so vast might make it difficult to notice if something nefarious is going on in your cloud. Think of it this way: if a new EC2 instance was deployed in one of your most-used regions, you might see it and wonder what it was. But if that instance (or 50 instances) was deployed in a region that you never log in to, would you notice?

To mitigate issues like this, we use the AWS CloudTrail service, which can log any console or API request and store those logs in S3. It can also push these logs to Amazon CloudWatch Logs, which lets us filter those logs for specific events.

This post assumes that you’ve already set up CloudTrail to push new log entries to CloudWatch Logs. Once that’s in place, we’ll go through an example that alerts us whenever a new IAM user account is created by someone other than our administrator.

Create a Metric Filter on the CloudTrail Logs

Log in to the AWS console and navigate to the CloudWatch service. Once you’re in the CloudWatch console, go to Logs in the menu and highlight the CloudTrail log group. After that, click the “Create Metric Filter” button.

In the “Filter Pattern” box we’ll enter the pattern we’re looking for. In my case, I want to match any event where a new user account is created and the user who did it is not “ithollow”. To do that we need to use the filter and pattern syntax found below.

{($.eventName = "CreateUser") && ($.userIdentity.userName != "ithollow")}

You can test the results of your filter pattern against some of your existing logs to see what is returned. In my case I got no results, because I don’t have any events like that in my logs yet. When you’re ready, click the “Assign Metric” button.
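
If it helps to reason about what this pattern will catch, here’s a local Python equivalent of the filter, run against a hand-written sample CloudTrail record (the field values are made up for illustration):

```python
import json

def matches_filter(event):
    """Local equivalent of the metric filter pattern:
    {($.eventName = "CreateUser") && ($.userIdentity.userName != "ithollow")}"""
    return (event.get("eventName") == "CreateUser"
            and event.get("userIdentity", {}).get("userName") != "ithollow")

# Sample CloudTrail record (illustrative values only).
sample = json.loads("""{
  "eventName": "CreateUser",
  "userIdentity": {"type": "IAMUser", "userName": "rogue-admin"},
  "requestParameters": {"userName": "backdoor-account"}
}""")

print(matches_filter(sample))  # → True
```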


Now you can leave the filter name as is, or use your own custom naming. Under Metric Details, a namespace is assigned so that filters from multiple log groups can be kept separate, and you can give the metric a name there as well. I’ve left the rest of the values as defaults. Click the “Create Filter” button.

You should be taken back to the CloudWatch Console and see that a new filter has been created.

Create an Alarm

Now that we’ve created a way to filter our logs, let’s add an alarm to notify us when these events occur. On the logs screen from above, click the “Create Alarm” link next to your filter. Give the alarm a name and description for easy identification later, then set the threshold values. I’ve said: any time this event happens one or more times within a single period, trigger the alarm. I also changed the setting to treat missing data as good; otherwise the alarm would sit in an “insufficient data” state until one of these strange accounts shows up. So, no news is good news in my scenario.

Lastly, under the actions section, I’ve selected my “NotifyMe” SNS topic so that it will email me when this happens.
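
The same wiring can also be scripted rather than clicked through the console. Below is a sketch of the API parameters involved; the log group name, metric namespace, and SNS topic ARN are assumptions of mine, and the boto3 calls are left in comments since they require live credentials:

```python
import json

# Parameters for CloudWatch Logs put_metric_filter. With credentials configured:
#   boto3.client("logs").put_metric_filter(**metric_filter)
metric_filter = {
    "logGroupName": "CloudTrail/DefaultLogGroup",  # assumed log group name
    "filterName": "StrangeUserAccounts",
    "filterPattern": '{($.eventName = "CreateUser") && '
                     '($.userIdentity.userName != "ithollow")}',
    "metricTransformations": [{
        "metricName": "StrangeUserAccountCount",
        "metricNamespace": "CloudTrailMetrics",    # assumed namespace
        "metricValue": "1",
    }],
}

# Parameters for CloudWatch put_metric_alarm. With credentials configured:
#   boto3.client("cloudwatch").put_metric_alarm(**alarm)
alarm = {
    "AlarmName": "StrangeUserAccounts",
    "MetricName": "StrangeUserAccountCount",
    "Namespace": "CloudTrailMetrics",
    "Statistic": "Sum",
    "Period": 300,
    "EvaluationPeriods": 1,
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "TreatMissingData": "notBreaching",            # "no news is good news"
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:NotifyMe"],  # assumed ARN
}

print(json.dumps(alarm, indent=2))
```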

Testing

Now that our alarms are created and metric filters configured, let’s test it. I logged into the AWS account with a user that had admin permissions but wasn’t me, and created a new user. Shortly after creating the user, the “StrangeUserAccounts” alarm went off in the CloudWatch console.


My SNS notification came through email and you can see that email in the screenshot below with the details.


Summary

This was a pretty basic example, but using CloudWatch Logs with metric filters and alarms can really help you keep a close eye on your environment. Think of all the ways you can use CloudWatch Logs to send alerts about the things in your environment that you care about.
