Using Hashicorp Consul to Store Terraform State

Hashicorp's Terraform product is very popular for describing your infrastructure as code. One thing you need to consider when using Terraform is where you'll store your state files and how they'll be locked so that two team members or build servers aren't stepping on each other. State can be stored in Terraform Enterprise (TFE) or with some cloud services such as S3. But if you want to store your state within your data center, perhaps you should check out Hashicorp's Consul product.

Consul might not have been specifically designed to house Terraform state files, but its built-in capabilities lend themselves well to doing just that. Consul can be used as a service discovery product or a key/value store. It can also perform health checks on services, so the combination of these capabilities can be a great benefit to teams trying to build microservices architectures.

Setup a Consul Cluster

We don't want to risk storing all of our Terraform state files on a single server, so we'll deploy three Consul servers in a cluster. To do this, I've deployed three CentOS servers and opened the appropriate firewall ports. (Yeah, I turned off the host firewall. It's a lab.) Once the basic OS deployments are done, we'll need to download the latest version of Consul from Hashicorp's website. I've copied the application to the /opt directory on each of the three hosts and set the permissions so I could execute the application. Next we need to make sure the binary is in our PATH. On my CentOS machines I added /opt/ to my PATH. Remember to do this on each of the three servers.

export PATH=$PATH:/opt/

I've also added a second environment variable that enables the new Consul UI. This is optional, but I wanted to use the latest UI, and at this point Consul looks for this environment variable to enable it. I expect this to change in the future.

export CONSUL_UI_BETA=true

Lastly, I’ve created a new directory that will house my config data for Consul.

sudo mkdir /etc/consul.d

Now we get to the business of setting up the cluster. On the first node, we’ll run the following command:

consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul -node=agent-one -bind=10.10.50.121 -enable-script-checks=true -ui -client 0.0.0.0 -config-dir=/etc/consul.d

Let's explain some of the switches used in that last command.

 

  • -server – Tells Consul that this node will act as a server rather than a client.
  • -bootstrap-expect=3 – How many servers are expected to be part of the cluster.
  • -data-dir=/tmp/consul – Where Consul will store its data.
  • -node=agent-one – Identifies the node. This should be unique for each server in the cluster.
  • -bind=10.10.50.121 – The address to bind for internal cluster communications. This will be unique on each node in your cluster.
  • -enable-script-checks=true – We could omit this for this post, but script-based health checks could be added later, where this would be necessary.
  • -ui – Enables the UI.
  • -client 0.0.0.0 – The address to which Consul will bind client interfaces, including the HTTP and DNS servers.
  • -config-dir – Where the config files are located.

 

When you run the commands on the first node, you’ll start seeing log messages.

Repeat the commands on the other servers that should be part of the cluster, being mindful to change the options that are unique to each node, such as -bind and -node. At this point the cluster should be up and running. To check this, open another terminal session and run consul members. You should see three members listed of type server.
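
Depending on your version and network setup, the agents may not discover each other on their own; if consul members only shows the local node, you can join the others manually. A minimal sketch, run from the second and third nodes and assuming the first node's bind address from above:

consul join 10.10.50.121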

If you used the -ui switch when you started the nodes, you'll also be able to navigate to http://node.domain.local:8500/ui to see the Consul UI. Notice that you'll be directed to the Services page, where you can see node health. Again we see three nodes as healthy.

While we’re in the UI, take a second to click on the Key/Value tab. This is where Terraform will be storing its state files. Notice that at this point we don’t see anything listed, which makes sense because we haven’t created any pairs yet.


Terraform Build

This post won't go into building your Terraform configurations, but an important first step is telling Terraform to use Consul as its state store. To do this we create a backend.tf file that defines Consul as our backend. Create a file like the one below:

terraform {
  backend "consul" {
    address = "consulnode.domain.local:8500"
    path    = "tf/state"
  }
}

Be sure to update the address to point at your Consul cluster. The path is where the state for your Terraform build will be stored. For this example I've used a single node for my address, and my path is a generic tf/state path.

Once created, run the terraform init command to initialize the backend. When you're done, go ahead and run your Terraform build as usual.

Once your build completes, you’ll notice in the Consul UI that there is a Key/Value listed now.

 

 

If we drill down into that value, you’ll see our JSON data for our state file.

If we need to programmatically access this state file, we can use the API to return the data. I’ve done this with a simple curl command.
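
A minimal sketch of that call, assuming the node address and tf/state path from the backend configuration above; Consul's KV endpoint returns the raw stored value when you append ?raw:

curl http://consulnode.domain.local:8500/v1/kv/tf/state?raw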

Summary

Storing your Terraform state files in a local directory is a good starting point, but once you start building lots of infrastructure, or many teams are all working on the same infrastructure, you need a better way to manage state. Consul provides a solution to take care of this for you, along with plenty of other useful capabilities. Check it out if you're looking to extend your Terraform deployments beyond a simple build.

Visualizing the Chicago Cubs via Amazon QuickSight

If you're interested in visualizing your data in easy-to-read graphs, Amazon QuickSight may be your solution. Obviously Amazon has great capabilities with big data, but sometimes even if you have "little" data you just need a dashboard or a way of displaying that content. This post shows an example of how you can display data to tell a compelling story. For the purposes of this blog post, we'll try to determine why the Chicago Cubs are the Major League's favorite baseball team.

Creating an Amazon QuickSight Account

Amazon’s QuickSight can be accessed through your existing AWS console, but when you sign up for an account you’ll notice that it redirects you to a new portal. Login to your AWS Console and look for QuickSight.

You’ll notice that QuickSight requires you to sign up for a QuickSight account. So this is a bit different from the other services that AWS provides.

You'll pay on a monthly basis when you create an account; this isn't an on-demand type service where you pay only for what you use. There are two options when you create an account, Standard and Enterprise, and the details for those are found in the screenshot below. This blog post uses Standard because I'm an architect on a budget!

Once you pick your edition you'll set up your QuickSight account information. Give it a name and a notification email address, and select your regions. You can also allow QuickSight to look at your data across Redshift, S3, etc., so that you have datasets that can immediately start helping you.

 

Once you’ve got your account setup, you’re ready to start uploading data.

The Data Sets

 

Now, before you can visualize anything, it has to be based on some data. Duh, right? Amazon will give you some datasets and analyses to use right out of the starting gate, so you can see what's possible. To do anything really useful, though, you'll want to use your own data sets for analysis.

QuickSight gives you a few options for data sources, such as social media or public data sets. The data sets portal shows you an example list of data sources that can immediately get you started. How cool is it that you can connect QuickSight to your GitHub repo to get some analytics about what's happening?

For the purposes of this post, I've decided to upload my own file, which I downloaded from data.world. This file includes information about MLB baseball games from 2016. I've uploaded the CSV file through QuickSight's interface, but you can also upload TSV, JSON, or XLSX files, as well as ELF/CLF log files.

Once the data has been uploaded, you can do your fancy visualizations.

Visualizing your Data

In the QuickSight console, you can click the “New analysis” button to get started.

The first step to creating an analysis is to select the data set. This should be the data that you just uploaded or configured in the previous section.

After the data is imported, you can select the “Create Analysis” button.

 

Once you’re in the analysis dashboard you’ll see that on the left hand side, you can drag and drop your fields, filter the fields and change the visualization types for your analysis. Adding fields to your analysis is as easy as dragging and dropping your fields onto the graph.

Now you can carve up your data any way you see fit, but I chose to look at some interesting data to see how beloved my Cubbies really are. To start, I looked at the attendance for the away teams. My theory was that home attendance would give you some great information about teams that people liked, but it has a problem: the size of the stadium factors in, as do the social aspects of baseball that have nothing to do with the teams playing, like going to the ball park for a business event or something to that effect. The attendance for the away teams might be a better representation of who the fans wanted to see play. I'm sure that no one here doubts the results of that visualization.

Partially so I could use another visual type, and partially to put a common misconception to bed, I looked at the wind direction for the Cubs home games. It's often been said that Chicago Cubs hitters have a huge advantage because the Chicago winds carry the baseball out of the park (for a home run) more than at other parks. If we look at the wind direction per game, you'll see that most of the time the wind is blowing in from right field, or moving from right to left, which means that many times it would be harder to hit a home run at Wrigley Field. If you're a left-handed pull hitter, you'll likely have to hit into the wind most days. Maybe Wrigley isn't a hitters' park after all? I'm just kidding, it is a home run park due to the power alleys, but this graph still seemed fun.

Also, if you have a bunch of graphs that you want to display at once, you can add multiple visuals and then share that out with your team. Here I’ve added three visualizations.

After which I can share them with whomever I’d like.


Creating a Story

One of the coolest things about QuickSight is the ability to tell a story. You can add multiple visualizations and have them played in a specific order so that they explain a story. As you see below I’ve taken three different visualizations and saved them as a story.

If I play them, they show up like a slide show where my reviewers just click "Next" to go from one slide to another. If I've done a great job with this, my reviewers should notice that the Chicago Cubs are clearly the world's favorite Major League Baseball team.

Summary

OK, the Cubs are great, but the real point of this post was to get you familiar with just a few of the things that you can do with AWS QuickSight. Being able to visualize your data sets quickly can be a huge boost to many organizations. Are you profitable? Are you reaching your social media audience? Whatever your needs, QuickSight can show you some quickly digestible information about your data. Set it up with your data sets once and check in often to see how things change, or build it once for a report and share it with your teams. What will you do with this service?

AWS IAM Indecision

Identity and Access Management (IAM) can be a confusing topic for people who are new to Amazon Web Services. There are IAM Users that can be used for authentication, or solutions that are part of AWS Directory Service such as Microsoft AD, Simple AD, or AD Connector. If none of these sound appealing, there is always the option to use federation with a SAML 2.0 solution like Okta, Ping, or Active Directory Federation Services (ADFS). If all of these options have given you a case of decision fatigue, then hopefully this post and the associated links will help you decide how your environment should be set up.

IAM Users

Your first option is the easiest to configure but comes with a few risks. An IAM User is a username and password created in the AWS IAM console. You can give IAM Users access to login to the console or through the API, but it also means it's a separate login from the account you probably use in your corporate environment. For example, you login to your laptop every morning with a corporate Active Directory login and then login to the AWS console with a completely different username and password. Maybe you've even decided to use the same username and password as your corporate AD, but they aren't synced, so you still need to manage them separately. While IAM Users are easy to set up, they present a challenge for enterprises who'd like to use a single login. There are other solutions available that limit operational complexity and the number of logins to manage, meaning fewer attack vectors.

Directory Services

AWS also provides a service called AWS Directory Service that provides several different options for authenticating both machines and users with your environment.

Simple AD – Simple AD provides a subset of Microsoft Active Directory services and is based on Samba 4. This service deploys a pair of domain controllers, with DNS, in a VPC across a pair of subnets for availability. The solution allows you to use this new directory as a Kerberos authentication source, but be aware that it doesn't allow you to create a trust relationship with your existing domain if you have one. Consider this option if you plan to set up a new domain for your AWS servers that will still be managed separately from your on-premises domain. Simple AD comes in two sizes: a small directory can handle around 500 users / 2,000 objects, and a large one can manage 5,000 users / 20,000 objects.

 

Microsoft AD – As you'd guess, Microsoft AD provides a full-blown Microsoft Active Directory, deployed in a similar fashion to Simple AD. A pair of Microsoft AD servers (2012 R2 as of now) is deployed across AZs to provide redundancy. Microsoft AD has an advantage over Simple AD in that you can create a trust relationship between these new domain controllers and your existing Microsoft Active Directory environment. Be aware that your directory cannot be extended to this new Microsoft AD instance, though a trust relationship can be created. Microsoft AD also comes in two sizes: Standard supports 5,000 users / 30,000 objects, and more than that requires the Enterprise option.

 

AD Connector – If you need to use your existing on-premises Active Directory, then you could consider AD Connector. AD Connector doesn't authenticate your users directly, but rather forwards requests on to your on-prem AD instances. This requires network connectivity between your VPC and your on-prem domain controllers, and if you lose that connectivity, logins will fail.

 

Federation

If you want to use your existing Active Directory solution as a login method for the AWS console or CLI, then federation might be your best bet. With federation, you can continue to use your existing corporate logins to authenticate to the AWS control plane. Be aware, though, that AD federation won't do anything for computer objects that need a domain to join when they are spun up; it allows console authentication only.

 

Breakdown

If you’re still unsure, perhaps this table will help illustrate the differences.

|  | Limits the number of user accounts? | Used for user logins and computer accounts? | Ability to create a trust with on-premises AD? | Requires network connectivity to corporate data center? |
| --- | --- | --- | --- | --- |
| IAM Users | No - new users would be created in the AWS console | No - cannot manage computer accounts | No | No |
| Simple AD | No - a new directory has to be maintained | Yes | No | No |
| Microsoft AD | Yes - if a trust relationship is created with on-premises AD | Yes | Yes | No - unless a trust relationship is created to on-premises AD |
| AD Connector | Yes | No - computer accounts are not managed with AD Connector | N/A | Yes |
| ADFS | Yes | No - only user authentication | N/A | Yes - requires network access from the AWS console to the ADFS server (over the public Internet) |
| ADFS with deployed domain controllers | Yes | Yes - ADFS for console access and the DCs for user/computer accounts | Yes - AD can be extended or a trust relationship added | Yes - if the on-prem directory has been extended to the AWS servers or a trust is created |

 

Resources

The following resources may help you with some facets of setting up the directory services, how federation can be used with or without role switching, and general info.

Setup of Simple AD

Setup of Microsoft AD

Setup of AD Connector

Setup of ADFS with AWS

AWS Federation with Role Switching

Manage Multiple AWS Accounts with Role Switching

A pretty common question that comes up is how to manage multiple AWS accounts from a user perspective. Multi-account setups are common, providing control plane separation between Production, Development, Billing, and Shared Services accounts, but do you need to set up federation with each of these accounts or create an IAM User in each one? That makes those accounts cumbersome to manage, and the more logins we have, the more chances one of them could get hacked.

First, let's look at two different patterns that can be used to authenticate with multiple AWS accounts. In the first method, we have either an IAM User (username and password stored in the AWS account's IAM service) or a federated user (username and password stored in a local identity provider) that can login to any of the accounts in the AWS environment. For this authentication pattern, identity federation would need to be set up with the identity provider for every account, or an IAM User would need to be created in each account, which means many logins to keep track of and manage. The overall pattern would look something like this:


In the second method, we're using a gateway account to handle all of our authentication into the AWS environment, meaning that a single login is required. Federated users or IAM Users login to this gateway account first, and from there use the Switch Role feature to assume a role in another account. This pattern looks similar to this:

 

If you prefer the first option, then you have what you need and just need to set up your authentication mechanisms with each account. If you prefer option two, where you authenticate against a single gateway account and role switch to your desired destination, then we should look deeper at how that role switching takes place.

 

Role Switching in the AWS Console

To role switch in the AWS web console, you would first login to your gateway account. This is usually a shared services or security related account where centralized management of users, groups, and roles can take place. From there you'll go to the login dropdown at the top of the console and select the "Switch Role" option. The Switch Role window will pop open and ask for an account number and a role to assume when you switch accounts. You can then give it a display name for the console and a color, which I've found really valuable, but your mileage may vary. When you're done, click Switch Role and you'll be switched to your destination account. You can go back to your gateway account at any time by going to the login dropdown and clicking "Back to [username]", and you'll role switch back to the original login.

 

After you've switched roles once, the browser will cache the last five roles you've switched to, and from then on you don't need to re-enter your account number and role. If you navigate to the login dropdown and select one of your cached roles, you'll be able to switch between accounts more quickly going forward, until you delete your browser cache or switch roles to more than five different accounts.

Switch Roles in the AWS CLI

First, let's look at switching roles if we login to the AWS CLI as an IAM User. Once you set up the AWS CLI, you'll have your credentials stored in the .aws/credentials file, which includes the access keys and secret keys that log you into your accounts. If you execute a command, you'll receive responses related to the default account that was set up.
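
A minimal sketch of what that file looks like; the profile name is the default one the CLI creates, and the key values here are AWS's documented placeholder examples:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY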

You can also modify the .aws/config file to include any roles that you might want to switch into. To do this, you give the profile a name and then specify the role_arn of the role that you'll be switching into, as well as the source profile that is allowed to switch from it.
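
A sketch of such a profile; the profile name and account number are hypothetical, and the Admins role matches the one created later in this post:

[profile prod-admin]
role_arn = arn:aws:iam::111111111111:role/Admins
source_profile = default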

When it's all done, you can run the same commands you normally would but specify the --profile [profile name] switch, and the CLI will run the command in the correct account. Below is an example of two identical commands that run in different AWS accounts depending on the profile switch.
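
For example, assuming the hypothetical prod-admin profile above, these two otherwise identical commands run against different accounts:

aws s3 ls
aws s3 ls --profile prod-admin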

If your company requires you to federate, this gets slightly more difficult, because now you need to login to your federation server and receive a token, which is passed to AWS for authentication. There is a great tutorial on the AWS blog on how to use Python to do this with ADFS 1.0, 2.0, and 3.0, found here: https://aws.amazon.com/blogs/security/how-to-implement-a-general-solution-for-federated-apicli-access-using-saml-2-0/ and if you need to do this, I urge you to read it thoroughly. When you've implemented the scripts, you'll have a similar login process, but the first federated login will update your .aws/credentials file to use your temporary token, and once that's complete you can role switch like we did before.

 

Setup Role Switching

For cross-account role switching to work properly, you must set up some configuration in both the source account (the account you'll be logging into and switching from) and the destination account (the account you'll role switch into).

First we'll start with the destination account, the account that you'll role switch into. The goal here is to create a role that other accounts have permission to assume. In this example I'm creating a role named "Admins" and allowing my source account to assume that role.

Open the IAM console and go to Roles. Click the “Create role” button.

Under the type of trusted entity, select the "Another AWS account" option. From there you'll need to enter the source account number that will have access to assume this role in this destination account. You could also require MFA or an external ID, but that is not covered in this blog post. (Note that an external ID cannot currently be used when switching roles through the console, so be aware of that.) Click the "Next: Permissions" button.

On the attach permissions policies screen, select what permissions this role will have on the account. I'm assuming my administrators are using this role, so I've given it full access. Click the "Next: Review" button.

On the review screen, give the role a name and complete the setup. Be sure to remember the name of this role, as it will be needed in a future step.
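
Behind the scenes, this wizard attaches a trust policy to the role that permits the source account to assume it. A sketch of what that generated policy looks like, with a placeholder for the source account number:

{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::[SOURCEACCTNUMBER]:root"
        },
        "Action": "sts:AssumeRole"
    }
}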

 

Next we'll move to the source account, the gateway account that logins are funneled through. Login to the source account and go to the IAM console again. The main task here is to grant a permission to the users who will have access to the destination accounts. This is done by creating a new policy that allows the assume role action, attached to the user, group, or role being granted access.

Create a new policy and add the JSON code shown below. The important parts to edit are the destination account number and the name of the role that was created.

The JSON starter is here so you can copy and paste to get started.

{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": [
            "arn:aws:iam::[YOURACCTNUMBER]:role/[YOURROLENAME]"
        ]
    }
}

The next step is to attach this policy to a user (less preferred) or group (more preferred) that will have access to assume the role in the destination account. I prefer to create a group for each of my accounts and attach the account-specific policy to it, as seen in the screenshots below.
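
If you'd rather script that attachment, it might look like this from the CLI; the group and policy names here are hypothetical:

aws iam attach-group-policy \
    --group-name prod-admins \
    --policy-arn arn:aws:iam::[YOURACCTNUMBER]:policy/AssumeProdAdmins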

 

Summary

There are several ways that your authentication mechanisms can be architected, and you should consider the options from both a security and a manageability perspective. Is it easier to manage multiple federated accounts or a single federated account that allows you to switch roles? Is it more difficult to get new IAM roles created for new accounts or to re-setup federation when you onboard a new account? Is it too much hassle to login and then role switch before doing work? These are all good questions that should be considered before building out your environment on AWS.


AWS Directory Service – AD Connector

Just because you've started moving workloads into the cloud doesn't mean you can forget about Microsoft Active Directory. Many customers simply stand up their own domain controllers on EC2 instances to provide domain services. But if you're moving to AWS, there are also some great services you can take advantage of to provide similar functionality. This post focuses on AD Connector, which makes a connection to your on-premises or EC2-installed domain controllers. AD Connector doesn't run Active Directory itself but rather forwards requests to your existing Active Directory instances. As such, in order to use AD Connector you need a VPN connection or Direct Connect to provide connectivity back to your data center. Also, you'll need credentials to connect to the domain. Domain Admin credentials will work, but as usual you should use as few privileges as possible, so delegate access to a user with the following permissions:

  • Read users and groups
  • Create computer objects
  • Join computers to the domain

Deploy

To deploy AD Connector within your existing AWS VPCs, go to the Directory Service from the services menu.

When the Directory Service page opens up you’ll see several options available to you, but for this post, choose AD Connector.

To set up a new directory, first enter the AD DNS name for the AD domain you'll be connecting with. You can optionally provide a NetBIOS name if necessary. Next, enter a username and password for a user that has the permissions we discussed above. After this, you'll need to specify the DNS address for your domain. This should be the IP address of your DNS servers, which in my case are also my domain controllers. You'll also need to decide which VPC your AD Connectors will live in, and which subnets. Remember that these subnets need to be able to communicate with your existing AD instances, so if they are on-premises you'll need a VPN or Direct Connect. If they live within your AWS environment, make sure those subnets can communicate with the ones specified in this window.

 

The next screen shows you a review before you deploy. If it looks good, click the “Create AD Connector” button for the magic to happen in the background.

You should see a green status message stating that the magic is happening.

 

It will take a bit to deploy but when done you’ll see a new directory listed in your portal. Select the directory that was created and you’ll see some information needed for the rest of this post. Specifically, you’ll want to take note of the “On-premises DNS Address” listed in the details page for the following section.

 

If you were looking for how to do this through CloudFormation, then this post isn't your friend. I also prefer to do everything through CloudFormation when possible, but found no documentation for completing this task through CFn. If you find the answer, please post it in the comments and I'll update the post.

 

Modify DHCP Option Sets

You've connected your domain controllers with AWS now, but your clients will need to be reconfigured to use these two domain controllers for their DNS resolution. To provide this for the entire VPC, we'll create a new DHCP option set and assign it to the VPC that will use these domain controllers. Go to the VPC menu in the console and find the DHCP Options Sets link. Create a new option set with the domain name and DNS servers of the on-premises AD servers we just connected.

 

Once you've created the option set, you'll need to associate it with your VPC(s) so that new addresses are handed out with the appropriate settings. NOTE: You can only have one DHCP option set associated with a VPC at a time. To assign the new option set, select the VPC from the VPC menu, click the Actions button, then select the Edit DHCP Options Set link. You'll then have a dropdown to select your preferred option set.
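
If you'd rather script this, a sketch with the AWS CLI; the domain name, DNS server addresses, and IDs below are placeholders for your own values:

aws ec2 create-dhcp-options --dhcp-configurations \
    "Key=domain-name,Values=hollow.local" \
    "Key=domain-name-servers,Values=10.10.50.5,10.10.50.6"
aws ec2 associate-dhcp-options --dhcp-options-id dopt-1a2b3c4d --vpc-id vpc-1a2b3c4d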

 

Configure Roles

Before we start deploying member servers, we'll need to create a role in the IAM console. This role gives Simple Systems Manager (SSM, or Systems Manager for short) permission to join new EC2 instances to the domain. To create this role, go to the IAM console and click on Roles. Click the "Create role" button.

 

When the create role window opens up select “AWS service” and then select EC2 under the service that will use the role. Click the “Next:permissions” button to continue.

In the permissions screen search for AmazonEC2RoleforSSM and select it. Click the “Next:Review” button.

Review the screen and give the role a name before clicking the "Create role" button.
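
If you'd rather script this step, a rough sketch with the AWS CLI; the role name EC2DomainJoin matches the name referenced in the auto-join section below, and ec2-trust.json is a hypothetical local file containing the standard EC2 trust policy:

# Create the role with an EC2 trust policy (ec2-trust.json is a file you supply)
aws iam create-role --role-name EC2DomainJoin \
    --assume-role-policy-document file://ec2-trust.json
# Attach the AWS managed policy for Systems Manager
aws iam attach-role-policy --role-name EC2DomainJoin \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
# Unlike the console, the CLI doesn't create an instance profile automatically
aws iam create-instance-profile --instance-profile-name EC2DomainJoin
aws iam add-role-to-instance-profile --instance-profile-name EC2DomainJoin \
    --role-name EC2DomainJoin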

 

Auto-Join to the Domain

Now that your directory is set up, you can have new EC2 instances (Windows only) automatically join your domain when they are created. To do this, AWS uses the EC2DomainJoin role we created earlier. To test it, deploy a new EC2 instance into the VPC you used with AD Connector. When you get to the "Configure Instance" stage of deployment, you'll need to ensure a few settings are configured: set "Domain join directory" to your new directory, and assign the EC2DomainJoin role to the instance at creation.

After you're done deploying, you should see your computer object in your on-premises Active Directory console.

 

Use AD Connector to Authenticate to the AWS Management Console

You can use AD Connector to do more things in AWS, such as using your on-premises domain to authenticate to the console. This limits the number of IAM Users that need to be created in the AWS console and hopefully helps protect the environment even further.

First, we create an endpoint so that AWS services can access the directory. Enter a name for the endpoint and click the "Create Access URL" button.

 

Click "Continue" to proceed with creating the endpoint; notice that you can't change it later. There are other services integrated with AWS Directory Service, but for this example we'll just use the Management Console. Navigate back to your directory service details and look towards the bottom of the screen under AWS apps & services. Click AWS Management Console, and when the new window opens click the "Enable Access" button.

Before users and groups within AD can login to the console with their AD credentials, another role needs to be created to provide access to the console. Go to the IAM console again and create another role. This time, choose Directory Service as the service that will use the role.

You don't need to assign any additional permissions (at this time) since we're only demonstrating that this role can be used to authenticate. If you plan to use this role to grant users permissions to anything in the console, those permissions need to be added. On the last step, give the role a name.

Once you’ve created the role, go back to your directory and click the Management Console Access link.

From here you’ll see a section for Users and Groups to Roles. A single Role will be listed which is what was just built in the previous few steps. Click the role to assign users from your on-premises domain.

In the Add Users and Groups to Role window, type a name. I chose my own AD user account.

When done you’ll see your user(s) added to the directory.

Now, if you go to your endpoint URL (hint: the link is located next to the Management Console entry in your directory), you'll be taken to a login page. Enter the username and password of the user that you added, and you've used AD Connector and your on-premises directory to sign in to the AWS Management Console.

Summary

Congratulations on setting up AD Connector. You can now use your existing Active Directory environment to login to the AWS console and automatically have new Windows instances joined to the domain for you.

 

AWS Directory Service – Simple AD

Just because you've started moving workloads into the cloud doesn't mean you can forget about Microsoft Active Directory. Many customers simply stand up their own domain controllers on EC2 instances to provide domain services. But if you're moving to AWS, there are also some great services you can take advantage of to provide similar functionality. This post focuses on Simple AD, which is based on Samba 4 and handles a subset of the features that the Microsoft AD directory type provides. The service still allows you to use Kerberos authentication, manage users and computers, and provide DNS services. One of the major differences from Microsoft AD is that you can't create a trust relationship with your existing domain, so if you need that functionality, look at Microsoft AD instead. Simple AD gives you a great way to quickly stand up new domains and cut down on the things you need to manage, such as OS patches.

Deploy

To deploy Simple AD within your existing AWS VPCs, go to the Directory Service from the services menu.

When the Directory Service page opens up you’ll see several options available to you, but here we’ll stick with Simple AD. Locate Simple AD and click the “Set up directory” link.

 

First, enter a Directory DNS name. This is an FQDN for your environment. I use "hollow.local" for my on-prem domain, so I like to use something like sbx1.hollow.local for my sandbox cloud environment. You can optionally provide a NetBIOS name if necessary. Next, enter an administrator password. This will be your domain admin password, and you'll need it later to configure the infrastructure.

Next select a size. Simple AD comes in two sizes and the main difference is the number of objects the directory can manage. Small can handle about 500 users or 2000 objects and Large supports up to 5000 users or 20,000 objects. If you need more than this, consider Microsoft AD instead of Simple AD.

Lastly, select the VPC that the pair of domain controllers will be deployed in, and then select which subnets they should live in. Private subnets are a good location, as most people I know don't allow access to their domain controllers over the Internet. Click the "Next" button.

The next screen shows you a review before you deploy. If it looks good, click the “Create Simple AD” button for the magic to happen in the background.

Once done you’ll get a status message that the directory is being created.

 

If you aren’t all about deploying this through the console, Simple AD can be deployed through CloudFormation so you can have even more Infrastructure as Code (IaC). Here is a quick snippet for doing the steps above through a CloudFormation Template in JSON format.

{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description": "Simple AD Service",


    "Parameters" : {

        "SimpleADPW" : {
          "Type": "String"
        },

        "subnetID1": {
            "Description": "Subnet ID to provision instance in",
            "Type": "AWS::EC2::Subnet::Id",
            "Default": ""
        },

        "subnetID2": {
            "Description": "Subnet ID to provision instance in",
            "Type": "AWS::EC2::Subnet::Id",
            "Default": ""
        },

        "VPC": {
            "Description": "The VPC to deploy resources into",
            "Type": "AWS::EC2::VPC::Id",
            "Default": ""
        },

        "DirectoryName" : {
            "Description" : "Unique Name for Directory",
            "Type": "String"
        },

        "ADSize" : {
            "Description" : "AD Directory Size",
            "Type" : "String",
            "AllowedValues": [
                "Small",
                "Large"
            ]
        }

    },


    "Resources": {
        "myDirectory" : {
          "Type" : "AWS::DirectoryService::SimpleAD",
          "Properties" : {
            "Name" : { "Ref" : "DirectoryName"},
            "Password" : { "Ref" : "SimpleADPW" },
            "VpcSettings" : {
              "SubnetIds" : [ { "Ref" : "subnetID1" }, { "Ref" : "subnetID2" } ],
              "VpcId" : { "Ref" : "VPC" }
            },
            "Size" : { "Ref" : "ADSize"}
          }
        }
    }
}
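
To launch the template, something like this should work from the CLI; the stack name, file name, and parameter values here are placeholders for your own:

aws cloudformation create-stack --stack-name simple-ad \
    --template-body file://simple-ad.json \
    --parameters ParameterKey=SimpleADPW,ParameterValue=YourPassword1 \
        ParameterKey=DirectoryName,ParameterValue=sbx1.hollow.local \
        ParameterKey=subnetID1,ParameterValue=subnet-1a2b3c4d \
        ParameterKey=subnetID2,ParameterValue=subnet-2b3c4d5e \
        ParameterKey=VPC,ParameterValue=vpc-1a2b3c4d \
        ParameterKey=ADSize,ParameterValue=Small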

Whichever deployment method you choose, it will take a bit to deploy but when done you’ll see a new directory listed in your portal. Select the directory that was created and you’ll see some information needed for the rest of this post. Specifically, you’ll want to take note of the DNS Address listed in the details page for the following section.

Modify DHCP Option Sets

You’ve deployed your domain controllers, but your clients will need to be reconfigured to use these two domain controllers for their DNS resolution. To provide this for the entire VPC, we’ll want to create a new DHCP Option Set and assign it to the VPC or VPCs that will use these Domain Controllers. Go to your VPC menu in the console and find the DHCP Options Set link. Create a new option set with the Domain name and DNS servers from your new Simple AD servers that we just created.

Once you've created the option set, you'll need to associate it with your VPC(s) so that new addresses are handed out with the appropriate settings. NOTE: You can only have one DHCP option set associated with a VPC at a time. To assign the new option set, select the VPC from the VPC menu, click the Actions button, then select the Edit DHCP Options Set link. You'll then have a dropdown to select your preferred option set.

 

As you can see my options set is applied to my VPC now.

 

Configure Roles

Before we start deploying member servers, we'll need to create a role in the IAM console. This role gives Simple Systems Manager (SSM, or Systems Manager for short) permission to join new EC2 instances to the new domain. To create this role, go to the IAM console and click on Roles. Click the "Create role" button.

 

When the create role window opens up select “AWS service” and then select EC2 under the service that will use the role. Click the “Next:permissions” button to continue.

In the permissions screen search for AmazonEC2RoleforSSM and select it. Click the “Next:Review” button.

Review the screen and give the role a name before clicking the "Create role" button.

 

Configure Management Hosts

You're ready to go, but there isn't an interface within the AWS console for you to create new users, groups, etc. like you normally would with Active Directory. This is a normal AD setup, though, so to manage our AD infrastructure we need to deploy a member server and then install our AD tools on it. So first, let's install a new member server that is joined to our new domain.

Deploy a new EC2 instance with a Windows Server 2016 operating system as you normally would, but take notice that the console has a pair of subtle changes that need to be set as we deploy. In step 3 – Configure Instance, you'll see that we need to select the "Domain join directory" setting, which should show our new domain. Also, under IAM role we need to select the role we created in the previous section. This is critical so the machine can be joined to the domain as it's deployed. Finish deploying your server.

Once the server has been deployed, it will restart to join the domain, so wait a bit before trying to login to it. When it's finished, connect to the instance over Remote Desktop and login with a domain user account. Up to this point the only user that has been created is "administrator" with the password you specified. Login to the member server and install the Lightweight Directory Services tools from Server Manager.
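
If you'd rather script the tools installation, something like this from an elevated PowerShell prompt should pull in the same directory and DNS management tooling (the feature names assume the Windows Server 2016 RSAT packages):

Install-WindowsFeature -Name RSAT-ADDS-Tools,RSAT-AD-PowerShell,RSAT-DNS-Server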

 

After the tools are installed, you'll see the Active Directory tools you're accustomed to seeing. If you look in Active Directory Users and Computers (ADUC), you'll notice some interesting things. Under the Domain Controllers folder, two DCs will be listed for the Simple AD servers. These are the two DCs deployed for you by the AWS service.

Also, if you look in your Computers folder under aws, your member server will be listed.

 

Use Simple AD to Authenticate to the AWS Management Console

You can use Simple AD to do more things in AWS, such as using your new domain to authenticate to the console. This limits the number of IAM Users that need to be created in the AWS console and hopefully helps protect the environment even further.

First, we create an endpoint so that the AWS services can access the new directory. Enter a name for the endpoint and click the “Create Access URL”.


Click Continue to proceed with creating the endpoint; note that you can't change it later.

 

There are other services integrated with Simple AD but for this example, we’ll just use the Management Console. Navigate back to your directory service details and look towards the bottom of the screen under AWS apps & services. Click the AWS Management Console. When the new window opens click the “Enable Access” button.

 

Before the users and groups within AD can login to the console with their AD credentials, another Role needs to be created to provide access to the console. Go to the IAM console again and create another role. This time when you create a new role, choose the Directory Service as the service that will use the role.

You don’t need to assign any additional permissions (at this time) since we’re only demonstrating that this role can be used to authenticate. If you plan to use this role for users to have permissions to use anything in the console, those permissions need to be added. On the last step, give the role a name.

Once you’ve created the role, go back to your directory and click the Management Console Access link.

From here you’ll see a section for Users and Groups to Roles. A single Role will be listed which is what was just built in the previous few steps. Click the role to assign users from your Simple AD domain.

In the Add Users and Groups to Role window type a name. I added a new AD user for my own account in this example.

When done you’ll see your user(s) added to the directory.

Now, if you go to your endpoint URL (hint: the link is located next to the Management Console entry in your directory), you'll be taken to a login page. Enter the username and password of the user that you added, and you've used your new Simple AD service as the directory store for the AWS Management Console.

Summary

You should now have a working Simple AD service up and running in your AWS account and can manage users in much the same way you've always managed them in AD. Now that you've got your domain working correctly, you can go about building all those apps you've been dying to get to in your cloud. They'll have an authentication method that is secure and familiar to you, and you won't have to worry about patching and managing those pesky servers. Happy coding!

 

AWS Directory Service – Microsoft AD

Just because you've started moving workloads into the cloud doesn't mean you can forget about Microsoft Active Directory. Many customers simply stand up their own domain controllers on EC2 instances to provide domain services. But if you're moving to AWS, there are also some great services you can take advantage of to provide similar functionality. This post focuses on Microsoft AD, a Server 2012 R2 based domain service that provides a pair of domain controllers across Availability Zones and also handles DNS. This service is the closest to a full-blown Active Directory that you'd host on premises. You can even create a trust between the Microsoft AD deployed in AWS and your on-prem domain. You cannot, however, extend your on-premises domain into Microsoft AD at the time of this writing. If you wish to extend your existing domain, you should consider building your own DCs on EC2 instances, where you have full control over your options.

Microsoft AD gives you a great way to quickly stand up new domains and cut down on the things you need to manage, such as OS patches, but you will have to put up with some limitations in exchange for this ease of use. For example, right now it's only at the Server 2012 R2 domain functional level, so if you're already at Server 2016 in your existing domain, you'll have to use a downgraded version in AWS if you choose this solution.

Deploy

To deploy Microsoft AD within your existing AWS VPCs, go to the Directory Service from the services menu.

When the Directory Service page opens up you'll see several options available to you, but here we'll stick with Microsoft AD. Locate Microsoft AD and click the "Set up directory" button.

To set up a new directory, first pick an edition. This decision comes down to how big your directory will be. If you need to support more than 5,000 employees or 30,000 managed objects, then you should pick Enterprise (and you'll pay more for it); otherwise Standard should be sufficient.

After this, enter a Directory DNS name. This is an FQDN for your environment. I use "hollow.local" for my on-prem domain, so I like to use something like aws.hollow.local for my cloud environment. You can optionally provide a NetBIOS name if necessary.

Next, enter an administrator password. This will be your domain admin password and you’ll need this later to configure the infrastructure.

Lastly, select the VPC that a pair of domain controllers will be deployed in, and then select which subnets they should live in. Private subnets make a good location for this as most people I know don’t allow access to their domain controllers from over the Internet. Click the “Next” button.

The next screen shows you a review before you deploy. If it looks good, click the “Create Microsoft AD” button for the magic to happen in the background.

If you aren’t all about deploying this through the console, Microsoft AD can be deployed through CloudFormation so you can have even more Infrastructure as Code (IaC). Here is a quick snippet for doing the steps above through a CloudFormation Template in JSON format.

{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description": "Microsoft Directory Service",


    "Parameters" : {

        "MicrosoftADPW" : {
          "Type": "String"
        },

        "subnetID1": {
            "Description": "Subnet ID to provision instance in",
            "Type": "AWS::EC2::Subnet::Id",
            "Default": ""
        },

        "subnetID2": {
            "Description": "Subnet ID to provision instance in",
            "Type": "AWS::EC2::Subnet::Id",
            "Default": ""
        },

        "VPC": {
            "Description": "The VPC to deploy resources into",
            "Type": "AWS::EC2::VPC::Id",
            "Default": ""
        },

        "DirectoryName" : {
            "Description" : "Unique Name for Directory",
            "Type": "String"
        }

    },


    "Resources": {
        "myDirectory" : {
          "Type" : "AWS::DirectoryService::MicrosoftAD",
          "Properties" : {
            "Name" : { "Ref" : "DirectoryName"},
            "Password" : { "Ref" : "MicrosoftADPW" },
            "VpcSettings" : {
              "SubnetIds" : [ { "Ref" : "subnetID1" }, { "Ref" : "subnetID2" } ],
              "VpcId" : { "Ref" : "VPC" }
            }
          }
        }
    }
}

Whichever deployment method you choose, it will take a bit to deploy but when done you’ll see a new directory listed in your portal. Select the directory that was created and you’ll see some information needed for the rest of this post. Specifically, you’ll want to take note of the DNS Address listed in the details page for the following section.

Modify DHCP Option Sets

You’ve deployed your domain controllers, but your clients will need to be reconfigured to use these two domain controllers for their DNS resolution. To provide this for the entire VPC, we’ll want to create a new DHCP Option Set and assign it to the VPC or VPCs that will use these Domain Controllers. Go to your VPC menu in the console and find the DHCP Options Set link. Create a new option set with the Domain name and DNS servers from your new Microsoft AD servers that we just created.

Once you've created the option set, you'll need to associate it with your VPC(s) so that new addresses are handed out with the appropriate settings. NOTE: You can only have one DHCP option set associated with a VPC at a time. To assign the new option set, select the VPC from the VPC menu, click the Actions button, then select the Edit DHCP Options Set link. You'll then have a dropdown to select your preferred option set.

 

As you can see my options set is applied to my VPC now.

 

Configure Roles

Before we start deploying member servers, we'll need to create a role in the IAM console. This role gives Simple Systems Manager (SSM, or Systems Manager for short) permission to join new EC2 instances to the new domain. To create this role, go to the IAM console and click on Roles. Click the "Create role" button.

 

When the create role window opens up select “AWS service” and then select EC2 under the service that will use the role. Click the “Next:permissions” button to continue.

In the permissions screen search for AmazonEC2RoleforSSM and select it. Click the “Next:Review” button.

Review the screen and give the role a name before clicking the "Create role" button.

 

Configure Management Hosts

You're ready to go, but there isn't an interface within the AWS console for you to create new users, groups, etc. like you normally would with Active Directory. This is a normal AD setup, though, so to manage our AD infrastructure we need to deploy a member server and then install our AD tools on it. So first, let's install a new member server that is joined to our new domain.

Deploy a new EC2 instance with a Windows Server 2016 operating system as you normally would, but take notice that the console has a pair of subtle changes that need to be set as we deploy. In step 3 – Configure Instance, you'll see that we need to select the "Domain join directory" setting, which should show our new domain. Also, under IAM role we need to select the role we created in the previous section. This is critical so the machine can be joined to the domain as it's deployed. Finish deploying your server.

Once the server has been deployed, it will restart to join the domain, so wait a bit before trying to log in to it. When it's finished, connect to the instance over Remote Desktop and log in with a domain user account. Up to this point the only user that has been created is "Admin" with the password you specified. Log in to the member server and install the AD management tools. The command below should install everything you need if you run it from a PowerShell console.

Install-WindowsFeature -Name GPMC,RSAT-AD-PowerShell,RSAT-AD-AdminCenter,RSAT-ADDS-Tools,RSAT-DNS-Server

After the tools are installed, you'll see your Active Directory tools like you're accustomed to seeing. If you look in Active Directory Users and Computers (ADUC) you'll notice some interesting things. Under the Domain Controllers folder, two DCs will be listed, both of which are Global Catalog servers. These are the two DCs deployed for you through the AWS service.

If you look further, you'll see an aws folder which has Computers and Users containers in it. You'll see your Admin account listed here with a note not to delete it.

Also, if you look in your Computers folder under aws, your member server will be listed.

Generally, the default folder named “Computers” under the root is where member servers are listed, but this is not the case for the Microsoft AD service.


Use Microsoft AD to Authenticate to the AWS Management Console

You can use Microsoft AD to do more things in AWS, such as using your new domain to authenticate to the console. This limits the number of IAM users that need to be created in the AWS console and hopefully helps to protect the environment even further.

First, we create an endpoint so that the AWS services can access the new directory. Enter a name for the endpoint and click the "Create Access URL" button.


Click Continue to proceed with creating the endpoint. Note that the access URL can't be changed later.


There are other services integrated with Microsoft AD but for this example, we’ll just use the Management Console. Navigate back to your directory service details and look towards the bottom of the screen under AWS apps & services. Click the AWS Management Console. When the new window opens click the “Enable Access” button.

Before the users and groups within AD can log in to the console with their AD credentials, another role needs to be created to provide access to the console. Go to the IAM console again and create another role. This time, choose Directory Service as the service that will use the role.

You don’t need to assign any additional permissions (at this time) since we’re only demonstrating that this role can be used to authenticate. If you plan to use this role for users to have permissions to use anything in the console, those permissions need to be added. On the last step, give the role a name.

Once you’ve created the role, go back to your directory and click the Management Console Access link.

From here you’ll see a section for Users and Groups to Roles. A single Role will be listed which is what was just built in the previous few steps. Click the role to assign users from your Microsoft AD domain.

In the Add Users and Groups to Role window type a name. I chose the “Admin” account because I didn’t bother creating any new users. [This blog post is long enough already!]

When done you’ll see your user(s) added to the directory.

Now, if you go to your endpoint URL (hint: the link is located next to the Management Console in your directory) you'll be taken to a login page. Enter the username and password of the user that you added, and you've just used your new Microsoft AD directory as the identity store for the AWS Management Console.

Summary

If you’re still reading this, I commend you but you should have a working Microsoft AD service up and running in your AWS account and can now manage users in much the same way you’ve always managed them in AD. Now that you’ve got your domain working correctly, you can go about building all those apps you’ve been dying to get to in your cloud. And now they’ll have an authentication method that is secure and familiar to you but won’t have to worry about those pesky servers being patched, and managed. Happy coding!


The post AWS Directory Service – Microsoft AD appeared first on The IT Hollow.

]]>
https://theithollow.com/2018/04/09/aws-directory-service-microsoft-ad/feed/ 4 8667
Protect Your AWS Accounts with GuardDuty https://theithollow.com/2018/04/02/protect-your-aws-accounts-with-guardduty/ https://theithollow.com/2018/04/02/protect-your-aws-accounts-with-guardduty/#respond Mon, 02 Apr 2018 14:05:29 +0000 https://theithollow.com/?p=8651 Locking down an AWS environment isn’t really that if you know what threats you’re protecting against. You have services such as the Web Application Firewall, Security Groups, Network Access Control Lists, Bucket Policies and the list goes on. But many times you encounter threats from malicious attackers just trying to probe which vulnerabilities might exist […]

The post Protect Your AWS Accounts with GuardDuty appeared first on The IT Hollow.

]]>
Locking down an AWS environment isn't really that difficult if you know what threats you're protecting against. You have services such as the Web Application Firewall, Security Groups, Network Access Control Lists, Bucket Policies, and the list goes on. But many times you encounter threats from malicious attackers simply probing for vulnerabilities that might exist in your cloud. AWS has built a service called Amazon GuardDuty, based on AWS machine learning tools and threat intelligence feeds, to help monitor and protect your environment. GuardDuty currently reads VPC Flow Logs (used for network traffic analysis) and CloudTrail logs (used for control plane access analysis) along with DNS log data to protect an AWS environment. GuardDuty uses threat intelligence feeds to alert you when your workloads may be communicating with IP addresses known to be malicious, and its machine learning of suspicious patterns can alert you when privilege escalation occurs.

At this point you’re probably thinking that Amazon GuardDuty sounds like a pretty useful tool but have two big questions: How much does it cost? How difficult is it to use?

Pricing

Amazon GuardDuty is a fairly new service, and that comes with some benefits. Primarily, you get a 30-day free trial of GuardDuty on a new account. This gives you an opportunity to kick the tires before you decide to start paying for it for real. One of the things I really like about this trial is that the console will show you exactly where you are in the trial period and how much it would cost if you weren't in one.

OK, so what are the prices after the trial period ends? Just like most services, it varies by region. If you're in the us-east-1 region, you can expect to pay about $1 per GB of VPC Flow Logs analyzed for the first 500 GB. After that the price falls to $0.50 per GB for the next 2,000 GB, and then to $0.25 per GB thereafter. Like most services, the price drops with volume. For CloudTrail analysis, expect to pay $4.00 per one million CloudTrail events.

As you can see, the pricing is pretty reasonable. In a small environment or lab you'll probably pay a couple bucks per month; for example, 2 GB of flow logs ($2) plus a quarter million CloudTrail events ($1) works out to about $3. A larger environment will obviously pay more, but it's probably worth it for the added security it brings to a production workload.

Setup GuardDuty

OK, so you’re sold on how Guard Duty works, and cool with the pricing. But machine learning and thread intelligence sounds tricky to manage. How hard is it to setup? First, lets take a look at setting this up through the console.

Go to the Amazon GuardDuty service from your list and you’ll get the familiar “Get started” screen since you’ve never set it up before. Click the “Get started” button.

On the first screen, you can view the role permissions, but if you're good with those, click the "Enable GuardDuty" button at the bottom right-hand corner of the screen.

Well, that’s it. You’ve enabled GuardDuty and it can help protect you now. This does assume that you’ve setup VPC Flow logs and CloudTrail logs for your environment already but CloudTrail for example is now enabled by default. You did it!

From here, you can look at any findings that GuardDuty came back with. Initially you won't have any, but as time goes on GuardDuty will list potential issues on this screen so that you can take action on them.


Additional Configurations

If you want to do some more optional configuration, you can look at the GuardDuty portal and configure some of your own things. First, in the Lists menu within GuardDuty, you can add a trusted IP list or a threat list for unwanted traffic. Maybe you want to make sure to whitelist your corporate networks, or add known bot networks to a threat list for notification.
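If you'd rather manage lists as code, a minimal sketch using the AWS::GuardDuty::ThreatIntelSet resource looks like this; the detector ID parameter, bucket, and list names are placeholders for your own values.

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Register a custom threat list with an existing GuardDuty detector",

    "Parameters": {
        "DetectorId": {
            "Description": "ID of the GuardDuty detector in this account",
            "Type": "String"
        }
    },

    "Resources": {
        "myThreatIntelSet": {
            "Type": "AWS::GuardDuty::ThreatIntelSet",
            "Properties": {
                "Activate": true,
                "DetectorId": { "Ref": "DetectorId" },
                "Format": "TXT",
                "Location": "https://s3.amazonaws.com/my-threat-bucket/threat-list.txt",
                "Name": "MyThreatList"
            }
        }
    }
}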

If you have a multi-account environment, you can set up GuardDuty to use Master and Member account relationships. For instance, maybe you have an AWS security account used for logging, patching, authorization, and other security-related solutions. Maybe this account gets set up as a Master GuardDuty account and your other accounts are Member accounts. This lets those Member accounts send their GuardDuty data to a single account where it can be analyzed across all your environments.
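The member side of that relationship can also be expressed in CloudFormation. A minimal sketch, assuming a placeholder member account ID and email address:

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Invite a member account to report GuardDuty findings to this master account",

    "Parameters": {
        "DetectorId": {
            "Description": "ID of the master account's GuardDuty detector",
            "Type": "String"
        }
    },

    "Resources": {
        "myMember": {
            "Type": "AWS::GuardDuty::Member",
            "Properties": {
                "DetectorId": { "Ref": "DetectorId" },
                "MemberId": "111111111111",
                "Email": "security@example.com",
                "Status": "Invited",
                "Message": "Please join the central GuardDuty account."
            }
        }
    }
}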

If you're just dying to learn what kinds of things might show up in your findings window, you can force sample findings to appear. This will put one entry for each type of finding in your findings pane with a [SAMPLE] tag so you know it isn't real.

Once you’ve added the sample findings, you can click into each of them to get additional details such as the Portscan entry below.

Alerting and Auto Remediation

GuardDuty is pretty cool, but most people don’t want to continuously check on those findings. Luckily, CloudWatch Event rules can be integrated to take action based on a new finding. From the CloudWatch portal go to Events –> Rules and add a new source of GuardDuty and an Event Type of GuardDuty Finding. From there you can specify a target such as an SNS topic for email alerts, or a Lambda Function so that you can have a script auto-remediate your environment based on a finding that occurs.

Setup with CloudFormation

The GuardDuty setup with CloudFormation is also really simple. Below is an example that enables GuardDuty on a new account; it also creates an SNS topic, a topic policy that allows CloudWatch Events to publish to the topic (without this, findings never reach it), and a subscription to that topic so that new findings automatically trigger email notifications.

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "GuardDuty with CloudWatch Event Rule and SNS Topic",


    "Parameters": {

        "Environment": {
            "Type": "String"
        },

        "EmailAddress" : {
          "Type": "String"
        }

    },

    "Resources": {

      "mydetector": {
        "Type": "AWS::GuardDuty::Detector",
        "Properties": {
          "Enable": true
        }
      },

      "GDSNSTopic": {
        "Type": "AWS::SNS::Topic",
        "Properties": {
          "DisplayName": { "Fn::Join": [ "-", [ "GuardDuty", { "Ref": "Environment" }, "SNSTopic" ] ] },
          "TopicName": { "Fn::Join": [ "-", [ "GuardDuty", { "Ref": "Environment" }, "SNSTopic" ] ] }
        }
      },

      "GDSNSTopicPolicy": {
        "Type": "AWS::SNS::TopicPolicy",
        "Properties": {
          "Topics": [ { "Ref": "GDSNSTopic" } ],
          "PolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [ {
              "Sid": "AllowCloudWatchEvents",
              "Effect": "Allow",
              "Principal": { "Service": "events.amazonaws.com" },
              "Action": "sns:Publish",
              "Resource": { "Ref": "GDSNSTopic" }
            } ]
          }
        }
      },

      "GDCWEvent": {
        "Type": "AWS::Events::Rule",
        "Properties": {
          "Description": "GuardDuty Event Rule",
          "EventPattern":
          {
            "source": [
              "aws.guardduty"
            ],
            "detail-type": [
              "GuardDuty Finding"
            ]
          },
          "State": "ENABLED",
          "Targets": [{
            "Arn": { "Ref" : "GDSNSTopic"},
            "Id": "TargetGuardDutySNSTopic"
          }]
        }
      },

      "GuardDutySubscription" : {
        "Type" : "AWS::SNS::Subscription",
        "Properties" : {
          "Endpoint" : { "Ref" : "EmailAddress" },
          "Protocol" : "email",
          "TopicArn" : {"Ref" : "GDSNSTopic"}
        }
      }
    }
}

Summary

Amazon GuardDuty is a pretty useful service that helps automatically find potential security threats within your AWS accounts. The price isn't unreasonable, and the configuration is incredibly simple for a tool powered by machine learning. Try it out for yourself, at least for the 30-day free trial.

The post Protect Your AWS Accounts with GuardDuty appeared first on The IT Hollow.

]]>
https://theithollow.com/2018/04/02/protect-your-aws-accounts-with-guardduty/feed/ 0 8651
Fill Your Skills Tank https://theithollow.com/2018/03/26/fill-skills-tank/ https://theithollow.com/2018/03/26/fill-skills-tank/#respond Mon, 26 Mar 2018 14:10:55 +0000 https://theithollow.com/?p=8635 Information Technology is a very difficult field to keep up with. Not only does computing power increase year after year, making the number of things we can do with computers increase, but drastic transformations always plague this industry. Complete paradigm shifts are a major part of our recent past such as mainframes, to client/server, to […]

The post Fill Your Skills Tank appeared first on The IT Hollow.

]]>
Information Technology is a very difficult field to keep up with. Not only does computing power increase year after year, expanding the number of things we can do with computers, but drastic transformations constantly reshape the industry. Complete paradigm shifts are a major part of our recent past: from mainframes, to client/server, to virtualization, to cloud computing. In addition to these changes, there are also silos of technologies we might want to focus on, such as database design, programming, infrastructure, or cloud computing. Inside each of these categories there are different platforms to learn; if you are a programmer, do you know C++, Java, Python, or COBOL?

So what is a technician to do? The important thing is to keep grinding through new technologies and never give up on learning new things. Your career depends on learning new things all the time.

An Example

The main constraint we have with learning is usually time. Time is fixed and there is only so much of it in a day. You need to take care of personal things, work, fun, family, and studying. So if your skill capacity is based on your time, then your skill capacity is also fairly fixed. Everyone's skill capacity will be different because some people have more study time, fewer distractions, and learn at different rates, but in any case your own skill capacity is fairly fixed. As you learn new things, it's like turning on a faucet that fills up your skills, but be aware that some of these skills will not be needed down the road. When technology changes and winners and losers are decided, some of the skills that you have may no longer be useful. Like the HD DVDs and Zunes of the world, not all your skills will stay valuable. You can think of your skill reservoir as having a drain with no stopper on it that slowly leaks your skills out into the ground. You need to learn faster than the drain empties or you won't have any usable skills in the industry. I've often heard this referred to as walking up a down escalator: if you don't keep moving, you'll be at the bottom of the stairs again.


You have plenty of choices here, such as what to learn and how much of it to learn. For example, you could pick an individual technology and learn it to an expert level. I use the CCIE and VCDX certifications as an example of this in the diagram below. You've filled up your skill capacity with either a Cisco Certified Internetwork Expert (CCIE) or a VMware Certified Design Expert (VCDX), and that might fill up most of your skill capacity. The drain is still dribbling out your skills as new versions and updates come out. You need to continually learn new things to keep up with that expert level of knowledge.

You can also learn several different things at a lesser level of detail. In the example below, instead of learning one technology really deeply, we've learned several technologies at differing levels: a Cisco Certified Network Professional (CCNP), Microsoft Certified Solutions Associate (MCSA), or a VMware Certified Advanced Professional (VCAP). The benefit of this choice is that employers might be looking for one but not all of these skills. This broad level of knowledge might make you more employable, but you're not an expert in any one technology. Again, the drain is still leaking these skills into the earth.

I can’t tell you want to learn but one of your goals should be to maintain skills that employers need. If you’re not picking the right technologies it may be more difficult to find work. In the example below, you have some skills but an employer is looking for other skills. Maybe you should consider changing what things to learn to fill up your skill capacity?


What Should I Learn?

At this point you see the goal, but now you're asking what you should be learning to stay employable. That's a pretty hard thing to get right. You can see major technologies being used right now, such as VMware virtualization, Microsoft operating systems, or Amazon Web Services. You can pick these technologies, which are probably a good choice, or you can bet on some of the new fun stuff that hasn't taken off (at a massive scale) quite as much yet, such as containers or DevOps pipelines. If you pick the new hotness you might have some very valuable skills, but you're also making a bet that these will be important skills to have. If you lose the bet, that drain at the bottom of your skills tank might be open at full blast for those skills if they aren't embraced by the industry.

My advice to you is this though. Just keep filling the tank with SOMETHING! As long as you keep learning things, you can adjust what is in the tank as you see the industry change directions. My best advice is to pick technologies that you think seem fun or are passionate about. It won’t seem like work if you continually learn things that you’re interested in. Start there and just keep filling the tank.


The post Fill Your Skills Tank appeared first on The IT Hollow.

]]>
https://theithollow.com/2018/03/26/fill-skills-tank/feed/ 0 8635
Woke to IT Age Discrimination https://theithollow.com/2018/03/12/woke-age-discrimination/ https://theithollow.com/2018/03/12/woke-age-discrimination/#comments Mon, 12 Mar 2018 14:06:04 +0000 https://theithollow.com/?p=8549 Age discrimination can be an issue in any industry, but this issue is something members of the information technology (IT) industry can specifically identify with. My goal for this post is just to shine some light on the topic and discuss whether or not there is an injustice happening in IT when you reach a […]

The post Woke to IT Age Discrimination appeared first on The IT Hollow.

]]>
Age discrimination can be an issue in any industry, but it is something members of the information technology (IT) industry can specifically identify with. My goal for this post is just to shine some light on the topic and discuss whether there is an injustice happening in IT when you reach a certain age, or if there is some less heinous reason why we see so many younger people in tech. I want to make it crystal clear that this is just an off-the-cuff discussion and not based on any discrimination that I've witnessed from my employer or anywhere else. Ageism has been a bit of an elephant in the room: I don't see many people discussing it publicly, but it's in the back of people's minds. It does seem that there are many more young people in the technology industry than older people, but this may just be a perception and not reality.

Discrimination

First of all, I want to define what I'm calling age discrimination. Like other types of discrimination, it's sometimes based on preconceived notions or stereotypes about a group of people, as if we said that everyone you'd meet in Mos Eisley spaceport is scum and villainy. It's not true of everyone, of course, because Obi-Wan was in the spaceport and I doubt you'd consider him a villain.


But making a broad, sweeping statement like that can be dangerous. What if we said, "Once you reach a certain age in the information technology field, you're no longer a desirable asset to companies"? It's not true, because we probably all know people in the industry who are older than we are and are incredibly valuable resources, especially for their experience. But if we let the "older people aren't good with technology" stereotype prevail, then we've got some issues.

Employers who act upon a statement like this are likely breaking the law. Someone's age isn't a valid reason not to hire them, and in the United States this practice can have legal consequences if it can be proven.

Maybe it’s Something Else?

Please understand that I do not condone discrimination of any kind, whether based on age, race, sexual preference, gender, or political affiliation. But consider some less nefarious reasons why people of a certain age might be less hireable in the technology sector.

Older People Make More Money

In many industries, your experience is a valued asset. As you get older and more experienced, you can earn more money because that experience is something companies find desirable. That experience lets you teach other people what you know, steer clear of pitfalls you've seen in the past, and bring deeper knowledge of a subject. This experience might earn you more money, which is great, but to your employer, you're now an expensive resource. If managers can figure out how to cut costs without losing anything, they'll certainly have to consider it.

In the IT world, your experience might have a shelf life. Consider, for a second, that a new company is starting up with a focus on public cloud. The cloud really isn't that old, so if someone with 25 years of IT experience and someone with 5 years of experience apply for the same cloud position, do they have a different amount of "cloud" experience? My guess is that only a few years out of that 25 would be cloud related, so the two candidates would really have about the same amount of cloud-related work experience. Unfortunately, the person with 25 years of experience probably thinks that they should be paid for all of their experience and not just the cloud-related portion. They might be right, too, since there are many skills that are useful even if they're not specific to your primary role. Knowing the industry, relating cloud to other data center concepts, working with teams, and so on are all useful skills, but will the employer see it that way? Or will the employer see two people with the same cloud experience, one of whom is much less expensive than the other?

Here is an example I saw on Twitter which illustrates this point pretty well.

Is Tech Just Better Suited for Younger People?

My young son has never used a rotary phone or seen a compact disc. He has grown up his whole life with iPads, computers, and smartphones. He naturally has a mindset about technology, whereas older people have had to learn each new technology as it came out. For my son, learning to use a touch screen, mobile device, voice-activated device, IoT device, etc., is the same as learning how to use a fork. These things have been around his whole life and he's grown up with them. Learning how to use them is second nature to him. In contrast, my own experiences might make me less suited to learn, or to try to learn, a new technology.

Take this example, for instance. I write a lot of designs for customers and make drawings to illustrate concepts. I've done this for quite a while, and Microsoft Visio has long been the standard for this type of thing. I'm pretty good at using Visio and I haven't found many instances where Visio couldn't do something that I needed, so there isn't an incentive for me to learn a competing product. However, many of my colleagues have begun to use LucidChart and love it. Both of us can get our jobs done, but what if we were both up for the same job and the employer was looking for experience with documents based on LucidChart diagrams? Maybe LucidChart is the new hot thing and people are gravitating towards it, or maybe it's a fad, or whatever. The point here is that I might be seen as an older person who isn't good with new technology and might lose the job to someone younger. In this case, it doesn't mean that I'm really not as good with new tech, but that I haven't put a focus on re-learning a tool when the tool I have works well.

Take Aways

I’ve been in the industry for fifteen years and think the experience that I have is truly valuable to me personally and to my employer. The truth of the matter is that I think it is difficult for people in the technology industry as they get older. I don’t think this is some nefarious plot by employers to get rid of people when they hit a certain age, but I do think that younger people have some advantages in this industry that are hard to deny. An environment where everything is disrupted so frequently makes peoples experience slightly less valuable and younger people are less expensive.

What do we do as we get older? I don't have all the answers here, but one thing seems important: continuous education is critical. You can't assume that since you have a lot of experience you'll always be needed. This industry moves far too quickly to stop learning new things. You have to keep learning new technologies and stay on top of what's changing in tech if you're to stay relevant.

I know that when I’ve heard people discuss this topic, I’ve immediately jumped to the conclusion that companies might not be discriminating against people based on age, but that companies did want to hire younger people. I’ve not considered it too heavily because I still feel like I’m pretty young and this doesn’t impact me yet. But as I started to really think about it, it doesn’t seem like age discrimination but rather a set of circumstances that make younger people easier to hire. I’m not too sure if I have a call to action here in this post, but if I had one it would be this: “Don’t assume the technology industry is discriminating against older people or only want to hire young talent.”

I’m interested to hear your experiences and thoughts on the matter. Post your comments below.

The post Woke to IT Age Discrimination appeared first on The IT Hollow.

]]>
https://theithollow.com/2018/03/12/woke-age-discrimination/feed/ 2 8549