What happens in Vegas...

… will most likely be available all over the interweb within seconds. I will therefore spare you (and me) all the low-quality posts, make the most of the sessions and after-hours events, and enjoy my first re:Invent instead.

AWS re:Invent 2016

Or maybe I actually will push fresh content to my online notes. Stay tuned.

Home Assistant, Docker and a Raspberry Pi

A few weeks ago I decided it was time to deal with one of my 99 first-world problems and simplify how I interact with the connected objects I tend to scatter around the house.

So I wasted a couple of hours searching the interweb - I obviously ended up watching YouTube videos one too many times - and found a rather interesting open-source project that not only helps control these devices but also tracks their states and automates some of them: Home Assistant

Home Assistant

Home Assistant is an open-source home automation platform running on Python 3. Track and control all devices at home and automate control. Installation in less than a minute.

But I soon realized (pretty much right after the next version came out) that the Raspberry Pi compatible Docker image was not built and pushed to the public repository during the release process (no complaints here; the folks behind Home Assistant do a terrific job already).

So how do I make sure I always run the latest version - because now I have to always run the latest version of that tool I didn’t know existed a few days ago, right? - within an hour of a new release?

  1. Open a code editor
  2. Start typing

Automated build

If you want to build your own Docker image with Home Assistant, have a look at the bash script I wrote. Otherwise, you’re just a docker pull away from running one of my automated builds pushed to lroguet/rpi-home-assistant.

Please have a look at the GitHub project for more detailed instructions.
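For reference, running one of those images could look something like this (a minimal sketch; the container name, config path and time zone are assumptions, and host networking follows the usual Home Assistant convention - check the GitHub project for the exact flags):

$ docker run -d --name home-assistant \
    --net=host \
    -e TZ=Europe/Stockholm \
    -v /home/pi/home-assistant:/config \
    lroguet/rpi-home-assistant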

And here I am. With a sweet Home Assistant dashboard.

Home Assistant dashboard

Limitations

My Raspberry Pi compatible Docker images with Home Assistant only have the standard Python packages installed. If you’re planning on integrating your Z-Wave devices you’ll either have to do some work on your own or leave a comment below. If I’m not back to watching YouTube videos I might eventually give it a try.

Nginx reverse proxy, Docker and a Raspberry Pi

If you read my previous post you know that fourteenislands.io is served by an Nginx web server (Docker) running on a Raspberry Pi. But what started as a sandbox environment to host a few static pages is getting busier every day, and I needed, among other things, to host a couple of RESTful web APIs on that Raspberry Pi (on a different domain name).

Automated Nginx reverse proxy for Docker

I’ll spare you the details of why you’d want a reverse proxy and how to automatically generate its configuration when ‘backend’ Docker containers are started and stopped, and suggest you read this interesting post by Jason Wilder instead.

Jason’s nginx-proxy targets the x86 architecture and cannot be used as is on a Raspberry Pi. You can use rpi-nginx-proxy instead, the equivalent I published for the ARM architecture.

So, let’s start a proxy.

From a terminal on your RPi open your favorite text editor and save the following as docker-compose.yml
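Something along these lines should do (a minimal sketch, reconstructed from the docker run one-liner further down):

nginx-proxy:
  image: lroguet/rpi-nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro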

Start the Nginx reverse proxy ‘frontend’ container with the following command

$ docker-compose run -d --service-ports nginx-proxy

You don’t really need Docker Compose here and could run the same ‘frontend’ container with that one-liner:

$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro lroguet/rpi-nginx-proxy

That was easy, wasn’t it?

Serving a domain (fourteenislands.io for example)

Now that the reverse proxy is up and running we can just start whatever ‘backend’ container we want to serve content for a specific domain (note the VIRTUAL_HOST environment variable in the docker-compose.yml file below).

From a terminal on your RPi open your favorite text editor and save the following as docker-compose.yml
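For example (a minimal sketch; the image name and content path are assumptions - any ARM-compatible web server image with VIRTUAL_HOST set will do):

web:
  image: hypriot/rpi-nginx                     # assumption: any ARM-compatible Nginx image
  environment:
    - VIRTUAL_HOST=fourteenislands.io
  expose:
    - "80"                                     # the port the reverse proxy forwards to
  volumes:
    - /home/pi/www:/usr/share/nginx/html:ro    # assumption: your static content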

Start the Nginx ‘backend’ container with the following command

$ docker-compose run -d --service-ports web

Using Docker Compose makes more sense here but is obviously not required.

Starting a ‘backend’ container triggers a Docker start event; the reverse proxy configuration is generated and the ‘frontend’ Nginx is reloaded.

Received event start for container...

When a ‘backend’ container is stopped or dies, a Docker stop/die event is triggered; the reverse proxy configuration is regenerated and the ‘frontend’ Nginx is reloaded.

Received event die for container...

Monitoring dashboard for Docker

https://p.datadoghq.com/sb/8e976d062-dcf3f6e295

As always, have fun and if you have any questions, please leave a comment.

Update. fourteenislands.io is not currently served from a Raspberry Pi but from an Amazon S3 bucket.

Amazon Web Services, nginx, Docker, Hugo and a Raspberry Pi

Hosting a static website on a Raspberry Pi (RPi from now on) is quite straightforward. Serving that same website from a Dockerized Nginx HTTP server (on that same RPi) is a bit more interesting. Sure.

But what if the RPi decides to take the day off?

This post is mainly for students who attended one of my Architecting on AWS or System Operations on AWS sessions. It contains instructions, links to templates, tools and scripts to get started with Amazon Route 53 DNS failover between a primary site hosted on a RPi and a secondary site hosted on Amazon S3 & Amazon CloudFront. It is however not an exhaustive list of all the steps required for such a setup: use it at your own risk.

Raspberry Pi

First you may want to give your RPi some HypriotOS love. There are other Linux distributions available for the Raspberry Pi but HypriotOS has, in my opinion, the best Docker support at the moment.

Web

The page you’re reading right now (the whole fourteenislands.io domain for that matter) is written in Markdown and converted to a static site (HTML) with Hugo. Please go (no pun intended) have a look and come back here when you’ve written that novel of yours. I’ll wait.
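New to Hugo? The basic flow looks something like this (the site and post names are just examples):

$ hugo new site fourteenislands.io
$ cd fourteenislands.io
$ hugo new post/hello-world.md
$ hugo    # generates the static site in ./public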

Done? Good. You now want to serve the site you’ve generated directly from your RPi. We’ll do that with Nginx running in a Docker container. I’ve prepared some for you.

From a terminal on your RPi open your favorite text editor and save the following as docker-compose.yml
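Again, a minimal sketch (the image name and the path to your generated site are assumptions):

nginx:
  image: lroguet/rpi-nginx                                        # assumption: an ARM-compatible Nginx image
  ports:
    - "80:80"
  volumes:
    - /home/pi/fourteenislands.io/public:/usr/share/nginx/html:ro # the Hugo output directory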

Start the Nginx container with the following command

$ docker-compose run -d --service-ports nginx

Amazon Route 53

In this section I assume you have already created an S3 bucket to store a copy of your static website and set up a CloudFront distribution to serve that content (see Using CloudFront with Amazon S3 for instructions). I also assume you have the AWS CLI installed and configured.

Create DNS records and a health check with CloudFormation

First we want to create a primary record, a health check for that record, and a secondary record to fail over to when the primary is not available.

Create a CloudFormation stack from that template directly in the AWS console, or save the template locally, hit the terminal and run the following script (replace the ParameterValue values with your own):
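The stack name, template file name and parameter keys below are illustrative (the template defines the actual keys), but the shape of the call is:

$ aws cloudformation create-stack \
    --stack-name dns-failover \
    --template-body file://dns-failover.template \
    --parameters \
        ParameterKey=DomainName,ParameterValue=fourteenislands.io \
        ParameterKey=PrimaryIPAddress,ParameterValue=203.0.113.10 \
        ParameterKey=SecondaryAliasTarget,ParameterValue=d111111abcdef8.cloudfront.net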

Update health check and primary DNS record

Once the health check and the primary and secondary records have been created with the above CloudFormation template, the health check IPAddress and the primary record ResourceRecords should be updated automatically (from an hourly job on the RPi, for example) with the following script.
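A sketch of what such a script could look like (the IDs, the record name and the way the public IP is looked up are all assumptions):

#!/bin/bash
# Hypothetical IDs; replace with the outputs of your CloudFormation stack.
HOSTED_ZONE_ID="Z1EXAMPLE"
HEALTH_CHECK_ID="11111111-2222-3333-4444-555555555555"
RECORD_NAME="fourteenislands.io"

# Current public IP address of the RPi.
IP=$(curl -s https://checkip.amazonaws.com)

# Point the Route 53 health check at the current IP address.
aws route53 update-health-check \
    --health-check-id "$HEALTH_CHECK_ID" \
    --ip-address "$IP"

# Upsert the primary failover record with the current IP address.
aws route53 change-resource-record-sets \
    --hosted-zone-id "$HOSTED_ZONE_ID" \
    --change-batch "{
      \"Changes\": [{
        \"Action\": \"UPSERT\",
        \"ResourceRecordSet\": {
          \"Name\": \"$RECORD_NAME\",
          \"Type\": \"A\",
          \"SetIdentifier\": \"primary\",
          \"Failover\": \"PRIMARY\",
          \"TTL\": 300,
          \"ResourceRecords\": [{\"Value\": \"$IP\"}],
          \"HealthCheckId\": \"$HEALTH_CHECK_ID\"
        }
      }]
    }"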

Have fun and if you have any questions, please leave a comment.

Update. fourteenislands.io is not currently served from a Raspberry Pi but from an Amazon S3 bucket.

System Operations on AWS

System Operations on AWS is designed to teach those in a Systems Administrator or Developer Operations (DevOps) role how to create automatable and repeatable deployments of networks and systems on the AWS platform. The course covers the specific AWS features and tools related to configuration and deployment, as well as common techniques used throughout the industry for configuring and deploying systems. – Amazon Web Services

Module 3: Networking on AWS

Module 4: Storage in the Cloud

Tips & tricks

Tools

  • SAWS, Supercharged AWS Command Line Interface (CLI)
  • Papertrail command-line tail & search client for Papertrail log management service


AWS Technical Essentials

AWS Technical Essentials introduces you to AWS products, services, and common solutions. It provides IT technical end users with basic fundamentals to become more proficient in identifying AWS services so that you can make informed decisions about IT solutions based on your business requirements. – Amazon Web Services

Architecting on AWS

Architecting on AWS covers the fundamentals of AWS. It is designed to teach Solution Architects how to optimize the use of the AWS Cloud by understanding AWS services and how these services fit into a cloud solution. – Amazon Web Services

Infrastructure as Code


Amazon Web Services

Amazon Web Services (AWS), a collection of remote computing services, also called web services, make up a cloud-computing platform offered by Amazon.com. These services operate from 11 geographical regions across the world. The most central and well-known of these services arguably include Amazon Elastic Compute Cloud and Amazon S3. Amazon markets these products as a service to provide large computing-capacity more quickly and more cheaply than a client company building an actual physical server farm. – Wikipedia

Trainings

Resources

AWS official channels

Certification

Altitude: widget for Connect IQ compatible Garmin devices

Altitude is a simple widget displaying the current altitude. The widget does not rely on the built-in barometric altimeter but retrieves the altitude from a third party elevation service (such as the Google Elevation API) based on the GPS position.

For feedback, bug reports and feature requests please leave a message in the comments section at the bottom of the page.

Altitude is available in the Garmin Connect IQ Store since May 18, 2015.

Changelog

0.6.1 Switched back to HTTP since Garmin Connect Mobile can’t handle Amazon Web Services certificates.
0.6 Switched to HTTPS. Fixed Monkey C version compatibility check. Built with SDK 1.2.11.
0.5.1 Added support for fēnix 3 HR and vívoactive HR.
0.5 Moved backend to Amazon Web Services (Route 53, CloudFront, API Gateway & Lambda). Built with SDK 1.2.6.
0.4 Added support for fēnix™ 3 and D2 Bravo.
0.3 Added check for incompatible firmware. Built with SDK 1.1.2.
0.2 Added support for Forerunner® 920XT.
0.1 First release. Supports meters and feet depending on device settings. epix™ & vívoactive™.

Dashboard

Since version 0.5 the Altitude widget relies on a truly serverless backend powered by Amazon API Gateway & AWS Lambda. Below is a Datadog dashboard showing some of the “behind the scenes” metrics.

Datadog dashboard https://p.datadoghq.com/sb/8e976d062-b509fa50d7

Disclaimer

I only own a Garmin vívoactive™ watch and can only test the Altitude widget on other Garmin models through the Connect IQ simulator. I, therefore, cannot guarantee that the Altitude widget runs smoothly on the Forerunner® 920XT, epix™, fēnix™ 3, D2 Bravo or any other Connect IQ compatible Garmin device.

AWS Certified SysOps Administrator - Associate level sample exam questions and answers

The AWS Certified SysOps Administrator – Associate exam validates technical expertise in deployment, management, and operations on the AWS platform. Exam concepts you should understand for this exam include: deploying, managing, and operating scalable, highly available, and fault tolerant systems on AWS, migrating an existing on-premises application to AWS, implementing and controlling the flow of data to and from AWS, selecting the appropriate AWS service based on compute, data, or security requirements, identifying appropriate use of AWS operational best practices, estimating AWS usage costs and identifying operational cost control mechanisms. - Amazon Web Services

When working with Amazon RDS, by default AWS is responsible for implementing which two management-related activities? (Pick 2 correct answers)

A. Importing data and optimizing queries
B. Installing and periodically patching the database software
C. Creating and maintaining automated database backups with a point-in-time recovery of up to five minutes
D. Creating and maintaining automated database backups in compliance with regulatory long-term retention requirements

Answers. B & C

You maintain an application on AWS to provide development and test platforms for your developers. Currently both environments consist of an m1.small EC2 instance. Your developers notice performance degradation as they increase network load in the test environment. How would you mitigate these performance issues in the test environment?

A. Upgrade the m1.small to a larger instance type
B. Add an additional ENI to the test instance
C. Use the EBS optimized option to offload EBS traffic
D. Configure Amazon CloudWatch to provision more network bandwidth when network utilization exceeds 80%

Answer. A

Per the AWS Acceptable Use Policy, penetration testing of EC2 instances:

A. may be performed by the customer against their own instances, only if performed from EC2 instances.
B. may be performed by AWS, and is periodically performed by AWS.
C. may be performed by AWS, and will be performed by AWS upon customer request.
D. are expressly prohibited under all circumstances.
E. may be performed by the customer against their own instances with prior authorization from AWS.

Answer. E

You have been tasked with identifying an appropriate storage solution for a NoSQL database that requires random I/O reads of greater than 100,000 4kB IOPS. Which EC2 option will meet this requirement?

A. EBS provisioned IOPS
B. SSD instance store
C. EBS optimized instances
D. High Storage instance configured in RAID 10

Answer. B

Instance A and instance B are running in two different subnets A and B of a VPC. Instance A is not able to ping instance B. What are two possible reasons for this? (Pick 2 correct answers)

A. The routing table of subnet A has no target route to subnet B
B. The security group attached to instance B does not allow inbound ICMP traffic
C. The policy linked to the IAM role on instance A is not configured correctly
D. The NACL on subnet B does not allow outbound ICMP traffic

Answers. B & D

Your web site is hosted on 10 EC2 instances in 5 regions around the globe with 2 instances per region. How could you configure your site to maintain site availability with minimum downtime if one of the 5 regions was to lose network connectivity for an extended period of time?

A. Create an Elastic Load Balancer to place in front of the EC2 instances. Set an appropriate health check on each ELB.
B. Establish VPN Connections between the instances in each region. Rely on BGP to failover in the case of a region wide connectivity outage
C. Create a Route 53 Latency Based Routing Record Set that resolves to an Elastic Load Balancer in each region. Set an appropriate health check on each ELB.
D. Create a Route 53 Latency Based Routing Record Set that resolves to Elastic Load Balancers in each region and has the Evaluate Target Health flag set to true.

Answer. D

You run a stateless web application with the following components: Elastic Load Balancer (ELB), 3 Web/Application servers on EC2, and 1 MySQL RDS database with 5000 Provisioned IOPS. Average response time for users is increasing. Looking at CloudWatch, you observe 95% CPU usage on the Web/Application servers and 20% CPU usage on the database. The average number of database disk operations varies between 2000 and 2500. Which two options could improve response times? (Pick 2 correct answers)

A. Choose a different EC2 instance type for the Web/Application servers with a more appropriate CPU/memory ratio
B. Use Auto Scaling to add additional Web/Application servers based on a CPU load threshold
C. Increase the number of open TCP connections allowed per web/application EC2 instance
D. Use Auto Scaling to add additional Web/Application servers based on a memory usage threshold

Answers. A & B

Which features can be used to restrict access to data in S3? (Pick 2 correct answers)

A. Create a CloudFront distribution for the bucket.
B. Set an S3 bucket policy.
C. Use S3 Virtual Hosting.
D. Set an S3 ACL on the bucket or the object.
E. Enable IAM Identity Federation.

Answers. B & D

You need to establish a backup and archiving strategy for your company using AWS. Documents should be immediately accessible for 3 months and available for 5 years for compliance reasons. Which AWS service fulfills these requirements in the most cost effective way?

A. Use StorageGateway to store data to S3 and use life-cycle policies to move the data into Redshift for long-time archiving
B. Use DirectConnect to upload data to S3 and use IAM policies to move the data into Glacier for longtime archiving
C. Upload the data on EBS, use life-cycle policies to move EBS snapshots into S3 and later into Glacier for long-time archiving
D. Upload data to S3 and use life-cycle policies to move the data into Glacier for long-time archiving

Answer. D

Given the following IAM policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::corporate_bucket/*"
    }
  ]
}

What does the IAM policy allow? (Pick 3 correct answers)

A. The user is allowed to read objects from all S3 buckets owned by the account
B. The user is allowed to write objects into the bucket named ‘corporate_bucket’
C. The user is allowed to change access rights for the bucket named ‘corporate_bucket’
D. The user is allowed to read objects in the bucket named ‘corporate_bucket’ but not allowed to list the objects in the bucket
E. The user is allowed to read objects from the bucket named ‘corporate_bucket’

Answers. A, B & E

AWS Certified SysOps Administrator - Associate Level Sample Exam Questions