A new AWS region is coming to Stockholm, Sweden in 2018

Unless you’ve been living under a rock since… last week, you’ve heard about the AWS region coming to the Nordics. The new EU (Stockholm) region will be operational in 2018 and will have three availability zones — as of today, EU (Ireland) is the only other European region with three AZs — located in Katrineholm, Västerås and Eskilstuna.

AWS Global Infrastructure: 16 + 3 regions

Official announcements

For over a decade, we’ve had a large number of Nordic customers building their businesses on AWS because we have much broader functionality than any other cloud provider, a significantly larger partner and customer ecosystem, and unmatched maturity, reliability, security and performance. The Nordic’s most successful startups, including iZettle, King, Mojang, and Supercell, as well as some of the most respected enterprises in the world, such as IKEA, Nokia, Scania, and Telenor, depend on AWS to run their businesses, enabling them to be more agile and responsive to their customers. An AWS Region in Stockholm enables Swedish and Nordic customers, with local latency or data sovereignty requirements, to move the rest of their applications to AWS and enjoy cost and agility advantages across their entire application portfolio.

— Andy Jassy, CEO at Amazon Web Services on Amazon Web Services Announces the Opening of Data Centers in Sweden in 2018

Today, I am happy to be able to tell you that we are planning to open up an AWS Region in Stockholm, Sweden in 2018. This region will give AWS partners and customers in Denmark, Finland, Iceland, Norway, and Sweden low-latency connectivity and the ability to run their workloads and store their data close to home.

— Jeff Barr, Chief Evangelist at Amazon Web Services on Coming in 2018 – New AWS Region in Sweden

Today, I am very excited to announce our plans to open a new AWS Region in the Nordics! The new region will give Nordic-based businesses, government organisations, non-profits, and global companies with customers in the Nordics, the ability to leverage the AWS technology infrastructure from data centers in Sweden. The new AWS EU (Stockholm) Region will have three Availability Zones and will be ready for customers to use in 2018.

— Werner Vogels, CTO at Amazon.com on Välkommen till Stockholm – An AWS Region is coming to the Nordics

This is obviously great news for Nordic-based customers as an AWS EU (Stockholm) region will, among other things, give them lower-latency connectivity and the possibility to physically store data in-country. Happy?

Alexa, would you marry me?

It has been almost a month since I came back from the fifth edition of re:Invent—the main Amazon Web Services conference, held in Las Vegas each year—and I am taking it upon myself to bother you with a take-away post heavily influenced by my personal interest in the Internet of Things and, even more so, smart home technologies.

AWS re:Invent 2016

Have an Echo Dot on the house

With an Amazon Echo Dot as swag for the 30,000+ participants, an Alexa smart home booth in the expo hall and sessions featuring names like Capital One, Intel and Boston Children’s Hospital, there was no doubt Amazon was going to focus and ramp up its promotional efforts, from the conference’s first day to the last, on its voice-activated speakers and the services—announced or soon to be—that power them. One could not help but notice the release of the Google Home that very same month.

The Amazon Echo and its little brothers and sisters would be useless pieces of hardware (low-quality portable speakers at best) if they did not have skills and did not get new skills on a daily basis: with more than 4,000 skills, the Alexa ecosystem is growing rapidly. These capabilities allow customers to interact with their devices using voice commands and follow, long story short, a basic request (you invoke a skill and formulate an intent) - response (Alexa replies) pattern with lots of magic in between.
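In practice, the request and the response are just JSON documents, and a bare-bones custom skill is a function mapping an intent to a spoken reply. Here is a minimal sketch as an AWS Lambda handler in Node.js (the intent name and wording are made up):

'use strict';

// Minimal Alexa custom skill handler (sketch). The 'HelloWorldIntent'
// name below is made up; map your own intents the same way.
exports.handler = (event, context, callback) => {
  // Alexa tells us which intent the user's utterance was matched to.
  const intent = event.request.type === 'IntentRequest'
    ? event.request.intent.name
    : 'LaunchRequest';

  const text = intent === 'HelloWorldIntent'
    ? 'Hello from Stockholm!'
    : 'Welcome. Try asking me to say hello.';

  // The response: what Alexa says back, and whether the session ends.
  callback(null, {
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: text },
      shouldEndSession: true
    }
  });
};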

Magic? Learn a few tricks

Getting started with the Alexa Skills Kit is quite straightforward, and getting a simple but working end-to-end custom skill running is a matter of minutes (see lroguet/amzn-alexa-skill-demo on GitHub for a Node.js skill template).

An Alexa skill can be hosted either as a web service or on AWS Lambda, and I would suggest you look into the second option to begin with. Again, Amazon Web Services is giving the Alexa ecosystem the attention it (finally) deserves and announced a new partner program, the Alexa service delivery program, as well as (not directly related to Alexa, I must admit) a new partner certification: the IoT competency.

If you are unsure where to begin there are plenty of professional services firms in the Amazon Web Services Partner Network (APN) that could help you get started and Nordcloud—my employer as I write these lines—is one of them. May I suggest, for example, you begin with one of the AWS instructor-led trainings I will deliver in Copenhagen, Helsinki, Stockholm or Oslo in 2017?

The Amazon AI umbrella

A new AI platform and three new services were announced during the event’s first keynote: Amazon Rekognition, Amazon Polly and, last but not least, Amazon Lex (what’s inside Alexa).

Amazon Lex is a new service for building conversational interfaces using voice and text. The same conversational engine that powers Alexa is now available to any developer, making it easy to bring sophisticated, natural language ‘chatbots’ to new and existing applications. The power of Alexa in the hands of every developer, without having to know deep learning technologies like speech recognition, has the potential of sparking innovation in entirely new categories of products and services.

— Bringing the Magic of Amazon AI and Alexa to Apps on AWS

While I have not had the time to play with any of these services yet, I already have a few home project ideas but also feature ideas to try out with customers as soon as possible next year!

Can we get some more please!

Participants are now back at their respective employers, and some of them are already looking into adding voice—starting with an Alexa skill—as yet another user interface to their product(s). While this looks exciting on paper and likely to make life at the office a bit more fun, a number of teams have already come back to reality as they realized they cannot get their hands on hardware.

What is Amazon Echo Dot?

That’s right. Amazon’s Alexa-enabled devices are currently only available for delivery in the U.S. and the U.K., which leaves out 194 countries (if you consider Taiwan a country). You might argue that one could develop a skill without dedicated hardware and use a web-based interface like Echosim.io to interact with Alexa. Granted. But how sexy is that in a demo to other teams or, more importantly, a demo to those who have the power (read money) to turn prototypes into products?

One would think Amazon Web Services had considered the issue, but as far as I know even branch offices—in Scandinavia at least—cannot help acquire hardware. Customers are on their own and left with nothing to work with: quite an interesting situation the Earth’s Most Customer-Centric Company has put itself into.

So. Would you marry me?

Amazon definitely put a lot of emphasis on voice recognition—whether through Alexa-enabled devices or purely as services of the AI platform—but there are still a few things to iron out before the technology gets real traction. As far as I, and a few customers I have talked to since the event, are concerned, availability is a major issue that needs to be addressed very soon, or Alexa will not get much more than the 250,000 marriage proposals she has received so far.

Unless Echo, Echo Dot and the whole gang are released globally, the relationship with the 194 other countries might just keep on being what it is today: somewhat distant. Or as Alexa would put it: “Let’s just be friends”.

What happens in Vegas...

… will most likely be available all over the interweb within seconds. I will therefore spare you (and me) all the low-quality posts and will instead make the most of the sessions and after-hours events and enjoy my first re:Invent.

AWS re:Invent 2016

Or maybe I actually will push fresh content to my online notes. Stay tuned.

Home Assistant, Docker and a Raspberry Pi

A few weeks ago I decided it was time to deal with one of my 99 first world problems and simplify how I interact with the connected objects I tend to scatter around the house.

So I wasted a couple of hours searching the interweb (I obviously ended up watching YouTube videos one too many times) and found a rather interesting open-source project that helps control these devices, track their states and automate some of them: Home Assistant

Home Assistant

Home Assistant is an open-source home automation platform running on Python 3. Track and control all devices at home and automate control. Installation in less than a minute.

But while the folks behind Home Assistant do a terrific job, I soon realized (pretty much right after the next version came out) that the Raspberry Pi compatible Docker image was not built and pushed to the public repository as part of the release process (no complaints here; they do a terrific job already).

So how do I make sure I always run the latest version (because now I have to always run the latest version of that tool I didn’t know existed a few days ago, right?) within an hour of a new release?

  1. Open a code editor
  2. Start typing

Automated build

If you want to build your own Docker image with Home Assistant, you could have a look at the bash script I wrote; otherwise, you’re just a docker pull away from running one of my automated builds pushed to lroguet/rpi-home-assistant.
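For a rough idea, such a build script boils down to something like the following sketch (the base image, package installation steps and version lookup are all assumptions, not the actual script):

#!/bin/bash
# Sketch: build and push a Raspberry Pi compatible Home Assistant image.
# Assumes Docker and jq are installed and you are logged in to Docker Hub.
set -e

# Latest Home Assistant version number published on PyPI.
VERSION=$(curl -s https://pypi.org/pypi/homeassistant/json | jq -r '.info.version')

# Generate a Dockerfile based on an ARM-compatible base image.
cat > Dockerfile <<EOF
FROM resin/rpi-raspbian:jessie
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install homeassistant==${VERSION}
EXPOSE 8123
CMD ["hass"]
EOF

docker build -t lroguet/rpi-home-assistant:${VERSION} .
docker push lroguet/rpi-home-assistant:${VERSION}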

Please have a look at the GitHub project for more detailed instructions.
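Running one of the automated builds then boils down to something like this (flags and paths are my best guess; check the README for the exact command):

$ docker run -d --name home-assistant --net=host \
    -v /home/pirate/home-assistant:/config \
    -v /etc/localtime:/etc/localtime:ro \
    lroguet/rpi-home-assistant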

And here I am. With a sweet Home Assistant dashboard.

Home Assistant dashboard

Limitations

My Raspberry Pi compatible Docker images with Home Assistant only have the standard Python package installed. If you’re planning on integrating your Z-Wave devices, you’d either have to do some work on your own or leave a comment below. If I’m not back to YouTube videos, I could eventually give it a try.

Nginx reverse proxy, Docker and a Raspberry Pi

If you read my previous post you should know that fourteenislands.io is served by an Nginx web server (Docker) running on a Raspberry Pi. But what started as a sandbox environment to host a few static pages is getting busier every day, and I, among other things, needed to host a couple of RESTful web APIs on that Raspberry Pi (on a different domain name).

Automated Nginx reverse proxy for Docker

I’ll spare you the details as to why you’d want a reverse proxy, and how to automatically generate the reverse proxy configuration when ‘backend’ Docker containers are started and stopped, and instead suggest you read this interesting post by Jason Wilder.

Jason’s nginx-proxy targets the x86 architecture and cannot be used as-is on a Raspberry Pi. You could use rpi-nginx-proxy instead, which is the equivalent I published for the ARM architecture.

So, let’s start a proxy.

From a terminal on your RPi, open your favorite text editor and save the following as docker-compose.yml:
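The compose file itself is not reproduced here, but judging from the docker run one-liner further down it would look something like this:

nginx-proxy:
  image: lroguet/rpi-nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro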

Start the Nginx reverse proxy ‘frontend’ container with the following command

$ docker-compose run -d --service-ports nginx-proxy

You don’t really need Docker Compose here and could run the same ‘frontend’ container with that one-liner:

$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro lroguet/rpi-nginx-proxy

That was easy, wasn’t it?

Serving a domain (fourteenislands.io for example)

Now that the reverse proxy is up and running we can just start whatever ‘backend’ container we want to serve content for a specific domain (note the VIRTUAL_HOST environment variable in the docker-compose.yml file below).

From a terminal on your RPi, open your favorite text editor and save the following as docker-compose.yml:
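Again, the original compose file is not included here. A minimal ‘backend’ definition might look like this; the image name and content path are placeholders, the VIRTUAL_HOST variable is the part that matters:

web:
  image: lroguet/rpi-nginx                # any ARM-compatible Nginx image
  environment:
    - VIRTUAL_HOST=fourteenislands.io     # domain the proxy routes here
  volumes:
    - /home/pirate/site:/var/www/html:ro  # your static content (adjust)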

Start the Nginx ‘backend’ container with the following command

$ docker-compose run -d --service-ports web

Using Docker Compose makes more sense here but is obviously not required.

Starting a ‘backend’ container triggers a Docker start event, the reverse proxy configuration is generated and the ‘frontend’ Nginx reloaded.

Received event start for container...

When a ‘backend’ container is stopped or dies, a Docker die/stop event is triggered, the reverse proxy configuration is generated and the ‘frontend’ Nginx reloaded.

Received event die for container...

Monitoring dashboard for Docker

https://p.datadoghq.com/sb/8e976d062-dcf3f6e295

As always, have fun and if you have any questions, please leave a comment.

Update. fourteenislands.io is not currently served from a Raspberry Pi but from an Amazon S3 bucket.

Amazon Web Services, nginx, Docker, Hugo and a Raspberry Pi

Hosting a static website on a Raspberry Pi (RPi from now on) is quite straightforward. Serving that same website from a Dockerized Nginx HTTP server—on that same RPi—is a bit more interesting. Sure.

But what if the RPi decides to take the day off?

This post is mainly for students who attended one of my Architecting on AWS or System Operations on AWS sessions. It contains instructions, links to templates, tools and scripts to get started with Amazon Route 53 DNS failover between a primary site hosted on a RPi and a secondary site hosted on Amazon S3 & Amazon CloudFront. It is however not an exhaustive list of all the steps required for such a setup: use it at your own risk.

Raspberry Pi

First you may want to give your RPi some HypriotOS love. There are other Linux distributions available for the Raspberry Pi but HypriotOS has, in my opinion, the best Docker support at the moment.

Web

The page you’re reading right now (the whole fourteenislands.io domain for that matter) is written in Markdown and converted to a static site (HTML) with Hugo. Please go (no pun intended) have a look and come back here when you’ve written that novel of yours. I’ll wait.
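If you need a head start, the basic Hugo workflow looks something like this (site and post names are, of course, up to you):

$ hugo new site fourteenislands && cd fourteenislands
$ hugo new post/hello-world.md
$ hugo server   # live preview while writing
$ hugo          # generate the static site into public/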

Done? Good. You now want to serve the site you’ve generated directly from your RPi. We’ll do that with Nginx running in a Docker container. I’ve prepared some for you.

From a terminal on your RPi, open your favorite text editor and save the following as docker-compose.yml:
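The original compose file is not reproduced here either. Assuming the generated site lives in public/, a minimal sketch (image name and paths are placeholders):

nginx:
  image: lroguet/rpi-nginx                       # any ARM-compatible Nginx image
  ports:
    - "80:80"
  volumes:
    - /home/pirate/site/public:/var/www/html:ro  # Hugo output directory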

Start the Nginx container with the following command

$ docker-compose run -d --service-ports nginx

Amazon Route 53

In this section I assume you already have created an S3 bucket to store a copy of your static website and set up a CloudFront distribution to distribute that content (see Using CloudFront with Amazon S3 for instructions). I also assume you have the AWS CLI installed and configured.

Create DNS records and a health check with CloudFormation

First we want to create a primary record, a health check for that record, and a secondary record to fail over to when the primary record is not available.

Create a CloudFormation stack from that template directly in the AWS console, or save the template locally, hit the terminal and run the following script (replace the ParameterValue values with your own):
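The script is essentially a single CLI call along these lines (stack, template and parameter names here are illustrative; use the ones defined in the actual template):

$ aws cloudformation create-stack \
    --stack-name fourteenislands-failover \
    --template-body file://dns-failover.json \
    --parameters \
      ParameterKey=HostedZoneName,ParameterValue=fourteenislands.io. \
      ParameterKey=PrimaryIPAddress,ParameterValue=203.0.113.10 \
      ParameterKey=SecondaryDNSName,ParameterValue=d111111abcdef8.cloudfront.net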

Update health check and primary DNS record

Once the health check, primary and secondary records have been created using the above CloudFormation template, the health check IPAddress and the primary record ResourceRecords should be updated automatically (an hourly job on the RPi, for example) with the following script:
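A sketch of what that script needs to do follows; the health check ID, hosted zone ID and record name are placeholders, and the AWS CLI is assumed to be configured on the RPi:

#!/bin/bash
set -e

# The RPi's current public IP address.
IP=$(curl -s https://checkip.amazonaws.com)

# Point the Route 53 health check at the current IP.
aws route53 update-health-check \
  --health-check-id 11111111-2222-3333-4444-555555555555 \
  --ip-address "${IP}"

# Upsert the primary failover record with the current IP.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1XXXXXXXXXXX \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "fourteenislands.io.", "Type": "A", "TTL": 60,
        "SetIdentifier": "primary", "Failover": "PRIMARY",
        "HealthCheckId": "11111111-2222-3333-4444-555555555555",
        "ResourceRecords": [{"Value": "'"${IP}"'"}]
      }
    }]
  }'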

Have fun and if you have any questions, please leave a comment.

Update. fourteenislands.io is not currently served from a Raspberry Pi but from an Amazon S3 bucket.

Altitude: widget for Connect IQ compatible Garmin devices

Altitude is a simple widget displaying the current altitude. The widget does not rely on the built-in barometric altimeter but retrieves the altitude from a third party elevation service (such as the Google Elevation API) based on the GPS position.

For feedback, bug reports and feature requests please leave a message in the comments section at the bottom of the page.

Altitude is available in the Garmin Connect IQ Store since May 18, 2015.

Changelog

0.6.1 Switched back to HTTP since Garmin Connect Mobile can’t handle Amazon Web Services certificates.
0.6 Switched to HTTPS. Fixed Monkey C version compatibility check. Built with SDK 1.2.11.
0.5.1 Added support for fēnix 3 HR and vívoactive HR.
0.5 Moved backend to Amazon Web Services (Route 53, CloudFront, API Gateway & Lambda). Built with SDK 1.2.6.
0.4 Added support for fēnix™ 3 and D2 Bravo.
0.3 Added check for incompatible firmware. Built with SDK 1.1.2.
0.2 Added support for Forerunner® 920XT.
0.1 First release. Supports meters and feet depending on device settings. epix™ & vívoactive™.

Dashboard

Since version 0.5 the Altitude widget relies on a truly serverless backend powered by Amazon API Gateway & AWS Lambda. Below is a Datadog dashboard showing some of the “behind the scenes” metrics.
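The widget’s actual backend is not published here, but a bare-bones version of such a function could look like the following Node.js sketch. The Google Elevation API endpoint is real; the event shape and environment variable are assumptions:

'use strict';

const https = require('https');

// API Gateway-invoked Lambda proxying a third-party elevation service.
exports.handler = (event, context, callback) => {
  // Latitude/longitude forwarded from the watch via API Gateway (assumed).
  const url = 'https://maps.googleapis.com/maps/api/elevation/json' +
    '?locations=' + event.lat + ',' + event.lon +
    '&key=' + process.env.GOOGLE_API_KEY;

  https.get(url, (res) => {
    let body = '';
    res.on('data', (chunk) => body += chunk);
    res.on('end', () => {
      // The Elevation API returns { results: [{ elevation: ... }], status: ... }.
      const elevation = JSON.parse(body).results[0].elevation;
      callback(null, { elevation: elevation });
    });
  }).on('error', callback);
};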

Datadog dashboard https://p.datadoghq.com/sb/8e976d062-b509fa50d7

Disclaimer

I only own a Garmin vívoactive™ watch and can only test the Altitude widget on other Garmin models through the Connect IQ simulator. I, therefore, cannot guarantee that the Altitude widget runs smoothly on the Forerunner® 920XT, epix™, fēnix™ 3, D2 Bravo or any other Connect IQ compatible Garmin device.

AWS Certified SysOps Administrator - Associate level sample exam questions and answers

The AWS Certified SysOps Administrator – Associate exam validates technical expertise in deployment, management, and operations on the AWS platform. Exam concepts you should understand for this exam include: deploying, managing, and operating scalable, highly available, and fault tolerant systems on AWS, migrating an existing on-premises application to AWS, implementing and controlling the flow of data to and from AWS, selecting the appropriate AWS service based on compute, data, or security requirements, identifying appropriate use of AWS operational best practices, estimating AWS usage costs and identifying operational cost control mechanisms. - Amazon Web Services

When working with Amazon RDS, by default AWS is responsible for implementing which two management-related activities? (Pick 2 correct answers)

A. Importing data and optimizing queries
B. Installing and periodically patching the database software
C. Creating and maintaining automated database backups with a point-in-time recovery of up to five minutes
D. Creating and maintaining automated database backups in compliance with regulatory long-term retention requirements

Answers. B & C

You maintain an application on AWS to provide development and test platforms for your developers. Currently both environments consist of an m1.small EC2 instance. Your developers notice performance degradation as they increase network load in the test environment. How would you mitigate these performance issues in the test environment?

A. Upgrade the m1.small to a larger instance type
B. Add an additional ENI to the test instance
C. Use the EBS optimized option to offload EBS traffic
D. Configure Amazon CloudWatch to provision more network bandwidth when network utilization exceeds 80%

Answer. A

Per the AWS Acceptable Use Policy, penetration testing of EC2 instances:

A. may be performed by the customer against their own instances, only if performed from EC2 instances.
B. may be performed by AWS, and is periodically performed by AWS.
C. may be performed by AWS, and will be performed by AWS upon customer request.
D. are expressly prohibited under all circumstances.
E. may be performed by the customer against their own instances with prior authorization from AWS.

Answer. E

You have been tasked with identifying an appropriate storage solution for a NoSQL database that requires random I/O reads of greater than 100,000 4kB IOPS. Which EC2 option will meet this requirement?

A. EBS provisioned IOPS
B. SSD instance store
C. EBS optimized instances
D. High Storage instance configured in RAID 10

Answer. D

Instance A and instance B are running in two different subnets A and B of a VPC. Instance A is not able to ping instance B. What are two possible reasons for this? (Pick 2 correct answers)

A. The routing table of subnet A has no target route to subnet B
B. The security group attached to instance B does not allow inbound ICMP traffic
C. The policy linked to the IAM role on instance A is not configured correctly
D. The NACL on subnet B does not allow outbound ICMP traffic

Answers. B & D

Your web site is hosted on 10 EC2 instances in 5 regions around the globe with 2 instances per region. How could you configure your site to maintain site availability with minimum downtime if one of the 5 regions was to lose network connectivity for an extended period of time?

A. Create an Elastic Load Balancer to place in front of the EC2 instances. Set an appropriate health check on each ELB.
B. Establish VPN Connections between the instances in each region. Rely on BGP to failover in the case of a region wide connectivity outage
C. Create a Route 53 Latency Based Routing Record Set that resolves to an Elastic Load Balancer in each region. Set an appropriate health check on each ELB.
D. Create a Route 53 Latency Based Routing Record Set that resolves to Elastic Load Balancers in each region and has the Evaluate Target Health flag set to true.

Answer. D

You run a stateless web application with the following components: Elastic Load Balancer (ELB), 3 Web/Application servers on EC2, and 1 MySQL RDS database with 5000 Provisioned IOPS. Average response time for users is increasing. Looking at CloudWatch, you observe 95% CPU usage on the Web/Application servers and 20% CPU usage on the database. The average number of database disk operations varies between 2000 and 2500. Which two options could improve response times? (Pick 2 correct answers)

A. Choose a different EC2 instance type for the Web/Application servers with a more appropriate CPU/memory ratio
B. Use Auto Scaling to add additional Web/Application servers based on a CPU load threshold
C. Increase the number of open TCP connections allowed per web/application EC2 instance
D. Use Auto Scaling to add additional Web/Application servers based on a memory usage threshold

Answers. A & B

Which features can be used to restrict access to data in S3? (Pick 2 correct answers)

A. Create a CloudFront distribution for the bucket.
B. Set an S3 bucket policy.
C. Use S3 Virtual Hosting.
D. Set an S3 ACL on the bucket or the object.
E. Enable IAM Identity Federation.

Answers. B & D

You need to establish a backup and archiving strategy for your company using AWS. Documents should be immediately accessible for 3 months and available for 5 years for compliance reasons. Which AWS service fulfills these requirements in the most cost effective way?

A. Use StorageGateway to store data to S3 and use life-cycle policies to move the data into Redshift for long-time archiving
B. Use DirectConnect to upload data to S3 and use IAM policies to move the data into Glacier for longtime archiving
C. Upload the data on EBS, use life-cycle policies to move EBS snapshots into S3 and later into Glacier for long-time archiving
D. Upload data to S3 and use life-cycle policies to move the data into Glacier for long-time archiving

Answer. D

Given the following IAM policy

{
  "Version": "2012-10-17",
  "Statement": [
  {
    "Effect": "Allow",
    "Action": [
      "s3:Get*",
      "s3:List*"
    ],
    "Resource": "*"
  },
  {
    "Effect": "Allow",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::corporate_bucket/*"
  }
  ]
}

What does the IAM policy allow? (Pick 3 correct answers)

A. The user is allowed to read objects from all S3 buckets owned by the account
B. The user is allowed to write objects into the bucket named ‘corporate_bucket’
C. The user is allowed to change access rights for the bucket named ‘corporate_bucket’
D. The user is allowed to read objects in the bucket named ‘corporate_bucket’ but not allowed to list the objects in the bucket
E. The user is allowed to read objects from the bucket named ‘corporate_bucket’

Answers. A, B & E

AWS Certified SysOps Administrator - Associate Level Sample Exam Questions

AWS Certified Solutions Architect - Associate level sample exam questions and answers

The AWS Certified Solutions Architect – Associate exam is intended for individuals with experience designing distributed applications and systems on the AWS platform. Exam concepts you should understand for this exam include: designing and deploying scalable, highly available, and fault tolerant systems on AWS; lift and shift of an existing on-premises application to AWS; ingress and egress of data to and from AWS; selecting the appropriate AWS service based on data, compute, database, or security requirements; identifying appropriate use of AWS architectural best practices; estimating AWS costs and identifying cost control mechanisms. - Amazon Web Services

Amazon Glacier is designed for: (Choose 2 answers)

A. active database storage.
B. infrequently accessed data.
C. data archives.
D. frequently accessed data.
E. cached session data.

To keep costs low, Amazon Glacier is optimized for infrequently accessed data where a retrieval time of several hours is suitable.

Answers. B & C

Your web application front end consists of multiple EC2 instances behind an Elastic Load Balancer. You configured ELB to perform health checks on these EC2 instances. If an instance fails to pass health checks, which statement will be true?

A. The instance is replaced automatically by the ELB.
B. The instance gets terminated automatically by the ELB.
C. The ELB stops sending traffic to the instance that failed its health check.
D. The instance gets quarantined by the ELB for root cause analysis.

Answer. C

You are building a system to distribute confidential training videos to employees. Using CloudFront, what method could be used to serve content that is stored in S3, but not publicly accessible from S3 directly?

A. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.
B. Add the CloudFront account security group “amazon-cf/amazon-cf-sg” to the appropriate S3 bucket policy.
C. Create an Identity and Access Management (IAM) User for CloudFront and grant access to the objects in your S3 bucket to that IAM User.
D. Create a S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).

Answer. A
Using an Origin Access Identity to Restrict Access to Your Amazon S3 Content

Which of the following will occur when an EC2 instance in a VPC (Virtual Private Cloud) with an associated Elastic IP is stopped and started? (Choose 2 answers)

A. The Elastic IP will be dissociated from the instance
B. All data on instance-store devices will be lost
C. All data on EBS (Elastic Block Store) devices will be lost
D. The ENI (Elastic Network Interface) is detached
E. The underlying host for the instance is changed

Answers. B & E

In the basic monitoring package for EC2, Amazon CloudWatch provides the following metrics:

A. web server visible metrics such as number failed transaction requests
B. operating system visible metrics such as memory utilization
C. database visible metrics such as number of connections
D. hypervisor visible metrics such as CPU utilization

Answer. D

Which is an operational process performed by AWS for data security?

A. AES-256 encryption of data stored on any shared storage device
B. Decommissioning of storage devices using industry-standard practices
C. Background virus scans of EBS volumes and EBS snapshots
D. Replication of data across multiple AWS Regions
E. Secure wiping of EBS data when an EBS volume is unmounted

Answer. B

To protect S3 data from both accidental deletion and accidental overwriting, you should:

A. enable S3 versioning on the bucket
B. access S3 data using only signed URLs
C. disable S3 delete using an IAM bucket policy
D. enable S3 Reduced Redundancy Storage
E. enable Multi-Factor Authentication (MFA) protected access

Answer. A

AWS Certified Solutions Architect – Associate Level Sample Exam Questions

be@t: Swatch Internet Time widget for Connect IQ compatible Garmin devices

Swatch Internet Time (or beat time) is a decimal time concept introduced in 1998 by the Swatch corporation as part of their marketing campaign for their line of “Beat” watches. Instead of hours and minutes, the mean solar day is divided up into 1000 parts called “.beats”. Each .beat lasts 86.4 seconds (1 minute and 26.4 seconds). Times are notated as a 3-digit number out of 1000 after midnight. So, @248 would indicate a time 248 .beats after midnight, representing 248/1000 of a day, just over 5 hours and 57 minutes – Wikipedia
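The conversion itself is simple arithmetic: take the time of day in Biel Mean Time (UTC+1, no daylight saving) and divide the seconds elapsed since midnight by 86.4. A quick JavaScript sketch (the widget itself is written in Monkey C):

// Current Swatch Internet Time, e.g. '@248'.
function beats(date) {
  // Seconds since midnight, Biel Mean Time (UTC+1, no DST).
  const bmtSeconds = (date.getUTCHours() + 1) * 3600 +
    date.getUTCMinutes() * 60 + date.getUTCSeconds();
  // 1000 .beats per day, i.e. one .beat every 86.4 seconds.
  return Math.floor((bmtSeconds % 86400) / 86.4);
}

console.log('@' + beats(new Date()));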

For feedback, bug reports and feature requests please leave a message in the comments section at the bottom of the page.

be@t is available in the Garmin Connect IQ Store since May 26, 2015.

Changelog

0.1 First release.

Disclaimer

I only own a Garmin vívoactive™ watch and can only test the be@t widget on other Garmin models through the Connect IQ simulator. I, therefore, cannot guarantee that the be@t widget runs smoothly on the Forerunner® 920XT, epix™, fēnix™ 3 or D2 Bravo.