In an earlier blog entry of this series about static website hosting on Amazon Web Services I wrote about the mistakes Taxi 020–or rather Cabonline Technologies–made in handling their website’s infrastructure on AWS and how these mistakes could have been mitigated by leveraging Amazon Simple Storage Service (S3) together with CloudFront.
I also wrote that I would come back to the topic of infrastructure as code for static website hosting and elaborate on how it can be achieved with CloudFormation templates and CloudFormation stacks in AWS. Here are ready-to-use templates for three alternatives, from the simpler S3-only solution to the more advanced S3 with CloudFront and Lambda@Edge architecture.
Note. I deliberately left out the DNS part since not everyone manages their DNS records in Route53.
The simplest solution, which is well described in the Setting up a Static Website Using a Custom Domain walkthrough, leverages S3 only. AWS does not, however, provide a CloudFormation template to create the S3 bucket, so follow the link to GitHub and have a look at the one I uploaded there.
your domain name
an S3 bucket with that name will be created, if the name is available. If it is not, static website hosting with S3 only will not be possible for that particular domain and the solution is to use S3 with CloudFront.
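For reference, the heart of such a template is tiny. Below is a minimal, hypothetical sketch assuming a DomainName parameter; the template I uploaded to GitHub is the authoritative version:

```yaml
# Minimal sketch of an S3-only static website template.
# Parameter and document names are illustrative.
Parameters:
  DomainName:
    Type: String
    Description: Your domain name (also used as the bucket name)
Resources:
  WebsiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref DomainName
      AccessControl: PublicRead
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: 404.html
Outputs:
  WebsiteURL:
    Value: !GetAtt WebsiteBucket.WebsiteURL
```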
Limiting access to the S3 bucket to the CloudFront distribution only, with an Origin Access Identity, comes at a price though: CloudFront must use the S3 REST endpoint–instead of the HTTP website endpoint–which does not support redirection to default index pages.
Last but not least, the S3 + CloudFront solution is sometimes not enough if you configured, for example, your favorite static site generator to produce clean URLs. While clean URLs are not an issue when served directly out of an S3 bucket, they do not work seamlessly with CloudFront as Ronnie Eichler very well explained on the AWS Compute blog back in October 2017.
For example, I generate this blog–using Hugo–with clean URLs and need to rewrite the path to all RESTful-like resources in order for CloudFront to fetch a default index object from S3. This is done by appending index.html or /index.html to all requests that are not for objects with an extension, such as images and scripts.
Again, have a look at the template I pushed to GitHub; it is the one I use to provision the AWS resources needed to serve this blog.
Before you go ahead and create a CloudFormation stack using my template make sure you select the us-east-1 region since “Lambda@Edge functions can now be authored in US East (N. Virginia), and will be replicated globally for invocation in response to CloudFront events.” - see Lambda@Edge now Generally Available.
your domain name
the Amazon Resource Name (ARN) of the SSL certificate required to deliver content over HTTPS using your own domain name (instead of the CloudFront distribution domain name)
a CloudFront distribution price class mapping to the edge locations your content will be delivered from (see Amazon CloudFront Pricing)
an S3 bucket with a semi-random name
a CloudFront distribution
an Origin Access Identity
an S3 bucket policy that grants the CloudFront distribution read access to objects in the S3 bucket
a Lambda function to perform URI rewriting
an IAM execution role for the Lambda function
The template also glues together the Lambda function with the CloudFront distribution in order to intercept–the CloudFront distribution triggers the Lambda function–requests made to the origin (origin request event) and rewrite the requested object path.
Make no mistake
Whether you go for solution 1, 2 or 3, keep in mind that a fast and secure architecture is simply something you must have. Technicalities should always remain invisible to the users. Content is king, and always will be.
I was browsing through the websites of the different taxi companies operating in Stockholm this morning when I eventually ended up on taxi020.se, which responded with an HTTP 404 error. As I read the code and message for that error, my search for the terms and conditions of waiting time at the airport suddenly got a lot more interesting.
Taxi 020 merged with Sverigetaxi in 2016 and is now part of the Cabonline Group.
404 Not Found
For anyone familiar with the Amazon Web Services (AWS) ecosystem the following screenshot of the HTTP 404 error message gives away a couple of hints on the website’s underlying infrastructure.
taxi020.se is served out of an S3 bucket (and therefore is hosted on AWS)
The S3 bucket does not exist (and might be up for grabs)
The missing S3 bucket should be named taxi020.se
I obviously needed to try something out and signed in to the AWS console.
Let’s take over
Taking over (sort of) taxi020.se was just a few clicks away and soon I had:
an S3 bucket named taxi020.se in the eu-west-1 region (most probable choice)
static website hosting enabled, with requests redirected to this very blog
I could now just point my web browser to https://taxi020.se and follow the network requests to see all this in action.
What went wrong?
Whether the missing S3 bucket was the result of a manual mistake by a user with excessive privileges, of an automated deployment gone wrong–a CloudFormation template lacking a DeletionPolicy: Retain on the S3 bucket resource for example–or the result of something completely different is for the engineers at Cabonline Technologies to figure out. I’m just throwing a few educated guesses here and some ideas on how the infrastructure should have been provisioned.
Do NOT point a DNS record directly to an S3 static website endpoint
Bucket names are globally unique.
According to the Setting up a Static Website Using a Custom Domain walkthrough, “the bucket name must match the name of the website that you are hosting.” That can be problematic if, for some reason, a bucket named after the domain you want to host a static website for already exists in AWS. Bucket names being globally unique, it is quite possible that someone else has already created a bucket with the name you need (your domain name) and… there is nothing you can do about it.
Use a Content Delivery Network (CDN) like Amazon CloudFront
Serving content directly out of S3 is not optimal.
Serving content directly out of an S3 bucket located in a specific AWS region is not optimal from a user experience point of view. Each user requesting content from taxi020.se (most of them probably located in Sweden) will have to suffer from the request/response round trip to Ireland (the eu-west-1 AWS region) meaning extra latency and slower load times.
It is not optimal from a cost perspective either since each user will, for each HTTP request, fetch all of the webpage’s resources (images, CSS files, …) directly from the S3 bucket, which translates to many GET operations against the S3 service (priced per 1,000 requests) and unnecessary data transfer (priced per GB) out of S3 to the Internet.
Part 1
One can serve a static website directly out of S3 and have such a solution up and running in a matter of minutes–given that the bucket can be created–but enabling CloudFront does not require much extra work and works regardless of the S3 bucket’s name:
Create an S3 bucket (or keep the existing one and disable Static website hosting) with a name that makes sense but does not have to match the domain name
Create a CloudFront distribution for the domain…
… with an origin pointing to the S3 bucket endpoint
Part 2
To further improve the solution, requests to the S3 bucket should only be allowed via the CloudFront distribution so that users cannot retrieve objects directly from S3 and bypass the content delivery network:
Create a CloudFront Origin Access Identity
Enable the Restrict Bucket Access option on the CloudFront distribution S3 origin using the Origin Access Identity created
Create/Replace the S3 bucket policy to only allow the CloudFront distribution to get objects from the bucket
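The resulting bucket policy looks something like this (the Origin Access Identity ID and bucket name below are placeholders, not values from my actual setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAIReadOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-static-website-bucket/*"
    }
  ]
}
```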
Limiting access to the S3 bucket to the CloudFront distribution only, with an Origin Access Identity, comes at a price though: CloudFront must use the S3 REST endpoint–instead of the HTTP website endpoint–which does not support redirection to default index pages (see part 2 of this blog series).
Part 3
Last but not least, point the DNS record to the CloudFront distribution.
Template your infrastructure and forget about names
Infrastructure as code to the rescue.
Even though this more secure and cost-effective infrastructure is only a few more clicks away from the straight-out-of-an-S3-bucket kind of static website hosting, it is a bit more cumbersome to create and maintain.
There obviously is a solution to help deal with–create, update and delete–the needed resources and make sure they work together nicely: infrastructure as code, also known as CloudFormation templates–and stacks–in AWS (see part 2 of this blog series).
Taxi020.se is now www.sverigetaxi.se
While this was not the hack of the year, it once again highlights the importance of securing the resources needed to host static assets on Amazon Simple Storage Service (S3), and I hope some of the improvements, if not all, I described in this blog entry make sense.
The guys at Taxi 020 solved the issue after a couple of hours and before I could publish this blog post. https://taxi020.se is now redirected to https://www.sverigetaxi.se/ which points to… a CloudFront distribution.
Unless you’ve been living under a rock since… last week, you’d know about the AWS region coming to the Nordics. The new EU (Stockholm) region will be operational in 2018 and will have three availability zones — as of today only the EU (Ireland) region also has three AZs in Europe — located in Katrineholm, Västerås and Eskilstuna.
For over a decade, we’ve had a large number of Nordic customers building their businesses on AWS because we have much broader functionality than any other cloud provider, a significantly larger partner and customer ecosystem, and unmatched maturity, reliability, security and performance. The Nordic’s most successful startups, including iZettle, King, Mojang, and Supercell, as well as some of the most respected enterprises in the world, such as IKEA, Nokia, Scania, and Telenor, depend on AWS to run their businesses, enabling them to be more agile and responsive to their customers. An AWS Region in Stockholm enables Swedish and Nordic customers, with local latency or data sovereignty requirements, to move the rest of their applications to AWS and enjoy cost and agility advantages across their entire application portfolio.
Today, I am happy to be able to tell you that we are planning to open up an AWS Region in Stockholm, Sweden in 2018. This region will give AWS partners and customers in Denmark, Finland, Iceland, Norway, and Sweden low-latency connectivity and the ability to run their workloads and store their data close to home.
Today, I am very excited to announce our plans to open a new AWS Region in the Nordics! The new region will give Nordic-based businesses, government organisations, non-profits, and global companies with customers in the Nordics, the ability to leverage the AWS technology infrastructure from data centers in Sweden. The new AWS EU (Stockholm) Region will have three Availability Zones and will be ready for customers to use in 2018.
This is obviously great news for Nordic-based customers as an AWS EU (Stockholm) region will, among other things, give them lower-latency connectivity and the possibility to physically store data in-country. Happy?
It has been almost a month since I came back from the fifth edition of re:Invent—the main Amazon Web Services conference held in Las Vegas each year—and I take it upon myself to go ahead and bother you with a take-away post heavily influenced by my personal interest in the Internet of Things and, to a higher degree, smart home technologies.
Have an Echo Dot on the house
With an Amazon Echo Dot as swag for the 30,000+ participants, an Alexa smart home booth in the expo hall and sessions featuring names like Capital One, Intel and Boston Children’s Hospital, there was no doubt Amazon was going to focus, from the conference’s first day to the last, on ramping up its promotional efforts around its voice-activated speakers and the services–known or yet to be announced–that power them. One could not help but notice the release of the Google Home that very same month.
The Amazon Echo and its little brothers and sisters would be useless pieces of hardware (low-quality portable speakers at best) if they did not have skills and did not get new skills on a daily basis: with more than 4,000 skills, the Alexa ecosystem is growing rapidly. These capabilities allow customers to interact with their devices using voice commands and follow, long story short, a basic request (you invoke a skill and formulate an intent) - response (Alexa replies) pattern with lots of magic in between.
Magic? Learn a few tricks
Getting started with the Alexa Skills Kit is quite straightforward, and having an end-to-end, simple but working Alexa custom skill up and running is a matter of minutes (see lroguet/amzn-alexa-skill-demo on GitHub for a Node.js skill template).
An Alexa skill can be hosted either as a web service or on AWS Lambda and I would suggest you look into the second option to begin with. Again, Amazon Web Services is giving the Alexa ecosystem the attention it (finally) deserves and announced a new partner program: the Alexa service delivery program for partners, and not directly related to Alexa I must admit, a new certification for partners: the IoT competency.
If you are unsure where to begin there are plenty of professional services firms in the Amazon Web Services Partner Network (APN) that could help you get started and Nordcloud—my employer as I write these lines—is one of them. May I suggest, for example, you begin with one of the AWS instructor-led trainings I will deliver in Copenhagen, Helsinki, Stockholm or Oslo in 2017?
Amazon Lex is a new service for building conversational interfaces using voice and text. The same conversational engine that powers Alexa is now available to any developer, making it easy to bring sophisticated, natural language ‘chatbots’ to new and existing applications. The power of Alexa in the hands of every developer, without having to know deep learning technologies like speech recognition, has the potential of sparking innovation in entirely new categories of products and services. - Bringing the Magic of Amazon AI and Alexa to Apps on AWS
While I have not had the time to play with any of these services yet, I already have a few home project ideas but also feature ideas to try out with customers as soon as possible next year!
Can we get some more please!
Participants are now back at their respective employers and some of them are already looking into adding voice–starting with an Alexa skill–as yet another user interface to their product(s). While this looks exciting on paper and likely to make life at the office a bit more fun, a number of teams have already come back to reality as they realized they cannot get their hands on hardware.
That’s right. Amazon’s Alexa-enabled devices are currently only available for delivery in the U.S. and the U.K., which leaves out 194 countries (if you consider Taiwan a country). You might argue that one could develop a skill without dedicated hardware and use a web-based interface like Echosim.io to interact with Alexa. Granted. But how sexy is that in a demo to other teams or, more importantly, a demo to those who have the power (read money) to turn prototypes into products?
One would think Amazon Web Services had considered the issue but as far as I know even branch offices—in Scandinavia at least—cannot help acquire hardware. Customers are on their own and left with nothing to work with: quite an interesting situation the Earth’s Most Customer-Centric Company has put itself into.
So. Would you marry me?
Amazon definitely put a lot of emphasis on voice recognition–whether through Alexa-enabled devices or purely as services of the AI platform–but there are still a few things to iron out before the technology gets real traction. Availability is, as far as I and a few customers I have talked to since the event are concerned, a major issue that needs to be addressed very soon, or Alexa will not get much more than the 250,000 marriage proposals she has received so far.
Unless Echo, Echo Dot and the whole gang are released globally the relationship with the 194 other countries might just keep on being what it is today, somewhat distant. Or as Alexa would put it: “Let’s just be friends”.
… will most likely be available all over the interweb within seconds. I will therefore spare you (and me) all the low quality posts and will make the most of the sessions, after hours events and enjoy my first re:Invent instead.
Or maybe I actually will push fresh content to my online notes. Stay tuned.
A few weeks ago I decided it was time to deal with one of my 99 first world problems and simplify how I interact with the connected objects I tend to scatter around the house.
So I wasted a couple of hours searching the interweb–I obviously ended up watching YouTube videos one too many times–and found a rather interesting open-source project to help control these devices, but also track their states and automate some of them: Home Assistant
Home Assistant is an open-source home automation platform running on Python 3. Track and control all devices at home and automate control. Installation in less than a minute.
But while the folks behind Home Assistant do a terrific job, I soon realized (pretty much right after the next version came out) that the Raspberry Pi compatible Docker image was not built and pushed to the public repository during the release process (no complaints here; they already do a terrific job).
So how do I make sure I always run the latest version–because now I have to always run the latest version of that tool I didn’t know existed a few days ago, right?–within an hour of a new release?
Open a code editor
If you want to build your own Docker image with Home Assistant you could have a look at the bash script I wrote; otherwise you’re just a docker pull away from running one of my automated builds pushed to lroguet/rpi-home-assistant.
Please have a look at the GitHub project for more detailed instructions.
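For the curious, such a build essentially boils down to a Dockerfile along these lines. This is an illustrative sketch, not the contents of my repository; the base image, package names and paths are assumptions:

```dockerfile
# Hypothetical sketch: an ARM (Raspberry Pi) base image with Python 3,
# installing the latest Home Assistant release from PyPI.
FROM resin/rpi-raspbian:jessie

RUN apt-get update && \
    apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Pulls whatever version is latest on PyPI at build time.
RUN pip3 install homeassistant

VOLUME /config
EXPOSE 8123

CMD ["hass", "--config", "/config"]
```

An automated build triggered on each upstream release then keeps the published image within an hour or so of the latest version.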
And here I am. With a sweet Home Assistant dashboard.
My Raspberry Pi compatible Docker images with Home Assistant only have the standard Python package installed. If you’re planning on integrating your Z-Wave devices you’ll either have to do some work on your own or leave a comment below. If I’m not back to YouTube videos I could eventually give it a try.
If you read my previous post you should know that fourteenislands.io is served by an Nginx web server (Docker) running on a Raspberry Pi. But what started as a sandbox environment to host a few static pages is getting busier every day and I, among other things, needed to host a couple of RESTful web APIs on that Raspberry Pi (on a different domain name).
Automated Nginx reverse proxy for Docker
I’ll spare you the details as to why a reverse proxy, and how to automatically generate the reverse proxy configuration when ‘backend’ Docker containers are started and stopped, and suggest you read this interesting post by Jason Wilder.
Jason’s nginx-proxy targets the X86 architecture and cannot be used as is on a Raspberry Pi. You could use rpi-nginx-proxy instead which is the equivalent I published for the ARM architecture.
So, let’s start a proxy.
From a terminal on your RPi open your favorite text editor and save the following as docker-compose.yml
# docker-compose.yml for the nginx reverse proxy
nginx-proxy:
  container_name: nginx-proxy
  image: lroguet/rpi-nginx-proxy:latest
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
Start the Nginx reverse proxy ‘frontend’ container with the following command
$ docker-compose run -d --service-ports nginx-proxy
You don’t really need Docker Compose here and could run the same ‘frontend’ container with that one-liner:
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro lroguet/rpi-nginx-proxy
That was easy, wasn’t it?
Serving a domain (fourteenislands.io for example)
Now that the reverse proxy is up and running we can just start whatever ‘backend’ container we want to serve content for a specific domain (note the VIRTUAL_HOST environment variable in the docker-compose.yml file below).
Head back to your RPi terminal and favorite text editor: save the following as docker-compose.yml
# docker-compose.yml for lab.fourteenislands.io
# Assumes your generated/copied static site is in /home/pi/volumes/nginx
web:
  container_name: nginx
  image: lroguet/rpi-nginx:latest
  environment:
    - VIRTUAL_HOST=fourteenislands.io
  volumes:
    - /home/pi/volumes/nginx:/var/www/html
Start the Nginx ‘backend’ container with the following command
$ docker-compose run -d --service-ports web
Using Docker Compose makes more sense here but is obviously not required.
Starting a ‘backend’ container triggers a Docker start event, the reverse proxy configuration is generated and the ‘frontend’ Nginx reloaded.
When a ‘backend’ container is stopped or dies, a Docker die/stop event is triggered, the reverse proxy configuration is generated and the ‘frontend’ Nginx reloaded.
As always, have fun and if you have any questions, please leave a comment.
Update. fourteenislands.io is not currently served from a Raspberry Pi but from an Amazon S3 bucket.
Hosting a static website on a Raspberry Pi (RPi from now on) is quite straightforward. Serving that same website from a Dockerized Nginx HTTP server–on that same RPi–is a bit more interesting. Sure.
But what if the RPi decides to take the day off?
This post is mainly for students who attended one of my Architecting on AWS or System Operations on AWS sessions. It contains instructions, links to templates, tools and scripts to get started with Amazon Route 53 DNS failover between a primary site hosted on a RPi and a secondary site hosted on Amazon S3 & Amazon CloudFront. It is however not an exhaustive list of all the steps required for such a setup: use it at your own risk.
First you may want to give your RPi some HypriotOS love. There are other Linux distributions available for the Raspberry Pi but HypriotOS has, in my opinion, the best Docker support at the moment.
The page you’re reading right now (the whole fourteenislands.io domain for that matter) is written in Markdown and converted to a static site (HTML) with Hugo. Please go (no pun intended) have a look and come back here when you’ve written that novel of yours. I’ll wait.
Done? Good. You now want to serve the site you’ve generated directly from your RPi. We’ll do that with Nginx running in a Docker container. I’ve prepared some for you.
From a terminal on your RPi open your favorite text editor and save the following as docker-compose.yml
# docker-compose.yml
# Assumes your generated/copied static site is in /home/pi/volumes/nginx
nginx:
  container_name: nginx
  image: lroguet/rpi-nginx:latest
  ports:
    - "80:80"
  volumes:
    - /home/pi/volumes/nginx:/var/www/html
Start the Nginx container with the following command
$ docker-compose run -d --service-ports nginx
Amazon Route 53
In this section I assume you have already created an S3 bucket to store a copy of your static website and set up a CloudFront distribution to distribute that content (see Using CloudFront with Amazon S3 for instructions). I also assume you have the AWS CLI installed and configured.
Create DNS records and a health check with CloudFormation
First we want to create a primary record, a health check for that record and a secondary record to fail over to when the primary record is not available.
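A minimal sketch of that setup might look as follows. The domain, the IP address and the CloudFront distribution domain below are placeholders (the Z2FDTNDATAQYW2 hosted zone ID is the fixed one CloudFront aliases use); the template linked from this post is the reference:

```yaml
# Hypothetical sketch of Route 53 DNS failover: primary A record backed
# by a health check on the RPi, secondary alias to CloudFront.
Resources:
  PrimaryHealthCheck:
    Type: AWS::Route53::HealthCheck
    Properties:
      HealthCheckConfig:
        Type: HTTP
        IPAddress: 203.0.113.10   # your RPi's public IP (placeholder)
        Port: 80
  PrimaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.io.
      Name: example.io.
      Type: A
      TTL: '60'
      SetIdentifier: primary
      Failover: PRIMARY
      HealthCheckId: !Ref PrimaryHealthCheck
      ResourceRecords:
        - 203.0.113.10
  SecondaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.io.
      Name: example.io.
      Type: A
      SetIdentifier: secondary
      Failover: SECONDARY
      AliasTarget:
        HostedZoneId: Z2FDTNDATAQYW2   # fixed zone ID for CloudFront
        DNSName: d111111abcdef8.cloudfront.net
        EvaluateTargetHealth: false
```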
Create a CloudFormation stack from that template directly in the AWS console or save the template locally, hit the terminal and run the following script (replace the ParameterValue values with your own):
Once the health check, primary and secondary records have been created using the above CloudFormation template, the health check IPAddress and the primary record ResourceRecords should be updated automatically (with an hourly job on the RPi, for example) with the following script.
Altitude is a simple widget displaying the current altitude. The widget does not rely on the built-in barometric altimeter but retrieves the altitude from a third party elevation service (such as the Google Elevation API) based on the GPS position.
For feedback, bug reports and feature requests please leave a message in the comments section at the bottom of the page.
0.6.1 Switched back to HTTP since Garmin Connect Mobile can’t handle Amazon Web Services certificates.
0.6 Switched to HTTPS. Fixed Monkey C version compatibility check. Built with SDK 1.2.11.
0.5.1 Added support for fēnix 3 HR and vívoactive HR.
0.5 Moved backend to Amazon Web Services (Route 53, CloudFront, API Gateway & Lambda). Built with SDK 1.2.6.
0.4 Added support for fēnix™ 3 and D2 Bravo.
0.3 Added check for incompatible firmware. Built with SDK 1.1.2.
0.2 Added support for Forerunner® 920XT.
0.1 First release. Supports meters and feet depending on device settings. epix™ & vívoactive™.
Since version 0.5 the Altitude widget relies on a truly serverless backend powered by Amazon API Gateway & AWS Lambda. Below is a Datadog dashboard showing some of the “behind the scenes” metrics.
I only own a Garmin vívoactive™ watch and can only test the Altitude widget on other Garmin models through the Connect IQ simulator. I, therefore, cannot guarantee that the Altitude widget runs smoothly on the Forerunner® 920XT, epix™, fēnix™ 3, D2 Bravo or any other Connect IQ compatible Garmin device.
The AWS Certified SysOps Administrator – Associate exam validates technical expertise in deployment, management, and operations on the AWS platform. Exam concepts you should understand for this exam include: deploying, managing, and operating scalable, highly available, and fault tolerant systems on AWS, migrating an existing on-premises application to AWS, implementing and controlling the flow of data to and from AWS, selecting the appropriate AWS service based on compute, data, or security requirements, identifying appropriate use of AWS operational best practices, estimating AWS usage costs and identifying operational cost control mechanisms. - Amazon Web Services
When working with Amazon RDS, by default AWS is responsible for implementing which two management-related activities? (Pick 2 correct answers)
A. Importing data and optimizing queries
B. Installing and periodically patching the database software
C. Creating and maintaining automated database backups with a point-in-time recovery of up to five minutes
D. Creating and maintaining automated database backups in compliance with regulatory long-term retention requirements
Answers. B & C
You maintain an application on AWS to provide development and test platforms for your developers. Currently both environments consist of an m1.small EC2 instance. Your developers notice performance degradation as they increase network load in the test environment. How would you mitigate these performance issues in the test environment?
A. Upgrade the m1.small to a larger instance type
B. Add an additional ENI to the test instance
C. Use the EBS optimized option to offload EBS traffic
D. Configure Amazon Cloudwatch to provision more network bandwidth when network utilization exceeds 80%
Answer. A
Per the AWS Acceptable Use Policy, penetration testing of EC2 instances:
A. may be performed by the customer against their own instances, only if performed from EC2 instances.
B. may be performed by AWS, and is periodically performed by AWS.
C. may be performed by AWS, and will be performed by AWS upon customer request.
D. are expressly prohibited under all circumstances.
E. may be performed by the customer against their own instances with prior authorization from AWS.
Answer. E
You have been tasked with identifying an appropriate storage solution for a NoSQL database that requires random I/O reads of greater than 100,000 4kB IOPS. Which EC2 option will meet this requirement?
A. EBS provisioned IOPS
B. SSD instance store
C. EBS optimized instances
D. High Storage instance configured in RAID 10
Answer. B
Instance A and instance B are running in two different subnets A and B of a VPC. Instance A is not able to ping instance B. What are two possible reasons for this? (Pick 2 correct answers)
A. The routing table of subnet A has no target route to subnet B
B. The security group attached to instance B does not allow inbound ICMP traffic
C. The policy linked to the IAM role on instance A is not configured correctly
D. The NACL on subnet B does not allow outbound ICMP traffic
Answers. B & D
Your web site is hosted on 10 EC2 instances in 5 regions around the globe with 2 instances per region. How could you configure your site to maintain site availability with minimum downtime if one of the 5 regions was to lose network connectivity for an extended period of time?
A. Create an Elastic Load Balancer to place in front of the EC2 instances. Set an appropriate health check on each ELB.
B. Establish VPN Connections between the instances in each region. Rely on BGP to failover in the case of a region wide connectivity outage
C. Create a Route 53 Latency Based Routing Record Set that resolves to an Elastic Load Balancer in each region. Set an appropriate health check on each ELB.
D. Create a Route 53 Latency Based Routing Record Set that resolves to Elastic Load Balancers in each region and has the Evaluate Target Health flag set to true.
Answer. D
You run a stateless web application with the following components: Elastic Load Balancer (ELB), 3 Web/Application servers on EC2, and 1 MySQL RDS database with 5000 Provisioned IOPS. Average response time for users is increasing. Looking at CloudWatch, you observe 95% CPU usage on the Web/Application servers and 20% CPU usage on the database. The average number of database disk operations varies between 2000 and 2500. Which two options could improve response times? (Pick 2 correct answers)
A. Choose a different EC2 instance type for the Web/Application servers with a more appropriate CPU/memory ratio
B. Use Auto Scaling to add additional Web/Application servers based on a CPU load threshold
C. Increase the number of open TCP connections allowed per web/application EC2 instance
D. Use Auto Scaling to add additional Web/Application servers based on a memory usage threshold
Answers. A & B
Which features can be used to restrict access to data in S3? (Pick 2 correct answers)
A. Create a CloudFront distribution for the bucket.
B. Set an S3 bucket policy.
C. Use S3 Virtual Hosting.
D. Set an S3 ACL on the bucket or the object.
E. Enable IAM Identity Federation.
Answers. B & D
You need to establish a backup and archiving strategy for your company using AWS. Documents should be immediately accessible for 3 months and available for 5 years for compliance reasons. Which AWS service fulfills these requirements in the most cost effective way?
A. Use StorageGateway to store data to S3 and use life-cycle policies to move the data into Redshift for long-time archiving
B. Use DirectConnect to upload data to S3 and use IAM policies to move the data into Glacier for longtime archiving
C. Upload the data on EBS, use life-cycle policies to move EBS snapshots into S3 and later into Glacier for long-time archiving
D. Upload data to S3 and use life-cycle policies to move the data into Glacier for long-time archiving
Answer. D
What does the IAM policy allow? (Pick 3 correct answers)
A. The user is allowed to read objects from all S3 buckets owned by the account
B. The user is allowed to write objects into the bucket named ‘corporate_bucket’
C. The user is allowed to change access rights for the bucket named ‘corporate_bucket’
D. The user is allowed to read objects in the bucket named ‘corporate_bucket’ but not allowed to list the objects in the bucket
E. The user is allowed to read objects from the bucket named ‘corporate_bucket’