12.2 Whizlabs - Diagnosis Test
A key pair is used to encrypt and decrypt login information for EC2 instances. Linux instances have no password; you use a key pair to log in over SSH (Secure Shell) on port 22. With Windows instances, you use a key pair to obtain the administrator password and then log in using RDP (Remote Desktop Protocol) on port 3389.
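As a rough illustration, here is a minimal boto3 sketch that creates a key pair and saves the private key for later SSH use; the key name, file path, and region are placeholders, not from the original notes.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Create a key pair; AWS returns the private key material exactly once.
response = ec2.create_key_pair(KeyName="example-key")

# Save the private key locally so it can be used with:
#   ssh -i example-key.pem ec2-user@<public-ip>
with open("example-key.pem", "w") as f:
    f.write(response["KeyMaterial"])
```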
For DynamoDB tables, what are the scenarios in which you would want to enable Cross-Region Replication? Based on the AWS documentation, the answer should include the following (a code sketch follows the list):
Efficient disaster recovery
Faster reads (read from the closest data center)
Easier traffic management (distribute the read workload across tables)
Easy regional migration (by creating a read replica in a new region and then promoting the replica to be the master, you can migrate your app to that region more easily)
Live data migration (to move a table from one region to another, create a replica of the table from the source region in the destination region; when the tables are in sync, switch your app to write to the destination region)
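A hedged boto3 sketch of enabling cross-region replication with the (version 2017.11.29) global tables API; it assumes tables named `Sessions` with DynamoDB Streams enabled already exist in both regions, and the table and region names are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Combine existing per-region replicas of the "Sessions" table into one global table.
dynamodb.create_global_table(
    GlobalTableName="Sessions",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)
```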
In VPCs with private and public subnets, in which should the web servers ideally be launched? Answer: private subnets. With AWS, "ideally" means highly available and fault tolerant with the best security possible. So the web servers (most likely EC2 instances) should sit in a private subnet behind an ELB (or reverse proxy) in the public subnet, which allows access from the Internet only on whatever ports the application exposes through the ELB, optionally fronted by a WAF (Web Application Firewall).
Which services are suitable for storing session state data? Answer includes Amazon RDS, DynamoDB, and ElastiCache.
What is mandatory when defining a CloudFormation template? Answer: Resources. The Resources section is required; it specifies the stack resources and their properties, such as an Amazon EC2 instance or an Amazon S3 bucket. You can refer to resources in the Resources and Outputs sections of the template.
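To illustrate that Resources is the only required top-level section, here is a hedged boto3 sketch that creates a stack from a template containing nothing but a single S3 bucket resource; the stack name and logical ID are placeholders.

```python
import json
import boto3

# A template is valid with only the Resources section present.
template = {
    "Resources": {
        "ExampleBucket": {            # logical ID, usable in Resources/Outputs references
            "Type": "AWS::S3::Bucket"
        }
    }
}

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
cloudformation.create_stack(
    StackName="minimal-resources-only-stack",
    TemplateBody=json.dumps(template),
)
```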
AWS Import/Export accelerates transferring data between the AWS cloud and portable storage devices. AWS Import/Export is a good choice if you have 16 Terabytes or less of data to import into Amazon S3, Amazon EBS, or Glacier. You can also export data from S3 with AWS Import/Export. However, before Glacier data can be exported, it needs to be restored to Amazon S3 using the S3 Life-cycle Restore feature.
Spot Instances are normally used for stateless web services, image rendering, big data analytics, large-volume data processing, batch processing (e.g. processing a large backlog), and massively parallel computations. Since Spot Instances typically cost 50–90% less, you can increase compute capacity by 2–10 times within the same budget.
Snapshots are incremental backups of EBS volumes. You can easily create a snapshot from a volume while the instance is running and the volume is in use.
I2 instances are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications. They are well suited for the following scenarios:
NoSQL databases (Cassandra or MongoDB)
Clustered databases
OLTP (Online Transaction Processing) systems
A Placement Group is a logical grouping of instances within a single AZ. Placement Groups are recommended for applications that benefit from low network latency, high network throughput, or both. To provide the lowest latency, and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking.
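As a hedged sketch, a cluster placement group can be created and then referenced at launch; the group name, AMI ID, and instance type below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a cluster placement group for low-latency, high-throughput networking.
ec2.create_placement_group(GroupName="low-latency-group", Strategy="cluster")

# Launch instances of a type that supports enhanced networking into the group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.large",          # placeholder enhanced-networking type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "low-latency-group"},
)
```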
In the Billing dashboard of your AWS account console, you can configure a set of elements: Bills, Cost Explorer, Budgets, Reports, etc.
Read Replicas are available in Amazon RDS for MySQL, MariaDB, and PostgreSQL. When you create a read replica, you specify an existing DB instance as the source; RDS takes a snapshot of the source instance and creates a read-only instance from the snapshot. For these three engines, RDS uses the engine's native asynchronous replication to update the read replica whenever there is a change to the source DB instance. Read replicas in a second region are not supported for PostgreSQL.
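A minimal boto3 sketch of creating a read replica from an existing source instance; the instance identifiers and class are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# RDS snapshots the source instance, builds a read-only replica from the snapshot,
# and then keeps it updated via the engine's native asynchronous replication.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica",        # placeholder replica name
    SourceDBInstanceIdentifier="mydb-source",   # placeholder existing source DB
    DBInstanceClass="db.t3.medium",
)
```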
VM Import/Export enables customers to import VM images in order to create Amazon EC2 instances. Customers can also export previously imported EC2 instances to create VMs. Customers can use VM Import/Export to leverage their previous investments in building VMs by migrating their VMs to Amazon EC2.
Which programming languages have an officially supported AWS SDK? Answer includes: Java, C++, PHP, .NET, Python, Go, Node.js, Ruby, AWS Mobile SDK.
You can configure the Amazon SQS message retention period to a value from 1 minute to 14 days, and the default is 4 days. Once the message retention limit is reached, your messages are automatically deleted.
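For example, the retention period is set in seconds (60 to 1,209,600, i.e. 1 minute to 14 days); the queue name here is a placeholder.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

queue_url = sqs.create_queue(QueueName="example-queue")["QueueUrl"]

# Raise retention from the 4-day default to the 14-day maximum (in seconds).
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"MessageRetentionPeriod": str(14 * 24 * 60 * 60)},
)
```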
Always use IAM Roles for accessing AWS resources from EC2 instances. IAM Roles are designed so that your application can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM Roles.
When you cannot connect to a DB instance on AWS, the likely reasons are:
The DB is still being created
Your local firewall is stopping the communication traffic
The security group for the DB is not properly configured
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Amazon RDS uses several different technologies to provide failover support. Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon's failover technology. SQL Server DB instances use SQL Server mirroring. Amazon Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single AWS Region, regardless of whether the instances in the DB cluster span multiple Availability Zones.
The URL (endpoint) of your S3 static website is "http://[bucketname].s3-website-[Region].amazonaws.com". When you configure a bucket for website hosting, the website is available via the region-specific website endpoint. Website endpoints are different from the endpoints where you send REST API requests.
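A hedged sketch of enabling static website hosting on a bucket; the bucket name and document keys are placeholders.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Enable website hosting; the site is then served from the region-specific
# website endpoint, e.g. http://example-bucket.s3-website-us-east-1.amazonaws.com
s3.put_bucket_website(
    Bucket="example-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```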
Having EC2 instances hosting your applications in multiple subnets (hence multiple AZs) and placing them behind an ELB is the basic building block of a high-availability architecture in AWS. To make it elastic and scalable as well, add Auto Scaling.
ELB(with multi-AZ) + EC2(with multi-AZ) + Auto-scaling => basic building block of HA + elastic + scalable architecture in AWS.
An ELB is best used for EC2 instances across multiple AZs. You cannot use an ELB to distribute traffic across Regions.
When you create an Auto Scaling group, you must specify a launch configuration. You can use the same launch configuration with multiple Auto Scaling groups. However, you can only specify one launch configuration for an Auto Scaling group at a time, and you cannot modify a launch configuration after you've created it. Therefore, if you want to change the launch configuration for your Auto Scaling group, you must create a new launch configuration and then update your Auto Scaling group with the new launch configuration.
Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes below this size. You can specify the maximum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes above this size. If you specify the desired capacity, either when you create the group or at any time thereafter, Auto Scaling ensures that your group has this many instances. If you specify scaling policies, then Auto Scaling can launch or terminate instances as demand on your application increases or decreases.
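A minimal boto3 sketch tying the two notes above together: a launch configuration is created first and then referenced by the Auto Scaling group along with its minimum, maximum, and desired sizes; all identifiers (AMI, security group, subnets, names) are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# A launch configuration is the template for the group's EC2 instances.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v1",
    ImageId="ami-0123456789abcdef0",            # placeholder AMI
    InstanceType="t3.micro",
    SecurityGroups=["sg-0123456789abcdef0"],    # placeholder security group
)

# The group keeps the instance count between MinSize and MaxSize,
# starting at DesiredCapacity, across two subnets (two AZs).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v1",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa0aaa,subnet-0bbb0bbb",  # placeholder subnets
)
```

To change the launch configuration later, create a new one (for example "web-lc-v2") and call update_auto_scaling_group with the new name, since an existing launch configuration cannot be modified.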
Auto-scaling components:
Auto-scaling groups: Your EC2 instances are organized into groups so that they can be treated as a logical unit for the purposes of scaling and management. When you create a group, you can specify its minimum, maximum, and desired number of EC2 instances.
Launch Configuration: Your group uses a launch configuration as a template for its EC2 instances. When you create a launch configuration, you can specify information such as the AMI ID, instance type, key pair, security groups, and block device mapping for your instances.
Scaling Plans: A scaling plan tells Auto Scaling when and how to scale. There are four methods: maintain current instance levels at all times, manual scaling, scheduled scaling, and dynamic scaling.
Auto Scaling can launch or terminate EC2 instances in the group in the following ways:
Scheduled scaling (scaling by schedule): scaling actions are performed automatically as a function of time and date.
Dynamic scaling (scaling by policy, based on demand): you can have two policies, one for scaling out and one for scaling in. There are three scaling policy types (see the sketch after this list):
Simple scaling—Increase or decrease the current capacity of the group based on a single scaling adjustment.
Step scaling—Increase or decrease the current capacity of the group based on a set of scaling adjustments, known as step adjustments, that vary based on the size of the alarm breach.
Target tracking scaling—Increase or decrease the current capacity of the group based on a target value for a specific metric. This is similar to the way that your thermostat maintains the temperature of your home – you select a temperature and the thermostat does the rest.
Manual Scaling. Specify only the change in the maximum, minimum, or desired capacity of your Auto Scaling group.
Maintain current instance levels at all times. To maintain the current instance levels, Auto Scaling performs a periodic health check on running instances within an Auto Scaling group. When Auto Scaling finds an unhealthy instance, it terminates that instance and launches a new one.
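Here is the target-tracking sketch referenced in the list above: a hedged boto3 example that keeps the group's average CPU utilization around 50%; the group and policy names are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target tracking: Auto Scaling adds or removes instances to hold the metric near the target.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",           # placeholder group
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```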
Attach EC2 Instances to Your Auto Scaling Group (Manual Scaling). Auto Scaling provides you with an option to enable Auto Scaling for one or more EC2 instances by attaching them to your existing Auto Scaling group. After the instances are attached, they become a part of the Auto Scaling group. The instance that you want to attach must meet the following criteria (see the sketch after this list):
The instance is in the running state
The AMI used to launch the instance must still exist
The instance is not a member of another Auto-scaling group
The instance is in the same AZ as the Auto-scaling group
If the Auto-scaling group has an attached load balancer, the instance and the load balancer must both be in EC2-Classic or the same VPC. If the Auto-scaling group has an attached target group, the instance and the Application Load Balancer must both be in the same VPC.
If you attach an instance to an Auto Scaling group that has an attached load balancer, the instance is registered with the load balancer. If you attach an instance to an Auto Scaling group that has an attached target group, the instance is registered with the target group.
When you attach instances, Auto Scaling increases the desired capacity of the group by the number of instances being attached. If the number of instances being attached plus the desired capacity exceeds the maximum size of the group, the request fails.
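A hedged boto3 sketch of manually attaching a running instance to an existing group (Auto Scaling then increases the desired capacity by one); the instance and group IDs are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# The instance must be running, in the group's AZ/VPC, and not in another group.
autoscaling.attach_instances(
    InstanceIds=["i-0123456789abcdef0"],   # placeholder running instance
    AutoScalingGroupName="web-asg",
)
```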
Attaching a Load Balancer to Your Auto Scaling Group. Amazon EC2 Auto Scaling integrates with Elastic Load Balancing to enable you to attach one or more load balancers to an existing Auto Scaling group. After you attach the load balancer, it automatically registers the instances in the group and distributes incoming traffic across the instances. When you attach a load balancer, it enters the Adding state while registering the instances in the group. After all instances in the group are registered with the load balancer, it enters the Added state. After at least one registered instance passes the health checks, it enters the InService state. After the load balancer enters the InService state, Amazon EC2 Auto Scaling can terminate and replace any instances that are reported as unhealthy. Note that if no registered instances pass the health checks (for example, due to a misconfigured health check), the load balancer doesn't enter the InService state, so Amazon EC2 Auto Scaling wouldn't terminate and replace the instances. When you detach a load balancer, it enters the Removing state while deregistering the instances in the group. Elastic Load Balancing sends data about your load balancers and EC2 instances to Amazon CloudWatch. CloudWatch collects performance data for your resources and presents it as metrics. After you attach a load balancer to your Auto Scaling group, you can create scaling policies that use Elastic Load Balancing metrics to scale your application automatically.
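A hedged boto3 sketch of attaching load balancers to an existing group; the first call is for a Classic Load Balancer, the second for an ALB/NLB target group, and all names and ARNs are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Classic Load Balancer: attach by load balancer name.
autoscaling.attach_load_balancers(
    AutoScalingGroupName="web-asg",
    LoadBalancerNames=["web-classic-elb"],
)

# Application/Network Load Balancer: attach by target group ARN.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="web-asg",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tg/0123456789abcdef"
    ],
)
```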
When you make a change to security group or network ACL rules, the changes are applied immediately.
By default, instances that you launch into a virtual private cloud (VPC) can't communicate with your own network. You can enable access to your network from your VPC by attaching a virtual private gateway to the VPC, creating a custom route table, updating your security group rules, and creating an AWS managed VPN connection. Although the term VPN connection is a general term, in the Amazon VPC documentation, a VPN connection refers to the connection between your VPC and your own network. AWS supports Internet Protocol security (IPsec) VPN connections. A VPN connection consists of the following components (a code sketch follows these notes):
Virtual Private Gateway. A virtual private gateway is the VPN concentrator on the Amazon side of the VPN connection. You create a virtual private gateway and attach it to the VPC from which you want to create the VPN connection.
Customer Gateway. A customer gateway is a physical device or software application on your side of the VPN connection. To create a VPN connection, you must create a customer gateway resource in AWS, which provides information to AWS about your customer gateway device.
The VPN tunnel comes up when traffic is generated from your side of the VPN connection. The virtual private gateway is not the initiator; your customer gateway must initiate the tunnels. If your VPN connection experiences a period of idle time (usually 10 seconds, depending on your configuration), the tunnel may go down. To prevent this, you can use a network monitoring tool to generate keepalive pings; for example, by using IP SLA.
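A hedged boto3 sketch of the components described above: a customer gateway for the on-premises side, a virtual private gateway attached to the VPC, and the VPN connection between them; the public IP, ASN, and IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Customer gateway: describes the device on your side of the connection.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.12",   # placeholder public IP of your device
    BgpAsn=65000,              # placeholder ASN
)["CustomerGateway"]

# Virtual private gateway: the VPN concentrator on the Amazon side, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0123456789abcdef0")

# The IPsec VPN connection between the two gateways (static routing in this sketch).
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)
```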
Linux AMIs (Amazon Machine Images) use one of two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). The main difference between them is the way in which they boot and whether they can take advantage of special hardware extensions (CPU, network, storage) for better performance.
HVM AMIs are presented with a fully virtualized set of hardware and boot by executing the master boot record of the root block device of your image. This virtualization type provides the ability to run an operating system directly on top of a virtual machine without any modification, as if it were run on the bare-metal hardware. The Amazon EC2 host system emulates some or all of the underlying hardware that is presented to the guest. Unlike PV guests, HVM guests can take advantage of hardware extensions that provide fast access to the underlying hardware on the host system. Paravirtual guests traditionally performed better with storage and network operations than HVM guests because they could leverage special drivers for I/O that avoided the overhead of emulating network and disk hardware, whereas HVM guests had to translate these instructions to emulated hardware. Paravirtual guests can run on host hardware that does not have explicit support for virtualization, but they cannot take advantage of special hardware extensions such as enhanced networking or GPU processing. So traditionally, PV guests had better performance than HVM guests in many cases, but because of enhancements in HVM virtualization and the availability of PV drivers for HVM AMIs, this is no longer true.
When you need storage that is durable, cost-efficient, scalable, and of unknown capacity, choose S3.
Both Amazon CloudWatch and AWS Trusted Advisor can be used to monitor resource utilization.
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.
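A hedged boto3 sketch of enabling flow logs on a VPC and delivering them to CloudWatch Logs; the VPC ID, log group, and IAM role ARN are placeholders, and the role must already allow log delivery.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Capture accepted and rejected IP traffic for every network interface in the VPC.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],          # placeholder VPC
    ResourceType="VPC",
    TrafficType="ALL",
    LogGroupName="vpc-flow-logs",                   # CloudWatch Logs log group
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```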
AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
Cost Explorer is a free tool that you can use to view charts of your costs.
Amazon EBS Snapshots.
You can backup the data on your EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all of the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.
When you create an EBS volume based on a snapshot, the new volume begins as an exact replica of the original volume that was used to create the snapshot. The replicated volume loads data lazily in the background so that you can begin using it immediately. If you access data that hasn't been loaded yet, the volume immediately downloads the requested data from Amazon S3, and then continues loading the rest of the volume's data in the background.
Snapshots of EBS volumes occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume, so you can easily create a snapshot from a volume while the instance is running and the volume is in use.
How incremental backups work: the first snapshot copies all of the blocks on the volume; each subsequent snapshot saves only the blocks that changed since the previous snapshot and references the unchanged blocks from earlier snapshots.
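A hedged boto3 sketch of taking a point-in-time snapshot and waiting for it to leave the pending state; the volume ID and description are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The snapshot is point-in-time immediately, but stays "pending" until
# all modified blocks have been copied to Amazon S3.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",   # placeholder volume
    Description="nightly backup",
)

ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
```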
A NAT instance must be provisioned in a public subnet, and the private subnet's route table must have a route that sends Internet-bound traffic to it.
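A hedged boto3 sketch of wiring a NAT instance into a private subnet's route table; it also disables the source/destination check, which a NAT instance requires, and all IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A NAT instance forwards traffic for other hosts, so source/dest check must be off.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",          # placeholder NAT instance in a public subnet
    SourceDestCheck={"Value": False},
)

# Send the private subnet's Internet-bound traffic to the NAT instance.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",      # placeholder private subnet route table
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId="i-0123456789abcdef0",
)
```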
If you want a fleet of instances to be created with public IP addresses, you can enable auto-assign public IP at the subnet level; all instances launched in that subnet will then get a public IP address by default.
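A hedged sketch of turning on auto-assign public IP for a subnet; the subnet ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Instances launched into this subnet will now get a public IP by default.
ec2.modify_subnet_attribute(
    SubnetId="subnet-0123456789abcdef0",
    MapPublicIpOnLaunch={"Value": True},
)
```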
You can only have one Internet Gateway attached to your VPC at one time.
AWS Database Migration Service (DMS) helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
You can use Amazon CloudWatch Logs to monitor, access, store, and analyze your log files from Amazon EC2 instances, CloudTrail, Route 53, and other sources. You can then retrieve the associated log data from CloudWatch Logs. The following services are used in conjunction with CloudWatch Logs: AWS CloudTrail, AWS IAM, Amazon Kinesis Data Streams, AWS Lambda.