12.6 Whizlabs - Practice Test IV

  1. Elastic Network Interfaces (ENI). An elastic network interface (or network interface in some docs) is a logical networking component in a VPC that represents a virtual network card. You can create a network interface, attach it to an instance, detach it from an instance, and attach it to another instance. The attributes of a network interface follow it as it's attached or detached from an instance and reattached to another instance. When you move a network interface from one instance to another, network traffic is redirected to the new instance. You can also modify the attributes of your network interface, including changing its security groups and managing its IP addresses. Every EC2 instance in a VPC has a default network interface, called the primary network interface (eth0). You cannot detach the primary network interface from an instance. You can create and attach additional network interfaces.

  2. You can attach an Elastic Network Interface (ENI) to an instance when it is running (hot attach), when it is stopped (warm attach), or when the instance is being launched (cold attach). An ENI is a virtual network interface that you can attach to an instance in a VPC. An ENI can have the following:

    1. a primary private IP address

    2. one or more secondary private IP addresses

    3. one elastic IP address per private IP address

    4. one public IP address, which can be auto-assigned to the elastic network interface for eth0 when you launch an instance.

    5. one or more security groups

    6. a MAC address

    7. a source/destination check flag

    8. a description


  3. Storage Gateways: File Gateway, Stored Volumes Gateway, Cached Volumes Gateway, Tape Gateway. In the cached volume mode, your data is stored in Amazon S3 and a cache of the frequently accessed data is maintained locally by the gateway. With this mode, you can achieve cost savings on primary storage, and minimize the need to scale your storage on-premises, while retaining low-latency access to your most used data. In the stored volume mode, data is stored on your local storage with volumes backed up asynchronously as Amazon EBS snapshots stored in Amazon S3. This provides durable and inexpensive off-site backups. You can recover these backups locally to your gateway or in-cloud to Amazon EC2, for example, if you need replacement capacity for disaster recovery.

  4. AWS S3 Life-cycle policies.
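Note 4 only names the feature, so here is a minimal sketch of what a lifecycle configuration document looks like (the prefix, day counts, and storage classes are hypothetical values): transition `logs/` objects to STANDARD_IA after 30 days, to GLACIER after 90 days, and expire them after a year.

```python
import json

# A minimal S3 lifecycle configuration (hypothetical values). This is the
# JSON document you would attach to a bucket; it is built here as a plain
# dict so the structure is easy to inspect.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-logs",
            "Filter": {"Prefix": "logs/"},   # only applies to logs/ keys
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},     # delete after one year
        }
    ]
}

print(json.dumps(lifecycle_configuration, indent=2))
```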

  5. Access Logs for Your Elastic Load Balancer. Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files. You can disable access logging at any time. Elastic Load Balancing supports server-side encryption for access logs for your Application Load Balancer. Each access log file is automatically encrypted before it is stored in your S3 bucket and decrypted when you access it.

  6. To ensure that all objects uploaded to the bucket are set to public read, you can either set permissions on each object to public read during upload, or configure a bucket policy that grants public read on all objects in the bucket.
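The bucket-policy route from the note above looks roughly like this (the bucket name is hypothetical): a single statement granting anonymous `s3:GetObject` on every key in the bucket.

```python
import json

# A bucket policy (hypothetical bucket name) granting anonymous read
# access to every object, so uploads are publicly readable regardless
# of per-object permissions.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",                 # anyone
            "Action": "s3:GetObject",         # read objects only
            "Resource": "arn:aws:s3:::example-bucket/*",  # all keys
        }
    ],
}

print(json.dumps(public_read_policy, indent=2))
```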

  7. Characteristics of a standard reserved instance:

    1. Its Availability Zone can be changed: you can modify a Reserved Instance to move it between AZs within the same Region.

    2. It can be applied to instances launched by Auto-scaling.

    3. It can be used to lower costs.

    4. It is specific to an instance family, but the instance size within that family can be varied. When you purchase a Reserved Instance, you select the instance type as an option.

  8. Instance user-data is a script that runs when the instance launches; it can also be retrieved by the instance later on. Instance meta-data is data about your instance that you can use to configure or manage the running instance.

  9. For SSH access, the protocol has to be TCP.

  10. To restrict access to the content of your static S3 website, you can remove public read access and use signed URLs with expiry dates.
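To make the signed-URL idea concrete, here is a toy, stdlib-only sketch of the mechanism (in practice you would call an AWS SDK to presign URLs; the signing key, bucket name, and parameter names below are illustrative): an expiry timestamp and an HMAC over the URL plus expiry are embedded as query parameters, so a tampered or expired link fails validation.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode, urlparse, parse_qs

SECRET = b"hypothetical-signing-key"  # stands in for real AWS credentials

def sign_url(base_url, expires_in, now=None):
    """Attach an expiry timestamp and an HMAC signature over
    (URL + expiry) as query parameters."""
    expires = int((now if now is not None else time.time()) + expires_in)
    payload = "{}:{}".format(base_url, expires).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return "{}?{}".format(base_url, urlencode({"Expires": expires, "Signature": signature}))

def is_valid(url, now=None):
    """Reject the URL if it has expired or the signature does not match."""
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    expires = int(params["Expires"][0])
    if (now if now is not None else time.time()) > expires:
        return False  # the link has expired
    base_url = "{}://{}{}".format(parsed.scheme, parsed.netloc, parsed.path)
    payload = "{}:{}".format(base_url, expires).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["Signature"][0])

url = sign_url("https://example-bucket.s3.amazonaws.com/index.html", 60, now=1000.0)
print(is_valid(url, now=1030.0))  # True: within the 60-second window
print(is_valid(url, now=2000.0))  # False: past the expiry
```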

  11. Enhanced Networking and Placement Groups.

  12. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. OpsWorks has three offerings, AWS Opsworks for Chef Automate, AWS OpsWorks for Puppet Enterprise, and AWS OpsWorks Stacks. Look for the term "chef" or "recipes" or "cook books" and think OpsWorks.

  13. Controlling Which Auto Scaling Instances Terminate During Scale In. With each Auto Scaling group, you control when it adds instances (referred to as scaling out) or removes instances (referred to as scaling in) from your network architecture. When you configure automatic scale in, you must decide which instances should terminate first and set up a termination policy. You can also use instance protection to prevent specific instances from being terminated during automatic scale in. In total, there are three scale-in strategies. Auto Scaling allows for notification via SNS, so if that is enabled, it will send out the notification accordingly.

    1. Default Termination Policy. The default termination policy is designed to help ensure that your network architecture spans Availability Zones evenly.

    2. Customizing the Termination Policy.

    3. Instance Protection.

  14. Amazon S3 Event Notifications enable you to send alerts, run workflows, or perform other actions in response to changes in your objects stored in S3. Notifications can be sent via SNS, SQS, or to a Lambda function (depending on the bucket's Region). You can use S3 event notifications to set up triggers to perform actions including transcoding media files when they are uploaded, processing data files when they become available, and synchronizing S3 objects with other data stores.
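A notification configuration for the transcoding trigger mentioned above might look like this (the Lambda ARN, account ID, and filter suffix are hypothetical): invoke a function whenever a `.jpg` object is created in the bucket.

```python
import json

# A bucket notification configuration (hypothetical ARN) that invokes a
# Lambda function on every object-created event whose key ends in .jpg.
notification_configuration = {
    "LambdaFunctionConfigurations": [
        {
            "Id": "transcode-on-upload",
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:transcode",
            "Events": ["s3:ObjectCreated:*"],   # PUT, POST, copy, multipart
            "Filter": {
                "Key": {"FilterRules": [{"Name": "suffix", "Value": ".jpg"}]}
            },
        }
    ]
}

print(json.dumps(notification_configuration, indent=2))
```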

  15. IAM roles are a global service available across all Regions. Creating an AMI of an instance does not capture the instance's role in the AMI. You need to assign the existing IAM role to the EC2 instances in the new Region.

  16. AWS Directory Service. (Question 27.)

    1. AWS Directory Service provides multiple ways to use Amazon Cloud Directory and Microsoft Active Directory (AD) with other AWS services. Directories store information about users, groups, and devices, and administrators use them to manage access to information and resources. AWS Directory Service provides multiple directory choices for developers who want to use existing Microsoft AD or Lightweight Directory Access Protocol (LDAP)–aware applications in the cloud.

    2. You can choose directory services with the features and scalability that best meet your needs.

      1. If you need AD or LDAP for your applications in the cloud, the recommended AWS Directory Service options are:

        1. Select AWS Directory Service for Microsoft Active Directory (Enterprise Edition) if you need an actual Microsoft Active Directory in the AWS Cloud.

        2. Use AD Connector if you only need to allow your on-premises users to log in to AWS applications and services with their Active Directory credentials. You can also use AD Connector to join Amazon EC2 instances to your existing Active Directory domain.

        3. Use Simple AD if you need a low-scale, low-cost directory with basic Active Directory compatibility that supports Samba 4–compatible applications, or you need LDAP compatibility for LDAP-aware applications.

      2. Other needs see AWS docs...

  17. Multipart Upload. The multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects to an S3 bucket or make a copy of an existing object. If you are using multipart upload for S3, you can resume on failure. The advantages of multipart upload:

    1. Improved throughput. You can upload parts in parallel to improve throughput.

    2. Quick recovery from any network issues. Smaller part size minimizes the impact of restarting a failed upload due to a network error.

    3. Pause and resume object uploads. You can upload object parts over time. Once you initiate a multipart upload there is no expiry; you must explicitly complete or abort the multipart upload.

    4. Begin an upload before you know the final object size. You can upload an object as you are creating it.
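The part-splitting step behind the advantages above can be sketched as plain chunking (the part size is S3's real 5 MiB minimum for all parts except the last; the demo shrinks it so the example is cheap to run). Each part gets a part number starting at 1 and can be uploaded or retried independently, which is what enables the parallelism and resume described above.

```python
import io

PART_SIZE = 5 * 1024 * 1024  # 5 MiB: S3's minimum part size for every
                             # part except the last one

def split_into_parts(data, part_size=PART_SIZE):
    """Yield (part_number, chunk) pairs; part numbers start at 1,
    matching what the multipart upload API expects."""
    stream = io.BytesIO(data)
    part_number = 1
    while True:
        chunk = stream.read(part_size)
        if not chunk:
            break
        yield part_number, chunk
        part_number += 1

# Demo with a tiny part size: 25 bytes in 10-byte parts.
parts = list(split_into_parts(b"x" * 25, part_size=10))
print([(n, len(c)) for n, c in parts])  # [(1, 10), (2, 10), (3, 5)]
```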

  18. S3 Bucket Server Access Logging. Server access logging provides detailed records for the requests made to a bucket. Server access logs are useful for many applications because they give bucket owners insight into the nature of requests made by clients not under their control. By default, Amazon Simple Storage Service (Amazon S3) doesn't collect server access logs. You can go to properties of your bucket and enable server access logging.

  19. If your workload in an S3 bucket routinely exceeds 100 PUT/LIST/DELETE requests per second or more than 300 GET requests per second, then you need to add a random prefix to the key names of the objects.
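One common randomizing scheme for this is to prepend a short hash-derived prefix to each key (the 4-hex-character width here is an illustrative choice), so keys spread across S3's index partitions instead of sharing one lexicographic prefix such as a date:

```python
import hashlib

def prefixed_key(key):
    """Prepend a short, deterministic hash prefix to an object key so
    sequential keys (e.g. date-based names) no longer sort together."""
    prefix = hashlib.md5(key.encode()).hexdigest()[:4]
    return "{}/{}".format(prefix, key)

print(prefixed_key("2014-01-01/photo1.jpg"))
```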

  20. VPC Peering gives you a peering connection which is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account within a single Region (actually, you can also have inter-region peering).

  21. AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and manage applications in the AWS Cloud. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. Unlike other PaaS solutions, with AWS Elastic Beanstalk, developers retain full control over the AWS resources powering their application.

  22. AWS IoT Core is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices. IoT Core can support billions of devices and trillions of messages, and can process and route those messages to AWS endpoints and to other devices reliably and securely.

  23. After you create IAM users and passwords for each, users can sign in to the AWS Management Console for your AWS account with a special URL. By default, the sign-in URL for your account includes your account ID. You can create a unique sign-in URL for your account so that the URL includes a name instead of an account ID. By default, the format of the special URL is: https://[aws-account-ID-or-alias].signin.aws.amazon.com/console

  24. Encryption of EBS volumes is supported on all EBS volume types (it has nothing to do with instance types).

  25. Using AWS Security Token Service from an identity broker to issue short-lived AWS credentials can be used to integrate AWS IAM with an on-premises LDAP directory service.

  26. Copy the AMI of your instances to another Region to implement disaster recovery.

  27. Cross-Region/Multi-AZ => DR/HA.

  28. The AMIs can differ from Region to Region, hence if you want to implement disaster recovery as a best practice or launch consistent instances, you need to copy the AMI from Region to Region.

  29. Elastic Load Balancing supports three types of load balancers: Application Load Balancers, Network Load Balancers, and Classic Load Balancers.

  30. Application Load Balancer:

    1. A load balancer serves as the single point of contact for clients. The load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. This increases the availability of your application. You add one or more listeners to your load balancer.

    2. A listener checks for connection requests from clients, using the protocol and port that you configure, and forwards requests to one or more target groups, based on the rules that you define. Each rule specifies a target group, condition, and priority. When the condition is met, the traffic is forwarded to the target group. You must define a default rule for each listener, and you can add rules that specify different target groups based on the content of the request (a.k.a. content-based routing).

    3. Each target group (not auto-scaling group) routes requests to one or more registered targets, such as EC2 instances, using the protocol and port number that you specify. You can register a target with multiple target groups.

    4. You can configure health-checks on a per target group basis. Health-checks are performed on all targets registered to a target group that is specified in a listener rule for your load balancer.

    5. Access Logs for Your Application Load Balancer.

    6. An Application Load Balancer functions at the application layer of the OSI model. After the load balancer receives a request, it evaluates the listener rules (rather than simply sending traffic evenly to each target group) and then selects a target from the target group for the matching rule's action, using the round robin routing algorithm.

    7. Benefits over Classic ELB:

      1. Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request.

      2. Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header.

      3. Support for routing requests to multiple applications on a single EC2 instance. You can register each instance or IP address with the same target group using multiple ports.

      4. Support for containerized applications. Amazon ECS can select an unused port when scheduling a task and register the task with a target group using this port. (Dynamic port mapping).

      5. Using containers as targets. You can use a micro-services architecture to structure your application as services that you can develop and deploy independently. You can install one or more of these services on each EC2 instance, with each service accepting connections on a different port. You can use a single Application Load Balancer to route requests to all the services for your application. When you register an EC2 instance with a target group, you can register it multiple times; for each service, register the instance using the port for the service.

      6. Support for registering targets by IP address.

      7. Access logs contain additional information.
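The rule-evaluation behaviour described above (path-based routing, priority order, default rule) can be sketched with a few lines of Python — the rule priorities, path patterns, and target group names are hypothetical:

```python
from fnmatch import fnmatch

# Listener rules as (priority, path_pattern, target_group) tuples.
# Rules are evaluated in priority order; the first matching condition
# wins, and the default rule is the fallback.
rules = [
    (10, "/api/*", "api-target-group"),
    (20, "/img/*", "image-target-group"),
]
DEFAULT_TARGET_GROUP = "web-target-group"  # the listener's default rule

def select_target_group(path):
    """Return the target group whose rule condition matches the path."""
    for _priority, pattern, target_group in sorted(rules):
        if fnmatch(path, pattern):
            return target_group
    return DEFAULT_TARGET_GROUP

print(select_target_group("/api/users"))     # api-target-group
print(select_target_group("/img/logo.png"))  # image-target-group
print(select_target_group("/home"))          # web-target-group
```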

  31. Classic Load Balancer:

    1. A load balancer distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones. Your load balancer serves as a single point of contact for clients. This increases the availability of your application.

    2. A listener checks for connection requests from clients, using the protocol and port that you configure, and forwards requests to one or more registered instances using the protocol and port number that you configure. You add one or more listeners to your load balancer.

    3. You can configure health checks, which are used to monitor the health of the registered instances so that the load balancer only sends requests to the healthy instances.

    4. Access Logs for Your Classic Load Balancer.

      1. It can be used to capture detailed information about requests sent to your load balancer.

      2. There is no additional charge for access logs.

      3. The logs are stored in S3.

      4. Access logging is disabled by default.

    5. By default, the load balancer distributes traffic evenly across the Availability Zones that you enable for your load balancer (not based on rules evaluating, no rules). To distribute traffic evenly across all registered instances in all enabled Availability Zones, enable Cross-Zone load balancing on your load balancer. Cross-zone load balancing reduces the need to maintain equivalent numbers of instances in each enabled Availability Zone, and improves your application's ability to handle the loss of one or more instances. However, we still recommend that you maintain approximately equivalent numbers of instances in each Availability Zone for better fault tolerance. When you enable Connection Draining on a load balancer, any back-end instances that you deregister will complete requests that are in progress before deregistration. Likewise, if a back-end instance fails health checks, the load balancer will not send any new requests to the unhealthy instance but will allow existing requests to complete.

    6. Benefits over Application ELB:

      1. Support for EC2-Classic (an Application Load Balancer supports VPC only)

      2. Support for TCP and SSL listeners (an Application Load Balancer does not support the TCP protocol)

      3. Support for sticky sessions using application-generated cookies

  32. Amazon SWF ensures that a task is assigned only once and is never duplicated. With Amazon SQS, you need to handle duplicated messages, may also need to ensure that a message is processed only once, and must ensure the consumer deletes the message after processing it. For a question that asks what you should do if messages in SQS keep being processed more than once, your answer should be to use SWF. Using an SQS FIFO queue ensures exactly-once delivery of a message, but you still need to ensure the message is deleted after the consumer has processed it. Using a longer visibility timeout also cannot guarantee that.
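The consumer-side safeguards described above can be sketched as an idempotent consumer (the message shape and in-memory queue are hypothetical stand-ins for the SQS receive/delete calls): track processed message IDs so a redelivered standard-queue message is skipped, and remove each message from the queue once handled.

```python
# Idempotent-consumer sketch: a standard SQS queue may deliver the same
# message more than once, so the consumer deduplicates by message ID.
processed_ids = set()
results = []

# Two deliveries of the same message, as a standard queue may produce.
queue = [
    {"id": "m1", "body": "order-1"},
    {"id": "m1", "body": "order-1"},  # duplicate delivery
]

while queue:
    message = queue.pop(0)            # stands in for receive + delete
    if message["id"] in processed_ids:
        continue                      # already handled: skip duplicate
    results.append(message["body"].upper())  # the actual work
    processed_ids.add(message["id"])

print(results)  # ['ORDER-1'] -- processed exactly once
```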

  33. When defining a Health Check (ELB or Route53), in addition to the port number and protocol, you have to also define the page which will be used for the health check. If you don't have the page defined on the web server, then the health check will always fail.

  34. Health Check on EC2 instances performed by ELB.

    1. To discover the availability of your EC2 instances, a load balancer periodically sends pings, attempts connections, or sends requests to test the instances. These tests are called health checks. The status of instances that are healthy at the time of the health check is InService, and the status of any instances that are unhealthy at the time of the health check is OutOfService. The load balancer performs health checks on all registered instances, whether the instance is in a healthy state or an unhealthy state.

    2. The load balancer routes requests only to the healthy instances. When the load balancer determines that an instance is unhealthy, it stops routing requests to that instance. The load balancer resumes routing requests to the instance when it has been restored to a healthy state.

  35. SWF for order processing system. Some orders stuck for 3 weeks, why? SWF is awaiting human input from an activity task.

  36. ELB is designed for a single Region. EC2 instances can use multiple AZs to implement HA. Use an Auto Scaling group to implement elasticity.

  37. When an object is placed in S3, it is done via an HTTP POST or PUT object request. On success, you get a 200 HTTP response. But since a 200 response can also contain error information, a check of the MD5 checksum confirms whether the request was a success or not.
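The checksum comparison works roughly like this (for single-part, non-KMS-encrypted uploads the ETag S3 returns is the hex MD5 of the object body; multipart ETags use a different scheme, so this sketch covers the simple case only):

```python
import hashlib

def verify_upload(body, etag):
    """Compare the MD5 of the bytes we sent with the ETag S3 returned.
    Only valid for single-part, non-KMS-encrypted uploads."""
    return hashlib.md5(body).hexdigest() == etag.strip('"')

body = b"hello, s3"
etag = '"' + hashlib.md5(body).hexdigest() + '"'  # what S3 would return

print(verify_upload(body, etag))        # True: upload intact
print(verify_upload(b"garbled", etag))  # False: bytes did not match
```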

  38. Security Group is Stateful, but Network ACL is stateless. For example, for an EC2 instance to allow SSH, you can have below configuration:

    1. SG - SSH: Inbound - allow; outbound - deny. (SSH still works: because the SG is stateful, return traffic for the allowed inbound connection is permitted automatically.)

    2. NACL - SSH: Inbound - allow; outbound - allow. (An explicit outbound rule for the return traffic is required, because the NACL is stateless.)
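The difference can be modelled in a few lines (a deliberately simplified sketch: real NACL return traffic uses ephemeral ports, and rule numbers are ignored here). The security group keeps a connection-tracking table, so a reply to an allowed inbound flow passes with no outbound rule; the NACL checks every packet against its rules independently.

```python
# Simplified model of stateful (SG) vs stateless (NACL) filtering.
class SecurityGroup:
    """Stateful: replies to tracked inbound connections are allowed."""
    def __init__(self, inbound_allow):
        self.inbound_allow = set(inbound_allow)
        self.tracked = set()  # connection-tracking table

    def inbound(self, port):
        ok = port in self.inbound_allow
        if ok:
            self.tracked.add(port)  # remember the flow
        return ok

    def outbound_reply(self, port):
        return port in self.tracked  # allowed because the flow is tracked

class NetworkAcl:
    """Stateless: every packet needs an explicit matching rule."""
    def __init__(self, inbound_allow, outbound_allow):
        self.inbound_allow = set(inbound_allow)
        self.outbound_allow = set(outbound_allow)

    def inbound(self, port):
        return port in self.inbound_allow

    def outbound_reply(self, port):
        return port in self.outbound_allow  # needs an explicit rule

sg = SecurityGroup(inbound_allow={22})
acl = NetworkAcl(inbound_allow={22}, outbound_allow={22})
print(sg.inbound(22), sg.outbound_reply(22))    # True True
print(acl.inbound(22), acl.outbound_reply(22))  # True True
```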
