Answers to Assessment Test

  1. D. Use a custom IAM policy to grant permissions to a specific set of resources in your account. The ReadOnlyAccess IAM policy restricts write access but grants access to all resources within your account. AWS account credentials are unrestricted. Policies do not go in an SDK configuration file; they are enforced by AWS on the backend.
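
A minimal boto3 sketch of such a customer-managed policy, scoped to a single bucket; the bucket ARN and policy name are placeholders:

```python
import json

import boto3

iam = boto3.client("iam")

# Allow read/write on one specific bucket only, rather than the
# account-wide access granted by the ReadOnlyAccess managed policy.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-app-bucket/*",  # placeholder ARN
        }
    ],
}

iam.create_policy(
    PolicyName="ExampleAppS3Access",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
```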

  2. C. This is the simplest approach because only a single resource is in the wrong Region. Option A is a possible approach, but it is not the simplest one because it introduces cross-Region calls that may increase latency and incur cross-Region data transfer charges.

  3. A. Each Amazon VPC is placed in a specific Region and can span all the Availability Zones within that Region. Option B is incorrect because a subnet must be placed within the Region for the selected VPC. Option C is incorrect because edge locations are not available for subnets, and option D is incorrect because you cannot choose specific data centers.

  4. A. Even though each instance in an Amazon VPC has a unique private IP address, you could assign the same private IP address ranges to multiple Amazon VPCs. Therefore, two instances in two different Amazon VPCs in your account could end up with the same private IP address. Options B, C, and D are incorrect because within the same Amazon VPC, there is no duplication of private IP addresses.

  5. A, C. Amazon EBS-optimized instances reserve dedicated network bandwidth on the instance for I/O, and Provisioned IOPS SSD volumes provide the highest consistent IOPS. Option B is incorrect because instance store is not durable. Option D is incorrect because a previous-generation EBS volume offers an average of only 100 IOPS.

  6. C. Migrating the data to Amazon S3 Standard-IA after 30 days using a lifecycle policy is correct. The lifecycle policy automatically changes the storage class for objects older than 30 days. The Standard-IA storage class is for data that is accessed less frequently but still requires rapid access when needed. It offers the same high durability, high throughput, and low latency as Standard, with a lower per-gigabyte storage price and a per-gigabyte retrieval fee. Option A is incorrect because RRS provides a lower level of redundancy, and the question did not state that the customer is willing to reduce the redundancy level of the data; RRS does not replicate objects as many times as standard Amazon S3 storage and is intended for noncritical, reproducible data. Option B is incorrect because the fastest retrieval option for Amazon S3 Glacier is typically 3–5 hours, and the customer requires retrieval in minutes. Option D is incorrect because versioning increases the number of stored objects (and therefore cost) as new versions are uploaded, and the question did not mention a need for multiple versions of files.
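
A minimal boto3 sketch of such a lifecycle rule; the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to Standard-IA once it is 30 days old.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-media-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-standard-ia-after-30-days",
                "Filter": {"Prefix": ""},  # apply the rule to all objects
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"}
                ],
            }
        ]
    },
)
```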

  7. A. Option B is incorrect. You could use Snowmobile, but that would not be as cost effective because it is meant to be used for datasets of 10 PB or more. Option C is incorrect because uploading files directly over the internet to Amazon S3, even using Amazon S3 Transfer Acceleration, would take many months and would consume your on-premises bandwidth. Option D is incorrect because Amazon Kinesis Data Firehose would still be transferring over the internet and take months to complete while using your on-premises bandwidth.

  8. A. DynamoDB is a NoSQL data store that is a good alternative because of its scalability, high availability, and durability characteristics. Many platforms provide open source, drop-in replacement libraries that enable you to store native sessions in DynamoDB. DynamoDB is a suitable candidate for a session storage solution in a shared-nothing, distributed architecture.
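
A minimal boto3 sketch of DynamoDB as a session store; the table name, key, and attributes are hypothetical, and the table is assumed to exist with TTL enabled on the expires_at attribute:

```python
import time

import boto3

sessions = boto3.resource("dynamodb").Table("Sessions")  # hypothetical table

# Any application server can write a session item...
sessions.put_item(
    Item={
        "session_id": "abc123",
        "user_id": "user-42",
        "expires_at": int(time.time()) + 3600,  # aged out via DynamoDB TTL
    }
)

# ...and any other server can read it back on a later request,
# which is what makes this work in a shared-nothing architecture.
session = sessions.get_item(Key={"session_id": "abc123"}).get("Item")
print(session)
```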

  9. D. Amazon Redshift is the best choice for data warehouse workloads that typically span multiple data repositories and are at least 2 TB in size.

  10. C. Amazon RDS read replicas provide enhanced performance and durability for Amazon RDS instances. This replication feature makes it easy to scale out elastically beyond the capacity constraints of a single Amazon RDS instance for read-heavy database workloads. You can create one or more replicas of a given source Amazon RDS instance and serve high-volume application read traffic from multiple copies of your data, increasing aggregate read throughput.
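
A minimal boto3 sketch of adding a read replica; both instance identifiers are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of an existing source instance. The application
# then directs read-heavy traffic at the replica's endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="myapp-db-replica-1",  # placeholder replica name
    SourceDBInstanceIdentifier="myapp-db",      # placeholder source instance
)
```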

  11. C. DynamoDB is the best option. The question states a managed service, so this eliminates the Amazon EC2 service. Additionally, Amazon RDS and Amazon Redshift are SQL database products. The company is looking for a NoSQL product. DynamoDB is a managed NoSQL service.

  12. B. Automatic backups do not retain the backup after the database is deleted; therefore, option A is incorrect. A manual snapshot, by contrast, is retained until you explicitly delete it, which makes option B correct. Option C is incorrect. The AWS Database Migration Service is used to migrate databases from one source to another, which isn’t what you are trying to accomplish here. Option D is incorrect because you cannot SSH into the Amazon RDS database, which is an AWS managed service.

  13. D. The leader node acts as the SQL endpoint and receives queries from client applications, parses the queries, and develops query execution plans. Option A is incorrect because the compute nodes execute the query execution plan. However, the leader node is where you will submit the actual query. Options B and C are incorrect because there is no such thing as a cluster or master node in Amazon Redshift.

  14. B. Amazon Neptune is a managed graph database service, which can be used to build recommendation applications. Option A is incorrect because Amazon RDS is a managed relational database service and you are looking for a graph database. Option C is incorrect. Amazon ElastiCache is a managed caching service. Option D is incorrect. Amazon Redshift is a data warehouse service.

  15. B. A global secondary index enables you to use a different partition key or primary key in addition to a different sort key. Option A is incorrect because a local secondary index can only have a different sort key. Option C is incorrect. A new DynamoDB table would not solve the issue. Option D is incorrect because it is possible to accomplish this.
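
A minimal boto3 sketch of adding a global secondary index to an existing table; the table, index, and attribute names are hypothetical, and the table is assumed to use on-demand capacity (a provisioned table would also need a ProvisionedThroughput block):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Add a GSI whose partition key and sort key differ from the base table's.
dynamodb.update_table(
    TableName="Orders",  # hypothetical table
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "customer-date-index",
                "KeySchema": [
                    {"AttributeName": "customer_id", "KeyType": "HASH"},
                    {"AttributeName": "order_date", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
            }
        }
    ],
)
```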

  16. D. The application is configured to perform an eventually consistent read, which may not return the most up-to-date data. Option A is incorrect—increasing RCUs does not solve the underlying issue. Option B is incorrect because this is a read issue, not a write issue. Option C is incorrect. There is no need to refactor the entire application, because the issue is solvable.
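
A minimal boto3 sketch of requesting a strongly consistent read instead; the table and key are hypothetical:

```python
import boto3

orders = boto3.resource("dynamodb").Table("Orders")  # hypothetical table

# ConsistentRead=True returns the latest committed value, at the cost
# of consuming twice the read capacity of an eventually consistent read.
response = orders.get_item(
    Key={"order_id": "1234"},
    ConsistentRead=True,
)
item = response.get("Item")
```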

  17. B. DynamoDB Local is the downloadable version of DynamoDB that enables you to write and test applications without accessing the web service. Option A is incorrect. Although you can create a new table, there is a cost associated with this option, so it is not the best option. Option C is incorrect. Even though you can use another NoSQL database, your team is already using DynamoDB. This strategy would require them to learn a new database platform. Additionally, you would have to migrate the database to DynamoDB after development is done. Option D is incorrect for the same reasons as option C.
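
A minimal boto3 sketch of pointing the SDK at DynamoDB Local, assuming it is already running on the default port 8000:

```python
import boto3

# Target the local endpoint instead of the web service. DynamoDB Local
# ignores credentials, so dummy values are fine.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="us-east-1",
    aws_access_key_id="fake",
    aws_secret_access_key="fake",
)

print(list(dynamodb.tables.all()))  # lists tables in the local instance
```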

  18. D. The AWS Encryption SDK is a client-side library designed to streamline data security operations so that customers can follow encryption best practices. It supports the management of data keys, encryption and decryption activities, and the storage of encrypted data. Thus, option D is correct.

  19. A. Options B, C, and D refer to older, weaker encryption algorithms. By default, the AWS Encryption SDK uses the industry-recommended AES-256 algorithm.
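
A minimal sketch using the Python implementation of the AWS Encryption SDK (the version 2+ API); the KMS key ARN is a placeholder. The SDK's default algorithm suite is based on AES-256:

```python
import aws_encryption_sdk
from aws_encryption_sdk import CommitmentPolicy

client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)

# The master key provider wraps each data key with a KMS key.
key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
    key_ids=["arn:aws:kms:us-east-1:111122223333:key/example-key-id"]  # placeholder
)

ciphertext, _ = client.encrypt(source=b"sensitive data", key_provider=key_provider)
plaintext, _ = client.decrypt(source=ciphertext, key_provider=key_provider)
assert plaintext == b"sensitive data"
```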

  20. B. Encryption of Amazon EBS volumes is optional.

  21. B. Elastic Beanstalk automatically deletes your Amazon RDS instance when your environment is deleted and does not automatically retain the data. You must create a snapshot of the Amazon RDS instance to retain the data.

  22. D. Elastic Beanstalk cannot make automated changes to the policies attached to the service roles and instance roles.

  23. C. Option C is correct because if a revision does not pass a manual approval transition (either by expiring or by being rejected), it is treated as a failed revision. Successive revisions can then progress past this approval gate (if they are approved). Pipeline actions for a specific revision will not continue past a rejected approval gate, so option A is incorrect. A notification can be sent to an Amazon Simple Notification Service (Amazon SNS) topic that you specify when a revision reaches a manual approval gate, but no additional notification is sent if a change is rejected; therefore, option B is incorrect. Option D is incorrect, as AWS CodePipeline does not have a concept of “cloning” revisions.
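
A minimal boto3 sketch of acting on a manual approval gate; the pipeline, stage, and action names are placeholders, and the indices into the pipeline state assume the approval action's position:

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Fetch the token for the revision waiting at the approval gate.
state = codepipeline.get_pipeline_state(name="example-pipeline")
approval = state["stageStates"][1]["actionStates"][0]  # assumed positions
token = approval["latestExecution"]["token"]

# Rejecting marks the revision as failed; later revisions can still
# progress through this gate if they are approved.
codepipeline.put_approval_result(
    pipelineName="example-pipeline",
    stageName="Approval",
    actionName="ManualApproval",
    result={"summary": "Rejected by reviewer", "status": "Rejected"},
    token=token,
)
```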

  24. B. Though option D would be time-consuming, it is still possible to create files in the AWS CodeCommit console. Option A is a recommended strategy for migrating a repository containing a large number of files. Option C is also a valid strategy for smaller repositories. However, there is no way to sync files directly from an Amazon S3 bucket to an AWS CodeCommit repository. Thus, option B is correct.

  25. C. Option A is not recommended, because storing binary files in a Git-based repository incurs significant storage costs. Option B can work. However, you would have to pay additional data transfer costs any time a build is started. Option C is the most appropriate choice, because you can update the build container any time you need to change the files. Option D is incorrect, as AWS CodeBuild does not limit the size of files that can be used.

  26. C. Amazon Simple Storage Service (Amazon S3) bucket names are globally unique and cannot be changed after a bucket is created. Thus, options A and B are incorrect. Option D is incorrect because the resource is not being deleted, only updated. Option C is correct because you must create a replacement bucket when changing this property in AWS CloudFormation.

  27. B. Option B is correct because you can manage resources declared in a stack entirely within AWS CloudFormation by performing stack updates. Manually updating the resource outside of AWS CloudFormation (using the AWS Management Console, AWS CLI, or AWS SDK) will result in inconsistencies between the state expected by AWS CloudFormation and the actual resource state. This can cause future stack operations to fail. Thus, options A, C, and D are incorrect.

  28. C. Option A is incorrect because this is not the only time configure events run on instances in a stack. Options B and D are incorrect because the configure event does not run after a deploy event. AWS OpsWorks Stacks issues a configure lifecycle event on all instances in a stack any time a single instance goes offline or comes online. This is so that all instances in a stack can be made “aware” of the instance’s status. Thus, option C is correct.

  29. A, B, C. AWS OpsWorks Stacks includes the ability to manage AWS resources such as Elastic IP addresses, EBS volumes, and Amazon RDS instances. Thus, options A, B, and C are correct. Options D and E are incorrect because OpsWorks Stacks does not include any automatic integrations with Amazon ElastiCache or Amazon Redshift.

  30. A. Option A is correct because Simple Active Directory (Simple AD) can be used to authenticate users of Amazon WorkDocs. Options B, C, and D are incorrect because Amazon Cognito is an identity provider (IdP), and you cannot use Simple AD to authenticate users of Amazon EC2 or Amazon S3.

  31. B. Amazon Cognito acts as an identity provider (IdP) for mobile applications, eliminating the need to embed credentials in the application itself. Option A is incorrect because if a customer is already using Active Directory as their IdP, it is not good practice to create a second IdP to operate and manage. Option C is incorrect because an Amazon Aurora database that is used to track data does not assign policies. Option D is incorrect because Amazon Cognito controls an application’s access to an Amazon S3 bucket or object by supplying temporary credentials; you do not use it to directly control access to the bucket or object.

  32. A. Option A is correct because you want to ingest into Amazon Kinesis Data Streams, pass that into Amazon Kinesis Data Analytics, and finally feed that data into Amazon Kinesis Data Firehose. Option B is incorrect because Kinesis Data Firehose cannot run SQL queries. Option C is incorrect because Kinesis Data Streams cannot run SQL queries. Option D is incorrect because Kinesis Data Analytics cannot run SQL queries against data in Amazon SQS.
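
A minimal boto3 sketch of the first step in that pipeline, putting a record onto a Kinesis data stream; the stream name and payload are hypothetical:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

# Ingest one record; Kinesis Data Analytics can then run SQL over the
# stream, and Kinesis Data Firehose delivers the query output.
kinesis.put_record(
    StreamName="example-clickstream",  # placeholder stream name
    Data=json.dumps({"user": "u1", "action": "click"}).encode(),
    PartitionKey="u1",
)
```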

  33. D. Option D is correct because Amazon DynamoDB Streams allows Amazon DynamoDB to publish a message every time there is a change in a table. This solution is performant and cost-effective. Option A is incorrect because if you add an item to the orders table in DynamoDB, it does not automatically produce messages in Amazon Simple Queue Service (Amazon SQS). Options B and C are incorrect because if you check the orders table every minute or every second, it will degrade performance and increase costs.

  34. D. AWS Lambda supports Amazon DynamoDB Streams as an event source that it can poll. You can configure Lambda to poll the stream, look for changes, and trigger a function. Option A is incorrect because this can be accomplished with DynamoDB Streams. Option B is incorrect because this can be accomplished with Lambda. Option C is incorrect because DynamoDB is a supported event source for Lambda.
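
A minimal sketch of a Python Lambda handler invoked by a DynamoDB stream; what it does with each record is a placeholder:

```python
def lambda_handler(event, context):
    """Invoked by Lambda with a batch of DynamoDB stream records."""
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            # NewImage holds the newly written item in DynamoDB's
            # attribute-value format.
            new_item = record["dynamodb"]["NewImage"]
            print(f"New order: {new_item}")  # placeholder reaction
    return {"processed": len(event["Records"])}
```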

  35. A. AWS Lambda uses containers to operate and is a managed service—you cannot access the underlying infrastructure. This is a benefit because your organization does not need to worry about security patching and other system maintenance. Option B is incorrect—you cannot access the infrastructure. Recall that Lambda is serverless. Option C is incorrect. AWS Support cannot provide access to the direct environment. Option D is incorrect—the Solutions Architect cannot provide direct access to the environment.

  36. D. AWS Lambda pricing is determined by three factors: the amount of memory allocated, the amount of compute time spent on a function (in 100-ms increments), and the number of times you execute or trigger a function. Options A, B, and C are incorrect because each captures only part of this pricing model; all three factors apply together.
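
A back-of-the-envelope sketch of this pricing model; the rates are illustrative assumptions, so check current AWS pricing for real values:

```python
# Assumed rates (placeholders; consult the AWS pricing page).
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_REQUEST = 0.0000002

memory_gb = 512 / 1024   # memory allocated to the function
duration_s = 0.3         # billed duration, rounded up in 100-ms increments
invocations = 1_000_000  # number of executions/triggers

compute_cost = memory_gb * duration_s * invocations * PRICE_PER_GB_SECOND
request_cost = invocations * PRICE_PER_REQUEST
print(f"Estimated cost: ${compute_cost + request_cost:.2f}")  # about $2.70
```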

  37. C, D. Option A is incorrect because AWS CloudFormation is a service that helps you model and set up your AWS resources. Option B is incorrect because you use Amazon S3 as a storage tool for the internet. Options C and D are correct because they are both caching tools.

  38. C. Option A is incorrect, as authorizers enable you to control access to your APIs by using Amazon Cognito or an AWS Lambda function. Option B is incorrect because API keys identify and meter the customers of your API, which is useful for selling access to it. Option C is the correct answer. You can use stages to create separate paths with their own endpoints, such as development and production. Option D is incorrect, as CORS (cross-origin resource sharing) allows a web application in one domain to call an API in another domain.

  39. D. API Gateway supports all of the methods listed. GET, POST, PUT, PATCH, DELETE, HEAD, and OPTIONS are all supported methods.

  40. D. With Amazon API Gateway, you can enable authorization for a particular method with IAM policies, AWS Lambda custom authorizers, and Amazon Cognito user pools. Options A, B, and C are each valid mechanisms, but option D is the best answer because it combines all of them.

  41. B. Option A is incorrect. Although AWS SAM provides the YAML/JSON template that defines the function, the template alone does not let you test the AWS Lambda function locally. Option B is the correct answer; the AWS SAM CLI allows you to test the Lambda function locally. Option C is incorrect. AWS CloudFormation is used to deploy resources to the AWS Cloud. Option D is incorrect because the AWS SAM CLI is the tool for testing Lambda functions locally.

  42. D. Option A is incorrect. Amazon EC2 is a virtual machine service. Option B is incorrect because Amazon ElastiCache deploys clusters of machines, which you are then responsible for scaling. Option C is incorrect because Elastic Beanstalk deploys full-stack applications by using Amazon EC2. Option D is correct because Amazon DynamoDB can store session state in a NoSQL table. This option is also serverless.

  43. B. With Amazon Cognito, you can create user pools to store user profile information, including attributes such as user name, phone number, and address. Option A is incorrect. Amazon CloudFront is a content delivery network (CDN). Option C is incorrect. Amazon Kinesis is a service that you can use to collect, process, and analyze streaming data in real time. Option D is incorrect. AWS Lambda lets you create custom functions for compute processing.

  44. A, B, C. Option D is incorrect because when compared to the other options, a bank balance is not likely to be stored in a cache; it is probably not data that is retrieved as frequently as the others. Options A, B, and C are all better data candidates to cache because multiple users are more likely to access them repeatedly. However, you could also cache the bank account balance for shorter periods if the database query is not performing well.

  45. A, D. Options A and D are correct because Amazon ElastiCache supports both the Redis and Memcached open source caching engines. Option B is incorrect because MySQL is not a caching engine—it is a relational database engine. Option C is incorrect because Couchbase is a NoSQL database and not one of the caching engines that ElastiCache supports.
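
A minimal cache-aside sketch against a Redis endpoint, using the redis-py client; the endpoint hostname and the database-loading stub are hypothetical:

```python
import redis

# Placeholder for an ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.example.use1.cache.amazonaws.com", port=6379)

def load_product_from_db(product_id):
    # Stand-in for a real database query.
    return f"product-{product_id}-details".encode()

def get_product(product_id):
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached
    value = load_product_from_db(product_id)
    cache.setex(key, 300, value)  # cache for 5 minutes
    return value
```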

  46. A. Amazon CloudWatch does not aggregate data across Regions; therefore, option A is correct.

  47. A, B, D. An Amazon CloudWatch alarm changes to a state other than INSUFFICIENT_DATA only when the alarm resource has had sufficient time to initialize and there is sufficient data available for the specified metric and period. Option C is incorrect because permissions for sending metrics to CloudWatch are the responsibility of the resource sending the data. Option E is incorrect because the alarm does not create successfully unless it has a valid period.

  48. C. General-purpose instances provide a balance of compute, memory, and networking resources. T2 instances are a low-cost option that provides a small amount of CPU resources that can be increased in short bursts when additional cycles are available. They are well suited for lower-throughput applications, such as administrative applications or low-traffic websites. For more details on the instance types, see https://aws.amazon.com/ec2/instance-types/.

  49. A. AWS Cost Explorer reflects the cost and usage of Amazon Elastic Compute Cloud (Amazon EC2) instances over the most recent 13 months and forecasts potential spending for the next 3 months. By using Cost Explorer, you can examine patterns on how much you spend on AWS resources over time, identify areas that need further inquiry, and view trends that help you understand your costs. In addition, you can specify time ranges for the data and view time data by day or by month. Option D is incorrect because Amazon EC2 Auto Scaling helps you to maintain application availability and enables you to add or remove EC2 instances automatically according to conditions that you define. It does not give you insights into costs incurred.

  50. B. You can use tags to control permissions. Using IAM policies, you can enforce tagging to gain precise control over access to resources, establish ownership, and enable accurate cost allocation. Option A is incorrect because manual tracking eventually becomes unmanageable, given the scale and rate at which resources are deployed in a successful organization. Options C and D are incorrect because Amazon CloudWatch and AWS Cost Explorer are unrelated to access control, and these tools monitor resources only after they are created.

  51. D. You can choose among the three payment options when you purchase a Standard or Convertible Reserved Instance. With the All Upfront option, you pay for the entire Reserved Instance term with one upfront payment. This option provides you with the largest discount compared to On-Demand Instance pricing. With the Partial Upfront option, you make a low upfront payment and then are charged a discounted hourly rate for the instance for the duration of the Reserved Instance term. The No Upfront option requires no upfront payment and provides a discounted hourly rate for the duration of the term.

  52. B. The performance of the transaction-heavy workloads depends primarily on IOPS; SSD-backed volumes are designed for transactional, IOPS-intensive database workloads, boot volumes, and workloads that require high IOPS. For more information, see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html.

  53. D. Options A, B, and C help in building a high-speed data storage layer that stores a subset of data. This data is typically transient in nature so that future requests for that data are served up faster than is possible by accessing the data’s primary storage location. Option D only supplements the setup of your own caching mechanism, and that is not the preferred solution for this scenario. For more information, see https://aws.amazon.com/caching/aws-caching/.

  54. C. Keeping data together is a basic characteristic of a NoSQL database such as Amazon DynamoDB. Keeping related data in proximity has a major impact on cost and performance. Instead of distributing related data items across multiple tables, keep related items in your NoSQL system as close together as possible. Options A, B, and D are typical characteristics of a relational database.

  55. C. The status code option suggests an inefficient partition key, because the small number of possible status codes leads to an uneven distribution of data and causes request throttling. Options A, B, and D suggest efficient partition keys because of their distinct values, which lead to an even distribution of the data.

  56. A, B, C. Using a serverless approach means not having to manage servers and not incurring compute costs when there is no user traffic. This is achieved while still offering instant scale to meet high demand, such as a flash sale on an ecommerce site or a social media mention that drives a sudden wave of traffic. Option D is incorrect because AWS Lambda runs your code on a high-availability compute infrastructure and performs all the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code and security patch deployment, and code monitoring and logging. Option E is incorrect because you can configure Lambda functions to run up to 15 minutes per execution. As a best practice, set the timeout value based on your expected execution time to prevent your function from running longer than intended.
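
A minimal boto3 sketch of the timeout best practice mentioned for option E; the function name is a placeholder:

```python
import boto3

lambda_client = boto3.client("lambda")

# Set the timeout based on expected execution time, below the
# 900-second (15-minute) maximum.
lambda_client.update_function_configuration(
    FunctionName="example-function",  # placeholder function name
    Timeout=60,  # seconds
)
```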

  57. A, B. Option A is correct. Amazon S3 analytics storage class analysis observes data access patterns to help you decide when to transition the right data to the right storage class, such as moving less frequently accessed STANDARD storage to the STANDARD_IA storage class. Option B is correct. A cost allocation tag is a key-value pair that you associate with an Amazon S3 bucket. To manage storage data most effectively, you can use these tags to categorize your Amazon S3 objects and filter on them in your data lifecycle policies. Options C and D are incorrect because those options focus on efficient data transfer rather than storage cost management. Option E is incorrect. With AWS Budgets, you can set custom budgets that alert you when your costs or usage exceed (or are forecast to exceed) your budgeted amount.
