PDF Only
Free Updates Up to 90 Days
- Data-Engineer-Associate Dumps PDF
- 130 Questions
- Updated On November 18, 2024
PDF + Test Engine
Free Updates Up to 90 Days
- Data-Engineer-Associate Question Answers
- 130 Questions
- Updated On November 18, 2024
Test Engine
Free Updates Up to 90 Days
- Data-Engineer-Associate Practice Questions
- 130 Questions
- Updated On November 18, 2024
How to Pass the Amazon Data-Engineer-Associate Exam with the Help of Dumps?
DumpsPool provides the high-quality resources you have been searching for without success, so it's time to stop stressing and get ready for the exam. Our Online Test Engine gives you the guidance you need to pass the certification exam. We guarantee top-grade results because every topic is covered in a precise and understandable manner. Our expert team prepared the latest Amazon Data-Engineer-Associate Dumps to meet your training needs, and they come in two formats: Dumps PDF and Online Test Engine.
How Do I Know Amazon Data-Engineer-Associate Dumps Are Worth It?
Did we mention that our latest Data-Engineer-Associate Dumps PDF is also available as an Online Test Engine? And that is only the beginning. Of all the features you are offered here at DumpsPool, the money-back guarantee has to be the best one, so you never have to worry about your payment. Let us explore the other reasons you would want to buy from us: besides affordable Real Exam Dumps, you also get three months of free updates.
You can easily scroll through our large catalog of certification exams and pick any exam to start your training; DumpsPool isn't limited to just Amazon exams. We know our customers need the support of an authentic and reliable resource, so we make sure there is never any outdated content in our study resources. Our expert team keeps everything up to the mark by watching every single update. Our main focus is helping you understand the real exam format so you can pass the exam more easily.
IT Students Are Using Our AWS Certified Data Engineer - Associate (DEA-C01) Dumps Worldwide!
It is a well-established fact that certification exams can't be conquered without some help from experts, and that is exactly the point of using AWS Certified Data Engineer - Associate (DEA-C01) Practice Question Answers. You are surrounded by IT experts who have been through what you are about to face and know better. DumpsPool's 24/7 customer service ensures you are in touch with these experts whenever needed. Our 100% success rate and validity around the world make us the most trusted resource candidates use. The updated Dumps PDF helps you pass the exam on the first attempt, and with the money-back guarantee you can buy with confidence: you can claim a refund if you do not pass the exam.
How to Get Data-Engineer-Associate Real Exam Dumps?
Getting access to the real exam dumps is as easy as pressing a button, literally! There are various resources available online, but the majority of them sell scams or copied content. So, if you are going to attempt the Data-Engineer-Associate exam, you need to be sure you are buying the right kind of dumps. All the Dumps PDF available on DumpsPool are as unique and up to date as they can be, and our Practice Question Answers are tested and approved by professionals, making this one of the most authentic resources available on the internet. Our experts have made sure the Online Test Engine is free from outdated or fake content, repeated questions, and false or vague information. We make every penny count, and you leave our platform fully satisfied!
Frequently Asked Questions
Question # 1
A data engineer needs Amazon Athena queries to finish faster. The data engineer notices that all the files the Athena queries use are currently stored in uncompressed .csv format. The data engineer also notices that users perform most queries by selecting a specific column. Which solution will MOST speed up the Athena query performance?
A. Change the data format from .csv to JSON format. Apply Snappy compression.
B. Compress the .csv files by using Snappy compression.
C. Change the data format from .csv to Apache Parquet. Apply Snappy compression.
D. Compress the .csv files by using gzip compression.
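For readers who want to see what converting .csv data to Snappy-compressed Parquet (as described in option C) can look like in practice, here is a minimal sketch that runs an Athena CTAS statement through boto3. The bucket, database, and table names are hypothetical placeholders, not part of the exam question.

    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Rewrite the .csv-backed table as columnar, Snappy-compressed Parquet so
    # queries that select one column scan far less data.
    ctas = """
    CREATE TABLE sales_parquet
    WITH (
        format = 'PARQUET',
        write_compression = 'SNAPPY',
        external_location = 's3://example-analytics-bucket/sales_parquet/'
    ) AS
    SELECT order_id, order_total, order_date
    FROM sales_csv
    """

    athena.start_query_execution(
        QueryString=ctas,
        QueryExecutionContext={"Database": "example_db"},
        ResultConfiguration={"OutputLocation": "s3://example-analytics-bucket/athena-results/"},
    )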
Question # 2
A company stores data in a data lake that is in Amazon S3. Some data that the company stores in the data lake contains personally identifiable information (PII). Multiple user groups need to access the raw data. The company must ensure that user groups can access only the PII that they require. Which solution will meet these requirements with the LEAST effort?
A. Use Amazon Athena to query the data. Set up AWS Lake Formation and create data filters to establish levels of access for the company's IAM roles. Assign each user to the IAM role that matches the user's PII access requirements.
B. Use Amazon QuickSight to access the data. Use column-level security features in QuickSight to limit the PII that users can retrieve from Amazon S3 by using Amazon Athena. Define QuickSight access levels based on the PII access requirements of the users.
C. Build a custom query builder UI that will run Athena queries in the background to access the data. Create user groups in Amazon Cognito. Assign access levels to the user groups based on the PII access requirements of the users.
D. Create IAM roles that have different levels of granular access. Assign the IAM roles to IAM user groups. Use an identity-based policy to assign access levels to user groups at the column level.
Question # 3
A company receives call logs as Amazon S3 objects that contain sensitive customer information. The company must protect the S3 objects by using encryption. The company must also use encryption keys that only specific employees can access. Which solution will meet these requirements with the LEAST effort?
A. Use an AWS CloudHSM cluster to store the encryption keys. Configure the process that writes to Amazon S3 to make calls to CloudHSM to encrypt and decrypt the objects. Deploy an IAM policy that restricts access to the CloudHSM cluster.
B. Use server-side encryption with customer-provided keys (SSE-C) to encrypt the objects that contain customer information. Restrict access to the keys that encrypt the objects.
C. Use server-side encryption with AWS KMS keys (SSE-KMS) to encrypt the objects that contain customer information. Configure an IAM policy that restricts access to the KMS keys that encrypt the objects.
D. Use server-side encryption with Amazon S3 managed keys (SSE-S3) to encrypt the objects that contain customer information. Configure an IAM policy that restricts access to the Amazon S3 managed keys that encrypt the objects.
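As an illustration of the SSE-KMS approach mentioned in option C, the sketch below sets a KMS key as the default encryption for a bucket; the bucket name and key ARN are hypothetical, and the key policy (managed separately in AWS KMS) is what limits decryption to specific employees.

    import boto3

    s3 = boto3.client("s3")

    # Default-encrypt every new object in the call-logs bucket with a KMS key.
    s3.put_bucket_encryption(
        Bucket="example-call-logs",
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
                    },
                    "BucketKeyEnabled": True,  # reduces KMS request costs
                }
            ]
        },
    )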
Question # 4
A data engineer needs to maintain a central metadata repository that users access through Amazon EMR and Amazon Athena queries. The repository needs to provide the schema and properties of many tables. Some of the metadata is stored in Apache Hive. The data engineer needs to import the metadata from Hive into the central metadata repository. Which solution will meet these requirements with the LEAST development effort?
A. Use Amazon EMR and Apache Ranger.
B. Use a Hive metastore on an EMR cluster.
C. Use the AWS Glue Data Catalog.
D. Use a metastore on an Amazon RDS for MySQL DB instance.
Question # 5
A company is planning to use a provisioned Amazon EMR cluster that runs Apache Spark jobs to perform big data analysis. The company requires high reliability. A big data team must follow best practices for running cost-optimized and long-running workloads on Amazon EMR. The team must find a solution that will maintain the company's current level of performance. Which combination of resources will meet these requirements MOST cost-effectively? (Choose two.)
A. Use Hadoop Distributed File System (HDFS) as a persistent data store.
B. Use Amazon S3 as a persistent data store.
C. Use x86-based instances for core nodes and task nodes.
D. Use Graviton instances for core nodes and task nodes.
E. Use Spot Instances for all primary nodes.
Question # 6
A company wants to implement real-time analytics capabilities. The company wants to use Amazon Kinesis Data Streams and Amazon Redshift to ingest and process streaming data at the rate of several gigabytes per second. The company wants to derive near real-time insights by using existing business intelligence (BI) and analytics tools. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Kinesis Data Streams to stage data in Amazon S3. Use the COPY command to load data from Amazon S3 directly into Amazon Redshift to make the data immediately available for real-time analysis.
B. Access the data from Kinesis Data Streams by using SQL queries. Create materialized views directly on top of the stream. Refresh the materialized views regularly to query the most recent stream data.
C. Create an external schema in Amazon Redshift to map the data from Kinesis Data Streams to an Amazon Redshift object. Create a materialized view to read data from the stream. Set the materialized view to auto refresh.
D. Connect Kinesis Data Streams to Amazon Kinesis Data Firehose. Use Kinesis Data Firehose to stage the data in Amazon S3. Use the COPY command to load the data from Amazon S3 to a table in Amazon Redshift.
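Option C refers to Amazon Redshift streaming ingestion. A rough sketch of the SQL involved, submitted here through the Redshift Data API, might look like the following; the stream, schema, role, and cluster names are placeholders.

    import boto3

    redshift_data = boto3.client("redshift-data")

    statements = [
        # External schema that maps the Kinesis data stream into Redshift.
        """
        CREATE EXTERNAL SCHEMA kinesis_schema
        FROM KINESIS
        IAM_ROLE 'arn:aws:iam::111122223333:role/example-redshift-streaming-role'
        """,
        # Materialized view over the stream; AUTO REFRESH keeps it near real time.
        """
        CREATE MATERIALIZED VIEW clickstream_mv AUTO REFRESH YES AS
        SELECT approximate_arrival_timestamp,
               JSON_PARSE(FROM_VARBYTE(kinesis_data, 'utf-8')) AS payload
        FROM kinesis_schema."example-stream"
        """,
    ]

    for sql in statements:
        redshift_data.execute_statement(
            ClusterIdentifier="example-cluster",  # hypothetical provisioned cluster
            Database="dev",
            DbUser="awsuser",
            Sql=sql,
        )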
Question # 7
A company stores details about transactions in an Amazon S3 bucket. The company wants to log all writes to the S3 bucket into another S3 bucket that is in the same AWS Region. Which solution will meet this requirement with the LEAST operational effort?
A. Configure an S3 Event Notifications rule for all activities on the transactions S3 bucket to invoke an AWS Lambda function. Program the Lambda function to write the event to Amazon Kinesis Data Firehose. Configure Kinesis Data Firehose to write the event to the logs S3 bucket.
B. Create a trail of management events in AWS CloudTrail. Configure the trail to receive data from the transactions S3 bucket. Specify an empty prefix and write-only events. Specify the logs S3 bucket as the destination bucket.
C. Configure an S3 Event Notifications rule for all activities on the transactions S3 bucket to invoke an AWS Lambda function. Program the Lambda function to write the events to the logs S3 bucket.
D. Create a trail of data events in AWS CloudTrail. Configure the trail to receive data from the transactions S3 bucket. Specify an empty prefix and write-only events. Specify the logs S3 bucket as the destination bucket.
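Option D relies on a CloudTrail trail that captures S3 data events. A minimal sketch of scoping an existing trail to write-only data events on a transactions bucket is shown below; the trail and bucket names are hypothetical.

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Assumes a trail named "s3-write-trail" already exists and delivers its
    # logs to the logs bucket; this call restricts it to write-only S3 data
    # events on the transactions bucket.
    cloudtrail.put_event_selectors(
        TrailName="s3-write-trail",
        EventSelectors=[
            {
                "ReadWriteType": "WriteOnly",
                "IncludeManagementEvents": False,
                "DataResources": [
                    {
                        "Type": "AWS::S3::Object",
                        # A bucket ARN ending in "/" (empty prefix) covers every object.
                        "Values": ["arn:aws:s3:::example-transactions-bucket/"],
                    }
                ],
            }
        ],
    )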
Question # 8
A data engineer has a one-time task to read data from objects that are in Apache Parquet format in an Amazon S3 bucket. The data engineer needs to query only one column of the data. Which solution will meet these requirements with the LEAST operational overhead?
A. Configure an AWS Lambda function to load data from the S3 bucket into a pandas dataframe. Write a SQL SELECT statement on the dataframe to query the required column.
B. Use S3 Select to write a SQL SELECT statement to retrieve the required column from the S3 objects.
C. Prepare an AWS Glue DataBrew project to consume the S3 objects and to query the required column.
D. Run an AWS Glue crawler on the S3 objects. Use a SQL SELECT statement in Amazon Athena to query the required column.
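Option B refers to S3 Select. A short sketch of pulling a single column from one Parquet object with the SelectObjectContent API might look like this; the bucket, key, and column names are hypothetical.

    import boto3

    s3 = boto3.client("s3")

    # Query one column of a single Parquet object without downloading the file.
    response = s3.select_object_content(
        Bucket="example-bucket",
        Key="data/events.parquet",
        ExpressionType="SQL",
        Expression="SELECT s.user_id FROM S3Object s",
        InputSerialization={"Parquet": {}},
        OutputSerialization={"CSV": {}},
    )

    # The result arrives as an event stream of record chunks.
    for event in response["Payload"]:
        if "Records" in event:
            print(event["Records"]["Payload"].decode("utf-8"), end="")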
Question # 9
A retail company has a customer data hub in an Amazon S3 bucket. Employees from many countries use the data hub to support company-wide analytics. A governance team must ensure that the company's data analysts can access data only for customers who are within the same country as the analysts. Which solution will meet these requirements with the LEAST operational effort?
A. Create a separate table for each country's customer data. Provide access to each analyst based on the country that the analyst serves.
B. Register the S3 bucket as a data lake location in AWS Lake Formation. Use the Lake Formation row-level security features to enforce the company's access policies.
C. Move the data to AWS Regions that are close to the countries where the customers are. Provide access to each analyst based on the country that the analyst serves.
D. Load the data into Amazon Redshift. Create a view for each country. Create separate IAM roles for each country to provide access to data from each country. Assign the appropriate roles to the analysts.
Question # 10
A company uses Amazon RDS to store transactional data. The company runs an RDS DB instance in a private subnet. A developer wrote an AWS Lambda function with default settings to insert, update, or delete data in the DB instance. The developer needs to give the Lambda function the ability to connect to the DB instance privately without using the public internet. Which combination of steps will meet this requirement with the LEAST operational overhead? (Choose two.)
A. Turn on the public access setting for the DB instance.
B. Update the security group of the DB instance to allow only Lambda function invocations on the database port.
C. Configure the Lambda function to run in the same subnet that the DB instance uses.
D. Attach the same security group to the Lambda function and the DB instance. Include a self-referencing rule that allows access through the database port.
E. Update the network ACL of the private subnet to include a self-referencing rule that allows access through the database port.
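Option D describes a shared security group with a self-referencing rule. A rough sketch of adding such a rule is shown below; the security group ID is hypothetical, and port 3306 assumes a MySQL-compatible engine.

    import boto3

    ec2 = boto3.client("ec2")

    shared_sg = "sg-0123456789abcdef0"  # hypothetical group attached to both Lambda and RDS

    # Self-referencing rule: members of the group may reach the database port of
    # other members, so the Lambda function can reach RDS privately inside the VPC.
    ec2.authorize_security_group_ingress(
        GroupId=shared_sg,
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 3306,  # adjust for your engine's port
                "ToPort": 3306,
                "UserIdGroupPairs": [{"GroupId": shared_sg}],
            }
        ],
    )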
Question # 11
A company has five offices in different AWS Regions. Each office has its own human resources (HR) department that uses a unique IAM role. The company stores employee records in a data lake that is based on Amazon S3 storage. A data engineering team needs to limit access to the records. Each HR department should be able to access records for only employees who are within the HR department's Region. Which combination of steps should the data engineering team take to meet this requirement with the LEAST operational overhead? (Choose two.)
A. Use data filters for each Region to register the S3 paths as data locations.
B. Register the S3 path as an AWS Lake Formation location.
C. Modify the IAM roles of the HR departments to add a data filter for each department's Region.
D. Enable fine-grained access control in AWS Lake Formation. Add a data filter for each Region.
E. Create a separate S3 bucket for each Region. Configure an IAM policy to allow S3 access. Restrict access based on Region.
Question # 12
A healthcare company uses Amazon Kinesis Data Streams to stream real-time health data from wearable devices, hospital equipment, and patient records. A data engineer needs to find a solution to process the streaming data. The data engineer needs to store the data in an Amazon Redshift Serverless warehouse. The solution must support near real-time analytics of the streaming data and the previous day's data. Which solution will meet these requirements with the LEAST operational overhead?
A. Load data into Amazon Kinesis Data Firehose. Load the data into Amazon Redshift.
B. Use the streaming ingestion feature of Amazon Redshift.
C. Load the data into Amazon S3. Use the COPY command to load the data into Amazon Redshift.
D. Use the Amazon Aurora zero-ETL integration with Amazon Redshift.
Question # 13
A company is migrating a legacy application to an Amazon S3 based data lake. A data engineer reviewed data that is associated with the legacy application. The data engineer found that the legacy data contained some duplicate information. The data engineer must identify and remove duplicate information from the legacy application data. Which solution will meet these requirements with the LEAST operational overhead?
A. Write a custom extract, transform, and load (ETL) job in Python. Use the DataFrame drop_duplicates() function by importing the Pandas library to perform data deduplication.
B. Write an AWS Glue extract, transform, and load (ETL) job. Use the FindMatches machine learning (ML) transform to transform the data to perform data deduplication.
C. Write a custom extract, transform, and load (ETL) job in Python. Import the Python dedupe library. Use the dedupe library to perform data deduplication.
D. Write an AWS Glue extract, transform, and load (ETL) job. Import the Python dedupe library. Use the dedupe library to perform data deduplication.
Question # 14
A company needs to build a data lake in AWS. The company must provide row-level data access and column-level data access to specific teams. The teams will access the data by using Amazon Athena, Amazon Redshift Spectrum, and Apache Hive from Amazon EMR. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon S3 for data lake storage. Use S3 access policies to restrict data access by rows and columns. Provide data access through Amazon S3.
B. Use Amazon S3 for data lake storage. Use Apache Ranger through Amazon EMR to restrict data access by rows and columns. Provide data access by using Apache Pig.
C. Use Amazon Redshift for data lake storage. Use Redshift security policies to restrict data access by rows and columns. Provide data access by using Apache Spark and Amazon Athena federated queries.
D. Use Amazon S3 for data lake storage. Use AWS Lake Formation to restrict data access by rows and columns. Provide data access through AWS Lake Formation.
Question # 15
A company uses an Amazon Redshift provisioned cluster as its database. The Redshift cluster has five reserved ra3.4xlarge nodes and uses key distribution. A data engineer notices that one of the nodes frequently has a CPU load over 90%. SQL queries that run on the node are queued. The other four nodes usually have a CPU load under 15% during daily operations. The data engineer wants to maintain the current number of compute nodes. The data engineer also wants to balance the load more evenly across all five compute nodes. Which solution will meet these requirements?
A. Change the sort key to be the data column that is most often used in a WHERE clause of the SQL SELECT statement.
B. Change the distribution key to the table column that has the largest dimension.
C. Upgrade the reserved node from ra3.4xlarge to ra3.16xlarge.
D. Change the primary key to be the data column that is most often used in a WHERE clause of the SQL SELECT statement.
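Option B involves changing how the table is distributed. As an illustration, Redshift can change a table's distribution style or key in place with ALTER TABLE; a sketch submitted through the Redshift Data API (cluster, database, and table names are hypothetical) might be:

    import boto3

    redshift_data = boto3.client("redshift-data")

    # Moving a skewed table to EVEN distribution (or to a higher-cardinality
    # DISTKEY) spreads rows, and therefore CPU load, across all compute nodes.
    for sql in [
        "ALTER TABLE sales ALTER DISTSTYLE EVEN",
        # Alternative: pick a high-cardinality join column as the new key.
        # "ALTER TABLE sales ALTER DISTKEY order_id",
    ]:
        redshift_data.execute_statement(
            ClusterIdentifier="example-cluster",
            Database="dev",
            DbUser="awsuser",
            Sql=sql,
        )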
Question # 16
A company is developing an application that runs on Amazon EC2 instances. Currently, the data that the application generates is temporary. However, the company needs to persist the data, even if the EC2 instances are terminated. A data engineer must launch new EC2 instances from an Amazon Machine Image (AMI) and configure the instances to preserve the data. Which solution will meet this requirement?
A. Launch new EC2 instances by using an AMI that is backed by an EC2 instance store volume that contains the application data. Apply the default settings to the EC2 instances.
B. Launch new EC2 instances by using an AMI that is backed by a root Amazon Elastic Block Store (Amazon EBS) volume that contains the application data. Apply the default settings to the EC2 instances.
C. Launch new EC2 instances by using an AMI that is backed by an EC2 instance store volume. Attach an Amazon Elastic Block Store (Amazon EBS) volume to contain the application data. Apply the default settings to the EC2 instances.
D. Launch new EC2 instances by using an AMI that is backed by an Amazon Elastic Block Store (Amazon EBS) volume. Attach an additional EC2 instance store volume to contain the application data. Apply the default settings to the EC2 instances.
Question # 17
A data engineer must ingest a source of structured data that is in .csv format into an Amazon S3 data lake. The .csv files contain 15 columns. Data analysts need to run Amazon Athena queries on one or two columns of the dataset. The data analysts rarely query the entire file. Which solution will meet these requirements MOST cost-effectively?
A. Use an AWS Glue PySpark job to ingest the source data into the data lake in .csv format.
B. Create an AWS Glue extract, transform, and load (ETL) job to read from the .csv structured data source. Configure the job to ingest the data into the data lake in JSON format.
C. Use an AWS Glue PySpark job to ingest the source data into the data lake in Apache Avro format.
D. Create an AWS Glue extract, transform, and load (ETL) job to read from the .csv structured data source. Configure the job to write the data into the data lake in Apache Parquet format.
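Option D describes a Glue ETL job that writes Parquet. A trimmed-down sketch of a Glue PySpark script doing that conversion could look like the following; the catalog database, table name, and output path are hypothetical.

    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    glue_context = GlueContext(SparkContext.getOrCreate())

    # Read the crawled .csv source table (hypothetical catalog names).
    source = glue_context.create_dynamic_frame.from_catalog(
        database="example_db", table_name="raw_csv_events"
    )

    # Write columnar Parquet so Athena scans only the one or two queried columns.
    glue_context.write_dynamic_frame.from_options(
        frame=source,
        connection_type="s3",
        connection_options={"path": "s3://example-data-lake/events_parquet/"},
        format="parquet",
    )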
Question # 18
A data engineer uses Amazon Redshift to run resource-intensive analytics processes once every month. Every month, the data engineer creates a new Redshift provisioned cluster. The data engineer deletes the Redshift provisioned cluster after the analytics processes are complete every month. Before the data engineer deletes the cluster each month, the data engineer unloads backup data from the cluster to an Amazon S3 bucket. The data engineer needs a solution to run the monthly analytics processes that does not require the data engineer to manage the infrastructure manually. Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Step Functions to pause the Redshift cluster when the analytics processes are complete and to resume the cluster to run new processes every month.
B. Use Amazon Redshift Serverless to automatically process the analytics workload.
C. Use the AWS CLI to automatically process the analytics workload.
D. Use AWS CloudFormation templates to automatically process the analytics workload.
Question # 19
A financial company wants to use Amazon Athena to run on-demand SQL queries on a petabyte-scale dataset to support a business intelligence (BI) application. An AWS Glue job that runs during non-business hours updates the dataset once every day. The BI application has a standard data refresh frequency of 1 hour to comply with company policies. A data engineer wants to cost optimize the company's use of Amazon Athena without adding any additional infrastructure costs. Which solution will meet these requirements with the LEAST operational overhead?
A. Configure an Amazon S3 Lifecycle policy to move data to the S3 Glacier Deep Archive storage class after 1 day.
B. Use the query result reuse feature of Amazon Athena for the SQL queries.
C. Add an Amazon ElastiCache cluster between the BI application and Athena.
D. Change the format of the files that are in the dataset to Apache Parquet.
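Option B refers to Athena's query result reuse feature (available on Athena engine version 3). A hedged sketch of enabling it per query through boto3 is shown below; the query, database, workgroup, and maximum age are illustrative values, with the 60-minute cap chosen to match the BI tool's 1-hour refresh policy.

    import boto3

    athena = boto3.client("athena")

    # Reuse cached results for up to 60 minutes so repeated dashboard queries
    # do not rescan the petabyte-scale dataset.
    athena.start_query_execution(
        QueryString="SELECT region, SUM(amount) FROM trades GROUP BY region",
        QueryExecutionContext={"Database": "example_db"},
        WorkGroup="primary",
        ResultReuseConfiguration={
            "ResultReuseByAgeConfiguration": {"Enabled": True, "MaxAgeInMinutes": 60}
        },
    )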
Question # 20
A company uses an Amazon Redshift cluster that runs on RA3 nodes. The company wants to scale read and write capacity to meet demand. A data engineer needs to identify a solution that will turn on concurrency scaling. Which solution will meet this requirement?
A. Turn on concurrency scaling in workload management (WLM) for Redshift Serverless workgroups.
B. Turn on concurrency scaling at the workload management (WLM) queue level in the Redshift cluster.
C. Turn on concurrency scaling in the settings during the creation of a new Redshift cluster.
D. Turn on concurrency scaling for the daily usage quota for the Redshift cluster.
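Option B turns on concurrency scaling at the WLM queue level. One way that can be expressed, as a rough sketch against a hypothetical cluster parameter group, is to set concurrency_scaling to auto in the wlm_json_configuration parameter; the parameter group name and queue layout here are assumptions.

    import json
    import boto3

    redshift = boto3.client("redshift")

    # One manual WLM queue with concurrency scaling enabled; some WLM changes
    # only take effect after the cluster is rebooted.
    wlm_config = [
        {
            "query_group": [],
            "user_group": [],
            "query_concurrency": 5,
            "concurrency_scaling": "auto",
        }
    ]

    redshift.modify_cluster_parameter_group(
        ParameterGroupName="example-wlm-parameter-group",
        Parameters=[
            {
                "ParameterName": "wlm_json_configuration",
                "ParameterValue": json.dumps(wlm_config),
            }
        ],
    )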
Question # 21
A company has a production AWS account that runs company workloads. The company's security team created a security AWS account to store and analyze security logs from the production AWS account. The security logs in the production AWS account are stored in Amazon CloudWatch Logs. The company needs to use Amazon Kinesis Data Streams to deliver the security logs to the security AWS account. Which solution will meet these requirements?
A. Create a destination data stream in the production AWS account. In the security AWS account, create an IAM role that has cross-account permissions to Kinesis Data Streams in the production AWS account.
B. Create a destination data stream in the security AWS account. Create an IAM role and a trust policy to grant CloudWatch Logs the permission to put data into the stream. Create a subscription filter in the security AWS account.
C. Create a destination data stream in the production AWS account. In the production AWS account, create an IAM role that has cross-account permissions to Kinesis Data Streams in the security AWS account.
D. Create a destination data stream in the security AWS account. Create an IAM role and a trust policy to grant CloudWatch Logs the permission to put data into the stream. Create a subscription filter in the production AWS account.
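Option D involves a CloudWatch Logs destination (backed by the Kinesis data stream) in the security account and a subscription filter in the production account. A minimal sketch of the production-account side is shown below; the log group, filter name, account ID, and destination ARN are hypothetical.

    import boto3

    logs = boto3.client("logs")  # production-account credentials

    # The destination in the security account wraps the Kinesis data stream and
    # the IAM role that lets CloudWatch Logs write to it; this side subscribes.
    logs.put_subscription_filter(
        logGroupName="/security/app-logs",
        filterName="to-security-account",
        filterPattern="",  # empty pattern forwards every log event
        destinationArn="arn:aws:logs:us-east-1:222233334444:destination:security-logs-destination",
    )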
Question # 22
A company is migrating on-premises workloads to AWS. The company wants to reduce overall operational overhead. The company also wants to explore serverless options. The company's current workloads use Apache Pig, Apache Oozie, Apache Spark, Apache HBase, and Apache Flink. The on-premises workloads process petabytes of data in seconds. The company must maintain similar or better performance after the migration to AWS. Which extract, transform, and load (ETL) service will meet these requirements?
A. AWS Glue
B. Amazon EMR
C. AWS Lambda
D. Amazon Redshift
Question # 23
A data engineering team is using an Amazon Redshift data warehouse for operational reporting. The team wants to prevent performance issues that might result from long-running queries. A data engineer must choose a system table in Amazon Redshift to record anomalies when a query optimizer identifies conditions that might indicate performance issues. Which table views should the data engineer use to meet this requirement?
A. STL_USAGE_CONTROL
B. STL_ALERT_EVENT_LOG
C. STL_QUERY_METRICS
D. STL_PLAN_INFO
Question # 24
A media company wants to improve a system that recommends media content to customers based on user behavior and preferences. To improve the recommendation system, the company needs to incorporate insights from third-party datasets into the company's existing analytics platform. The company wants to minimize the effort and time required to incorporate third-party datasets. Which solution will meet these requirements with the LEAST operational overhead?
A. Use API calls to access and integrate third-party datasets from AWS Data Exchange.
B. Use API calls to access and integrate third-party datasets from AWS
C. Use Amazon Kinesis Data Streams to access and integrate third-party datasets from AWS CodeCommit repositories.
D. Use Amazon Kinesis Data Streams to access and integrate third-party datasets from Amazon Elastic Container Registry (Amazon ECR).
Question # 25
A company uses an on-premises Microsoft SQL Server database to store financial transaction data. The company migrates the transaction data from the on-premises database to AWS at the end of each month. The company has noticed that the cost to migrate data from the on-premises database to an Amazon RDS for SQL Server database has increased recently. The company requires a cost-effective solution to migrate the data to AWS. The solution must cause minimal downtime for the applications that access the database. Which AWS service should the company use to meet these requirements?
A. AWS Lambda
B. AWS Database Migration Service (AWS DMS)
C. AWS Direct Connect
D. AWS DataSync
Question # 26
A company has used an Amazon Redshift table that is named Orders for 6 months. The company performs weekly updates and deletes on the table. The table has an interleaved sort key on a column that contains AWS Regions. The company wants to reclaim disk space so that the company will not run out of storage space. The company also wants to analyze the sort key column. Which Amazon Redshift command will meet these requirements?
A. VACUUM FULL Orders
B. VACUUM DELETE ONLY Orders
C. VACUUM REINDEX Orders
D. VACUUM SORT ONLY Orders
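Option C names VACUUM REINDEX. As a small sketch, the command can be issued through the Redshift Data API like any other SQL statement; the cluster, database, and table names below are hypothetical.

    import boto3

    redshift_data = boto3.client("redshift-data")

    # VACUUM REINDEX re-analyzes the interleaved sort key distribution and then
    # performs a full vacuum, reclaiming space left by the weekly deletes.
    redshift_data.execute_statement(
        ClusterIdentifier="example-cluster",
        Database="dev",
        DbUser="awsuser",
        Sql="VACUUM REINDEX orders",
    )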