
Associate-Data-Practitioner Questions and Answers

Question # 6

Your company is adopting BigQuery as their data warehouse platform. Your team has experienced Python developers. You need to recommend a fully-managed tool to build batch ETL processes that extract data from various source systems, transform the data using a variety of Google Cloud services, and load the transformed data into BigQuery. You want this tool to leverage your team’s Python skills. What should you do?

A.

Use Dataform with assertions.

B.

Deploy Cloud Data Fusion and use its included plugins.

C.

Use Cloud Composer with pre-built operators.

D.

Use Dataflow and pre-built templates.

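To make option C concrete, here is a minimal Cloud Composer (Airflow) DAG sketch that loads data into BigQuery with a pre-built Google provider operator; the bucket, dataset, and table names are hypothetical placeholders.

```python
# Minimal Cloud Composer (Airflow) DAG sketch: load CSV extracts from Cloud
# Storage into BigQuery with a pre-built Google provider operator.
# Bucket, dataset, and table names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="batch_etl_to_bigquery",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_sales = GCSToBigQueryOperator(
        task_id="load_sales_csv",
        bucket="example-landing-bucket",
        source_objects=["sales/*.csv"],
        destination_project_dataset_table="example_project.warehouse.sales",
        source_format="CSV",
        write_disposition="WRITE_APPEND",
        autodetect=True,
    )
```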
Question # 7

You are developing a data ingestion pipeline to load small CSV files into BigQuery from Cloud Storage. You want to load these files upon arrival to minimize data latency. You want to accomplish this with minimal cost and maintenance. What should you do?

A.

Use the bq command-line tool within a Cloud Shell instance to load the data into BigQuery.

B.

Create a Cloud Composer pipeline to load new files from Cloud Storage to BigQuery and schedule it to run every 10 minutes.

C.

Create a Cloud Run function to load the data into BigQuery that is triggered when data arrives in Cloud Storage.

D.

Create a Dataproc cluster to pull CSV files from Cloud Storage, process them using Spark, and write the results to BigQuery.

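As a rough illustration of option C, the sketch below shows a Cloud Run function (written with the Functions Framework) that fires when an object is finalized in Cloud Storage and loads the new CSV into BigQuery; the bucket and destination table names are placeholders.

```python
# Sketch of an event-driven loader: a function triggered when an object is
# finalized in Cloud Storage loads the CSV into BigQuery.
# Bucket, dataset, and table names are placeholders.
import functions_framework
from google.cloud import bigquery


@functions_framework.cloud_event
def load_csv_to_bigquery(cloud_event):
    data = cloud_event.data
    uri = f"gs://{data['bucket']}/{data['name']}"

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    # Placeholder destination table.
    load_job = client.load_table_from_uri(
        uri, "example_project.staging.events", job_config=job_config
    )
    load_job.result()  # wait for the load job to finish
```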
Question # 8

Your organization has decided to migrate their existing enterprise data warehouse to BigQuery. The existing data pipeline tools already support connectors to BigQuery. You need to identify a data migration approach that optimizes migration speed. What should you do?

A.

Create a temporary file system to facilitate data transfer from the existing environment to Cloud Storage. Use Storage Transfer Service to migrate the data into BigQuery.

B.

Use the Cloud Data Fusion web interface to build data pipelines. Create a directed acyclic graph (DAG) that facilitates pipeline orchestration.

C.

Use the existing data pipeline tool’s BigQuery connector to reconfigure the data mapping.

D.

Use the BigQuery Data Transfer Service to recreate the data pipeline and migrate the data into BigQuery.

Question # 9

You are a data analyst at your organization. You have been given a BigQuery dataset that includes customer information. The dataset contains inconsistencies and errors, such as missing values, duplicates, and formatting issues. You need to effectively and quickly clean the data. What should you do?

A.

Develop a Dataflow pipeline to read the data from BigQuery, perform data quality rules and transformations, and write the cleaned data back to BigQuery.

B.

Use Cloud Data Fusion to create a data pipeline to read the data from BigQuery, perform data quality transformations, and write the clean data back to BigQuery.

C.

Export the data from BigQuery to CSV files. Resolve the errors using a spreadsheet editor, and re-import the cleaned data into BigQuery.

D.

Use BigQuery's built-in functions to perform data quality transformations.

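For reference, option D can amount to a single SQL statement that applies BigQuery's built-in functions; the sketch below runs such a statement from Python, with hypothetical table and column names.

```python
# Illustration of cleaning data in place with BigQuery's built-in SQL functions
# (deduplication, trimming, null handling). Table and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
cleanup_sql = """
CREATE OR REPLACE TABLE `example_project.crm.customers_clean` AS
SELECT
  TRIM(LOWER(email))           AS email,     -- normalize formatting
  COALESCE(country, 'unknown') AS country,   -- fill missing values
  ANY_VALUE(full_name)         AS full_name
FROM `example_project.crm.customers_raw`
WHERE email IS NOT NULL
GROUP BY email, country                      -- collapse duplicate rows
"""
client.query(cleanup_sql).result()
```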
Question # 10

Another team in your organization is requesting access to a BigQuery dataset. You need to share the dataset with the team while minimizing the risk of unauthorized copying of data. You also want to create a reusable framework in case you need to share this data with other teams in the future. What should you do?

A.

Create authorized views in the team’s Google Cloud project that is only accessible by the team.

B.

Create a private exchange using Analytics Hub with data egress restriction, and grant access to the team members.

C.

Enable domain restricted sharing on the project. Grant the team members the BigQuery Data Viewer IAM role on the dataset.

D.

Export the dataset to a Cloud Storage bucket in the team’s Google Cloud project that is only accessible by the team.

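To show what option A involves, here is a rough Python sketch that creates a view in the team's project and then authorizes that view on the source dataset, so the team never needs direct table access; all project, dataset, and view names are hypothetical.

```python
# Rough sketch of an authorized view setup. All resource names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

# 1. Create a view in the team's project with only the columns they need.
view = bigquery.Table("team_project.shared_views.customer_summary")
view.view_query = """
SELECT customer_id, region, lifetime_value
FROM `data_project.warehouse.customers`
"""
view = client.create_table(view)

# 2. Authorize the view on the source dataset so the view can read the data
#    without granting the team access to the underlying tables.
source_dataset = client.get_dataset("data_project.warehouse")
entries = list(source_dataset.access_entries)
entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
source_dataset.access_entries = entries
client.update_dataset(source_dataset, ["access_entries"])
```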
Question # 11

You work for a healthcare company that has a large on-premises data system containing patient records with personally identifiable information (PII) such as names, addresses, and medical diagnoses. You need a standardized managed solution that de-identifies PII across all your data feeds prior to ingestion to Google Cloud. What should you do?

A.

Use Cloud Run functions to create a serverless data cleaning pipeline. Store the cleaned data in BigQuery.

B.

Use Cloud Data Fusion to transform the data. Store the cleaned data in BigQuery.

C.

Load the data into BigQuery, and inspect the data by using SQL queries. Use Dataflow to transform the data and remove any errors.

D.

Use Apache Beam to read the data and perform the necessary cleaning and transformation operations. Store the cleaned data in BigQuery.

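As one way to picture option D, the sketch below is a minimal Apache Beam pipeline that hashes a name field and drops an address field before writing to BigQuery. The field, bucket, and table names are assumptions, and the masking logic is a simplified stand-in for a real de-identification rule set.

```python
# Minimal Apache Beam de-identification sketch. Field, bucket, and table names
# are placeholders; the hashing step is a simplified stand-in.
import hashlib
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def deidentify(record):
    """Replace the direct identifier with a one-way hash and drop the address."""
    record = dict(record)
    record["patient_name"] = hashlib.sha256(record["patient_name"].encode()).hexdigest()
    record.pop("address", None)
    return record


with beam.Pipeline(options=PipelineOptions()) as pipeline:
    (
        pipeline
        | "ReadRecords" >> beam.io.ReadFromText("gs://example-bucket/records/*.jsonl")
        | "ParseJson" >> beam.Map(json.loads)
        | "Deidentify" >> beam.Map(deidentify)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "example_project:clinical.records_deid",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
        )
    )
```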
Question # 12

Your company is building a near real-time streaming pipeline to process JSON telemetry data from small appliances. You need to process messages arriving at a Pub/Sub topic, capitalize letters in the serial number field, and write results to BigQuery. You want to use a managed service and write a minimal amount of code for underlying transformations. What should you do?

A.

Use a Pub/Sub to BigQuery subscription, write results directly to BigQuery, and schedule a transformation query to run every five minutes.

B.

Use a Pub/Sub to Cloud Storage subscription. Write a Cloud Run service that is triggered when objects arrive in the bucket, performs the transformations, and writes the results to BigQuery.

C.

Use the “Pub/Sub to BigQuery” Dataflow template with a UDF, and write the results to BigQuery.

D.

Use a Pub/Sub push subscription, write a Cloud Run service that accepts the messages, performs the transformations, and writes the results to BigQuery.

Question # 13

Your team wants to create a monthly report to analyze inventory data that is updated daily. You need to aggregate the inventory counts by using only the most recent month of data, and save the results to be used in a Looker Studio dashboard. What should you do?

A.

Create a materialized view in BigQuery that uses the SUM() function and the DATE_SUB() function.

B.

Create a saved query in the BigQuery console that uses the SUM() function and the DATE_SUB() function. Re-run the saved query every month, and save the results to a BigQuery table.

C.

Create a BigQuery table that uses the SUM() function and the _PARTITIONDATE filter.

D.

Create a BigQuery table that uses the SUM() function and the DATE_DIFF() function.

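The aggregation that options A and B both build on looks roughly like the following: SUM() over only the most recent month of data, selected with DATE_SUB(). The table and column names below are placeholders.

```python
# The core aggregation behind the report: SUM() over only the most recent
# month of data, selected with DATE_SUB(). Table and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT
  product_id,
  SUM(inventory_count) AS total_count
FROM `example_project.inventory.daily_counts`
WHERE snapshot_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 1 MONTH)
GROUP BY product_id
"""
for row in client.query(sql).result():
    print(row.product_id, row.total_count)
```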
Question # 14

You need to create a weekly aggregated sales report based on a large volume of data. You want to use Python to design an efficient process for generating this report. What should you do?

A.

Create a Cloud Run function that uses NumPy. Use Cloud Scheduler to schedule the function to run once a week.

B.

Create a Colab Enterprise notebook and use the bigframes.pandas library. Schedule the notebook to execute once a week.

C.

Create a Cloud Data Fusion and Wrangler flow. Schedule the flow to run once a week.

D.

Create a Dataflow directed acyclic graph (DAG) coded in Python. Use Cloud Scheduler to schedule the code to run once a week.

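To illustrate option B, here is a rough notebook-cell sketch using the bigframes.pandas (BigQuery DataFrames) library, which pushes the aggregation down to BigQuery rather than pulling the data into notebook memory; the project, table, and column names are hypothetical.

```python
# Notebook-cell sketch using BigQuery DataFrames (bigframes.pandas): the
# aggregation runs in BigQuery, not in the notebook's memory.
# Project, table, and column names are hypothetical.
import bigframes.pandas as bpd

bpd.options.bigquery.project = "example-project"

sales = bpd.read_gbq("example-project.retail.sales")  # lazy, BigQuery-backed DataFrame
weekly = sales.groupby("product_id", as_index=False)["amount"].sum()
weekly.to_gbq("example-project.reports.weekly_sales", if_exists="replace")
```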
Question # 15

Your organization stores highly personal data in BigQuery and needs to comply with strict data privacy regulations. You need to ensure that sensitive data values are rendered unreadable whenever an employee leaves the organization. What should you do?

A.

Use AEAD functions and delete keys when employees leave the organization.

B.

Use dynamic data masking and revoke viewer permissions when employees leave the organization.

C.

Use customer-managed encryption keys (CMEK) and delete keys when employees leave the organization.

D.

Use column-level access controls with policy tags and revoke viewer permissions when employees leave the organization.

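Option A refers to the crypto-deletion pattern sketched below: each person's values are encrypted with their own AEAD keyset, so deleting that keyset row later renders the ciphertext unreadable. Table and column names are placeholders.

```python
# Crypto-deletion sketch with BigQuery AEAD functions. Table and column names
# are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

# One keyset per employee, stored in a keys table.
client.query("""
CREATE TABLE IF NOT EXISTS `example_project.secure.keys` AS
SELECT employee_id, KEYS.NEW_KEYSET('AEAD_AES_GCM_256') AS keyset
FROM `example_project.hr.employees`
""").result()

# Encrypt the sensitive column with each employee's keyset.
client.query("""
CREATE OR REPLACE TABLE `example_project.secure.employee_data` AS
SELECT
  d.employee_id,
  AEAD.ENCRYPT(k.keyset, d.ssn, CAST(d.employee_id AS STRING)) AS ssn_encrypted
FROM `example_project.hr.employee_data` AS d
JOIN `example_project.secure.keys` AS k USING (employee_id)
""").result()

# When an employee leaves, deleting their keyset makes the data unrecoverable.
client.query(
    "DELETE FROM `example_project.secure.keys` WHERE employee_id = @departed",
    job_config=bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("departed", "STRING", "emp_123")]
    ),
).result()
```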
Question # 16

You are a data analyst working with sensitive customer data in BigQuery. You need to ensure that only authorized personnel within your organization can query this data, while following the principle of least privilege. What should you do?

A.

Enable access control by using IAM roles.

B.

Update dataset privileges by using the SQL GRANT statement.

C.

Export the data to Cloud Storage, and use signed URLs to authorize access.

D.

Encrypt the data by using customer-managed encryption keys (CMEK).

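For concreteness, option B's SQL GRANT approach looks roughly like this; the dataset and group names are placeholders.

```python
# BigQuery DCL sketch: grant dataset-level read access to one authorized group.
# Dataset and group names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
client.query("""
GRANT `roles/bigquery.dataViewer`
ON SCHEMA `example_project.customer_data`
TO "group:data-analysts@example.com"
""").result()
```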
Question # 17

Your organization has several datasets in BigQuery. The datasets need to be shared with your external partners so that they can run SQL queries without needing to copy the data to their own projects. You have organized each partner’s data in its own BigQuery dataset. Each partner should be able to access only their data. You want to share the data while following Google-recommended practices. What should you do?

A.

Use Analytics Hub to create a listing on a private data exchange for each partner dataset. Allow each partner to subscribe to their respective listings.

B.

Create a Dataflow job that reads from each BigQuery dataset and pushes the data into a dedicated Pub/Sub topic for each partner. Grant each partner the pubsub.subscriber IAM role.

C.

Export the BigQuery data to a Cloud Storage bucket. Grant the partners the storage.objectUser IAM role on the bucket.

D.

Grant the partners the bigquery.user IAM role on the BigQuery project.

Question # 18

You manage an ecommerce website that has a diverse range of products. You need to forecast future product demand accurately to ensure that your company has sufficient inventory to meet customer needs and avoid stockouts. Your company's historical sales data is stored in a BigQuery table. You need to create a scalable solution that takes into account the seasonality and historical data to predict product demand. What should you do?

A.

Use the historical sales data to train and create a BigQuery ML time series model. Use the ML.FORECAST function call to output the predictions into a new BigQuery table.

B.

Use Colab Enterprise to create a Jupyter notebook. Use the historical sales data to train a custom prediction model in Python.

C.

Use the historical sales data to train and create a BigQuery ML linear regression model. Use the ML.PREDICT function call to output the predictions into a new BigQuery table.

D.

Use the historical sales data to train and create a BigQuery ML logistic regression model. Use the ML.PREDICT function call to output the predictions into a new BigQuery table.

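A rough sketch of option A follows: train an ARIMA_PLUS time-series model in BigQuery ML, then materialize forecasts with ML.FORECAST. The dataset, table, and column names are hypothetical.

```python
# BigQuery ML time-series sketch: train ARIMA_PLUS on historical sales, then
# write forecasts to a new table. Dataset, table, and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
CREATE OR REPLACE MODEL `example_project.sales.demand_forecast`
OPTIONS(
  model_type = 'ARIMA_PLUS',
  time_series_timestamp_col = 'sale_date',
  time_series_data_col = 'units_sold',
  time_series_id_col = 'product_id'      -- one series per product
) AS
SELECT sale_date, product_id, units_sold
FROM `example_project.sales.history`
""").result()

client.query("""
CREATE OR REPLACE TABLE `example_project.sales.demand_predictions` AS
SELECT *
FROM ML.FORECAST(MODEL `example_project.sales.demand_forecast`,
                 STRUCT(30 AS horizon, 0.9 AS confidence_level))
""").result()
```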
Question # 19

You work for a global financial services company that trades stocks 24/7. You have a Cloud SQL for PostgreSQL user database. You need to identify a solution that ensures that the database is continuously operational, minimizes downtime, and will not lose any data in the event of a zonal outage. What should you do?

A.

Continuously back up the Cloud SQL instance to Cloud Storage. Create a Compute Engine instance with PostgreSQL in a different region. Restore the backup in the Compute Engine instance if a failure occurs.

B.

Create a read replica in another region. Promote the replica to primary if a failure occurs.

C.

Configure and create a high-availability Cloud SQL instance with the primary instance in zone A and a secondary instance in any zone other than zone A.

D.

Create a read replica in the same region but in a different zone.

Question # 20

Your organization’s business analysts require near real-time access to streaming data. However, they are reporting that their dashboard queries are loading slowly. After investigating BigQuery query performance, you discover the slow dashboard queries perform several joins and aggregations.

You need to improve the dashboard loading time and ensure that the dashboard data is as up-to-date as possible. What should you do?

A.

Disable BigQuery query result caching.

B.

Modify the schema to use parameterized data types.

C.

Create a scheduled query to calculate and store intermediate results.

D.

Create materialized views.

Question # 21

Your organization has several datasets in their data warehouse in BigQuery. Several analyst teams in different departments use the datasets to run queries. Your organization is concerned about the variability of their monthly BigQuery costs. You need to identify a solution that creates a fixed budget for costs associated with the queries run by each department. What should you do?

A.

Create a custom quota for each analyst in BigQuery.

B.

Create a single reservation by using BigQuery editions. Assign all analysts to the reservation.

C.

Assign each analyst to a separate project associated with their department. Create a single reservation by using BigQuery editions. Assign all projects to the reservation.

D.

Assign each analyst to a separate project associated with their department. Create a single reservation for each department by using BigQuery editions. Create assignments for each project in the appropriate reservation.

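To picture the reservation-and-assignment mechanics behind option D, here is a rough sketch using the BigQuery Reservation API client; the project IDs, location, and slot capacity are placeholders, and editions-specific settings are omitted.

```python
# Rough sketch: one reservation per department, with that department's project
# assigned to it. Project IDs, location, and slot capacity are placeholders.
from google.cloud import bigquery_reservation_v1 as reservation_api

client = reservation_api.ReservationServiceClient()
parent = "projects/admin-project/locations/US"

# One reservation per department caps that department's slot spend.
finance_res = client.create_reservation(
    parent=parent,
    reservation_id="finance",
    reservation=reservation_api.Reservation(slot_capacity=100),
)

# Assign the finance team's project to the finance reservation.
client.create_assignment(
    parent=finance_res.name,
    assignment=reservation_api.Assignment(
        assignee="projects/finance-analytics",
        job_type=reservation_api.Assignment.JobType.QUERY,
    ),
)
```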
Question # 22

You need to design a data pipeline to process large volumes of raw server log data stored in Cloud Storage. The data needs to be cleaned, transformed, and aggregated before being loaded into BigQuery for analysis. The transformation involves complex data manipulation using Spark scripts that your team developed. You need to implement a solution that leverages your team’s existing skillset, processes data at scale, and minimizes cost. What should you do?

A.

Use Dataflow with a custom template for the transformation logic.

B.

Use Cloud Data Fusion to visually design and manage the pipeline.

C.

Use Dataform to define the transformations in SQLX.

D.

Use Dataproc to run the transformations on a cluster.

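As an illustration of option D, the sketch below is a small PySpark job of the kind you would submit to Dataproc: it reads raw logs from Cloud Storage, aggregates them, and writes the results to BigQuery through the Spark-BigQuery connector. The bucket, table, and field names are assumptions.

```python
# PySpark sketch for a Dataproc job: clean and aggregate raw logs from Cloud
# Storage, then write to BigQuery. Bucket, table, and field names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("log-aggregation").getOrCreate()

logs = spark.read.json("gs://example-logs-bucket/raw/*.json")

daily_errors = (
    logs.filter(F.col("severity") == "ERROR")
        .withColumn("day", F.to_date("timestamp"))
        .groupBy("day", "service")
        .agg(F.count("*").alias("error_count"))
)

(daily_errors.write
    .format("bigquery")
    .option("table", "example_project.ops.daily_errors")
    .option("temporaryGcsBucket", "example-temp-bucket")
    .mode("overwrite")
    .save())
```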
Question # 23

You need to transfer approximately 300 TB of data from your company's on-premises data center to Cloud Storage. You have 100 Mbps internet bandwidth, and the transfer needs to be completed as quickly as possible. What should you do?

A.

Use Cloud Client Libraries to transfer the data over the internet.

B.

Use the gcloud storage command to transfer the data over the internet.

C.

Compress the data, upload it to multiple cloud storage providers, and then transfer the data to Cloud Storage.

D.

Request a Transfer Appliance, copy the data to the appliance, and ship it back to Google.

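A quick back-of-the-envelope calculation shows why the 100 Mbps link dominates this decision (the numbers ignore protocol overhead and assume the link is fully dedicated to the transfer):

```python
# Back-of-the-envelope check of the bandwidth constraint in this scenario:
# 300 TB over a 100 Mbps link, ignoring protocol overhead.
data_bits = 300e12 * 8               # 300 TB expressed in bits
link_bps = 100e6                     # 100 Mbps
seconds = data_bits / link_bps
print(f"{seconds / 86400:.0f} days") # roughly 278 days at full line rate
```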
Question # 24

Your organization uses scheduled queries to perform transformations on data stored in BigQuery. You discover that one of your scheduled queries has failed. You need to troubleshoot the issue as quickly as possible. What should you do?

A.

Navigate to the Logs Explorer page in Cloud Logging. Use filters to find the failed job, and analyze the error details.

B.

Set up a log sink using the gcloud CLI to export BigQuery audit logs to BigQuery. Query those logs to identify the error associated with the failed job ID.

C.

Request access from your admin to the BigQuery INFORMATION_SCHEMA. Query the JOBS view with the failed job ID, and analyze the error details.

D.

Navigate to the Scheduled queries page in the Google Cloud console. Select the failed job, and analyze the error details.

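To make option C concrete, a query against the INFORMATION_SCHEMA jobs view can surface the failed job's error details, roughly as sketched below; the region qualifier and job ID are placeholders.

```python
# Look up a failed job's error details in INFORMATION_SCHEMA.
# Region qualifier and job ID are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT job_id, user_email, error_result
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_id = @job_id
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("job_id", "STRING", "scheduled_query_12345")
    ]
)
for row in client.query(sql, job_config=job_config):
    print(row.job_id, row.error_result)
```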
Question # 25

You recently inherited a task for managing Dataflow streaming pipelines in your organization and noticed that proper access had not been provisioned to you. You need to request a Google-provided IAM role so you can restart the pipelines. You need to follow the principle of least privilege. What should you do?

A.

Request the Dataflow Developer role.

B.

Request the Dataflow Viewer role.

C.

Request the Dataflow Worker role.

D.

Request the Dataflow Admin role.

Question # 26

Your organization plans to move their on-premises environment to Google Cloud. Your organization’s network bandwidth is less than 1 Gbps. You need to move over 500 TB of data to Cloud Storage securely, and only have a few days to move the data. What should you do?

A.

Request multiple Transfer Appliances, copy the data to the appliances, and ship the appliances back to Google Cloud to upload the data to Cloud Storage.

B.

Connect to Google Cloud using VPN. Use Storage Transfer Service to move the data to Cloud Storage.

C.

Connect to Google Cloud using VPN. Use the gcloud storage command to move the data to Cloud Storage.

D.

Connect to Google Cloud using Dedicated Interconnect. Use the gcloud storage command to move the data to Cloud Storage.
