
Professional-Cloud-Architect Questions and Answers

Question # 6

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Mountkirk Games has the following requirements:

• Services are deployed redundantly across multiple regions in the US and Europe.

• Only frontend services are exposed on the public internet.

• They can provide a single frontend IP for their fleet of services.

• Deployment artifacts are immutable.

Which set of products should they use?

A.

Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine

B.

Google Cloud Storage, Google App Engine, Google Network Load Balancer

C.

Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer

D.

Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager

Question # 7

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for hybrid connectivity between EHR's on-premises systems and Google Cloud. You want to follow Google's recommended practices for production-level applications. Considering the EHR Healthcare business and technical requirements, what should you do?

A.

Configure two Partner Interconnect connections in one metro (City), and make sure the Interconnect connections are placed in different metro zones.

B.

Configure two VPN connections from on-premises to Google Cloud, and make sure the VPN devices on-premises are in separate racks.

C.

Configure Direct Peering between EHR Healthcare and Google Cloud, and make sure you are peering at least two Google locations.

D.

Configure two Dedicated Interconnect connections in one metro (City) and two connections in another metro, and make sure the Interconnect connections are placed in different metro zones.

Question # 8

For this question, refer to the EHR Healthcare case study. You are responsible for designing the Google Cloud network architecture for Google Kubernetes Engine. You want to follow Google best practices. Considering the EHR Healthcare business and technical requirements, what should you do to reduce the attack surface?

A.

Use a private cluster with a private endpoint with master authorized networks configured.

B.

Use a public cluster with firewall rules and Virtual Private Cloud (VPC) routes.

C.

Use a private cluster with a public endpoint with master authorized networks configured.

D.

Use a public cluster with master authorized networks enabled and firewall rules.
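
For reference, option A's private-cluster setup can be expressed with gcloud roughly as follows; the cluster name, CIDR ranges, and authorized network below are placeholders, not values from the case study:

# private nodes + private endpoint, reachable only from authorized networks
gcloud container clusters create ehr-cluster \
--enable-ip-alias \
--enable-private-nodes \
--enable-private-endpoint \
--master-ipv4-cidr 172.16.0.32/28 \
--enable-master-authorized-networks \
--master-authorized-networks 10.0.0.0/8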

Question # 9

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for securely deploying workloads to Google Cloud. You also need to ensure that only verified containers are deployed using Google Cloud services. What should you do? (Choose two.)

A.

Enable Binary Authorization on GKE, and sign containers as part of a CI/CD pipeline.

B.

Configure Jenkins to utilize Kritis to cryptographically sign a container as part of a CI/CD pipeline.

C.

Configure Container Registry to only allow trusted service accounts to create and deploy containers from the registry.

D.

Configure Container Registry to use vulnerability scanning to confirm that there are no vulnerabilities before deploying the workload.
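
As a hedged sketch of options A and B, Binary Authorization is enforced on the cluster and the image digest is signed (attested) from the CI/CD pipeline; the cluster, attestor, and key names are hypothetical, and the exact flags vary by gcloud release:

# enforce Binary Authorization on an existing cluster
gcloud container clusters update secure-cluster --enable-binauthz
# in the pipeline, attest the exact image digest that was built
gcloud container binauthz attestations sign-and-create \
--attestor prod-attestor \
--artifact-url gcr.io/my-project/app@sha256:DIGEST \
--keyversion KMS_KEY_VERSION_RESOURCE_NAME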

Question # 10

For this question, refer to the EHR Healthcare case study. You are responsible for ensuring that EHR's use of Google Cloud will pass an upcoming privacy compliance audit. What should you do? (Choose two.)

A.

Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.

B.

Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.

C.

Use Firebase Authentication for EHR's user-facing applications.

D.

Implement Prometheus to detect and prevent security breaches on EHR's web-based applications.

E.

Use GKE private clusters for all Kubernetes workloads.

Question # 11

You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?

A.

Add a new Dedicated Interconnect connection.

B.

Upgrade the bandwidth on the Dedicated Interconnect connection to 100 G.

C.

Add three new Cloud VPN connections.

D.

Add a new Carrier Peering connection.

Question # 12

Mountkirk Games wants you to secure the connectivity from the new gaming application platform to Google Cloud. You want to streamline the process and follow Google-recommended practices. What should you do?

A.

Configure Workload Identity and service accounts to be used by the application platform.

B.

Use Kubernetes Secrets, which are obfuscated by default. Configure these Secrets to be used by the application platform.

C.

Configure Kubernetes Secrets to store the secret, enable Application-Layer Secrets Encryption, and use Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.

D.

Configure HashiCorp Vault on Compute Engine, and use customer-managed encryption keys and Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.
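
A minimal sketch of option A's Workload Identity wiring, assuming the platform runs on GKE; the cluster, namespace, and service account names are placeholders:

# bind the cluster to its project's workload identity pool
gcloud container clusters update mountkirk-cluster \
--workload-pool=PROJECT_ID.svc.id.goog
# let the Kubernetes service account impersonate the Google service account
gcloud iam service-accounts add-iam-policy-binding \
platform-sa@PROJECT_ID.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[game-ns/platform-ksa]"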

Question # 13

For this question, refer to the EHR Healthcare case study. In the past, configuration errors put public IP addresses on backend servers that should not have been accessible from the Internet. You need to ensure that no one can put external IP addresses on backend Compute Engine instances and that external IP addresses can only be configured on frontend Compute Engine instances. What should you do?

A.

Create an Organizational Policy with a constraint to allow external IP addresses only on the frontend Compute Engine instances.

B.

Revoke the compute.networkAdmin role from all users in the project with frontend instances.

C.

Create an Identity and Access Management (IAM) policy that maps the IT staff to the compute.networkAdmin role for the organization.

D.

Create a custom Identity and Access Management (IAM) role named GCE_FRONTEND with the compute.addresses.create permission.
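
For context on option A, the compute.vmExternalIpAccess list constraint can be set so that only named instances may carry external IPs; the project, zone, and instance in this policy file are hypothetical:

# policy.yaml - only the listed frontend instance may use an external IP
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allowedValues:
  - projects/PROJECT_ID/zones/us-central1-a/instances/frontend-1

gcloud resource-manager org-policies set-policy policy.yaml --organization ORG_ID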

Question # 14

You need to ensure reliability for your application and operations by supporting reliable task scheduling for compute on GCP. Leveraging Google best practices, what should you do?

A.

Using the Cron service provided by App Engine, publish messages directly to a message-processing utility service running on Compute Engine instances.

B.

Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.

C.

Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service running on Compute Engine instances.

D.

Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
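
As a sketch of option B, App Engine Cron invokes an HTTP handler on a schedule, and that handler publishes to Pub/Sub; the handler path and topic name below are placeholders:

# cron.yaml - App Engine calls this URL on the given schedule
cron:
- description: enqueue scheduled work
  url: /tasks/publish
  schedule: every 5 minutes

# the handler publishes to a topic that Compute Engine workers pull from
gcloud pubsub topics create task-topic
gcloud pubsub subscriptions create task-sub --topic task-topic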

Question # 15

For this question, refer to the EHR Healthcare case study. You are a developer on the EHR customer portal team. Your team recently migrated the customer portal application to Google Cloud. The load has increased on the application servers, and now the application is logging many timeout errors. You recently incorporated Pub/Sub into the application architecture, and the application is not logging any Pub/Sub publishing errors. You want to improve publishing latency. What should you do?

A.

Increase the Pub/Sub Total Timeout retry value.

B.

Move from a Pub/Sub subscriber pull model to a push model.

C.

Turn off Pub/Sub message batching.

D.

Create a backup Pub/Sub message queue.

Question # 16

Your company has sensitive data in Cloud Storage buckets. Data analysts have Identity Access Management (IAM) permissions to read the buckets. You want to prevent data analysts from retrieving the data in the buckets from outside the office network. What should you do?

A.

1. Create a VPC Service Controls perimeter that includes the projects with the buckets.

2. Create an access level with the CIDR of the office network.

B.

1. Create a firewall rule for all instances in the Virtual Private Cloud (VPC) network for source range.

2. Use the Classless Inter-domain Routing (CIDR) of the office network.

C.

1. Create a Cloud Function to remove IAM permissions from the buckets, and another Cloud Function to add IAM permissions to the buckets.

2. Schedule the Cloud Functions with Cloud Scheduler to add permissions at the start of business and remove permissions at the end of business.

D.

1. Create a Cloud VPN to the office network.

2. Configure Private Google Access for on-premises hosts.
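
A rough sketch of option A using Access Context Manager; the policy ID, names, and office CIDR are placeholders:

# conditions.yaml - the office network's CIDR
- ipSubnetworks:
  - 203.0.113.0/24

gcloud access-context-manager levels create office_only \
--title "Office network" --basic-level-spec conditions.yaml --policy POLICY_ID
gcloud access-context-manager perimeters create storage_perimeter \
--title "Bucket perimeter" --resources projects/PROJECT_NUMBER \
--restricted-services storage.googleapis.com \
--access-levels office_only --policy POLICY_ID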

Question # 17

You are responsible for the Google Cloud environment in your company. Multiple departments need access to their own projects, and the members within each department will have the same project responsibilities. You want to structure your Google Cloud environment for minimal maintenance and maximum overview of IAM permissions as each department's projects start and end. You want to follow Google-recommended practices. What should you do?

A.

Create a Google Group per department and add all department members to their respective groups. Create a folder per department and grant the respective group the required IAM permissions at the folder level. Add the projects under the respective folders.

B.

Grant all department members the required IAM permissions for their respective projects.

C.

Create a Google Group per department and add all department members to their respective groups. Grant each group the required IAM permissions for their respective projects.

D.

Create a folder per department and grant the respective members of the department the required IAM permissions at the folder level. Structure all projects for each department under the respective folders.

Question # 18

Your company has just recently activated Cloud Identity to manage users. The Google Cloud Organization has been configured as well. The security team needs to secure projects that will be part of the Organization. They want to prohibit IAM users outside the domain from gaining permissions from now on. What should they do?

A.

Configure an organization policy to restrict identities by domain

B.

Configure an organization policy to block creation of service accounts

C.

Configure Cloud Scheduler to trigger a Cloud Function every hour that removes all users that don't belong to the Cloud identity domain from all projects.
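
For context on option A, the iam.allowedPolicyMemberDomains constraint limits IAM members to identities from the listed Cloud Identity customer IDs; the ID below is a placeholder:

# policy.yaml - allow only identities from the company's own directory
constraint: constraints/iam.allowedPolicyMemberDomains
listPolicy:
  allowedValues:
  - C0xxxxxxx

gcloud resource-manager org-policies set-policy policy.yaml --organization ORG_ID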

Question # 19

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk Games business and technical requirements, what should you do?

A.

Create network load balancers. Use preemptible Compute Engine instances.

B.

Create network load balancers. Use non-preemptible Compute Engine instances.

C.

Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible Compute Engine instances.

D.

Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.

Question # 20

Your customer runs a web service used by e-commerce sites to offer product recommendations to users. The company has begun experimenting with a machine learning model on Google Cloud Platform to improve the quality of results.

What should the customer do to improve their model’s results over time?

A.

Export Cloud Machine Learning Engine performance metrics from Stackdriver to BigQuery, to be used to analyze the efficiency of the model.

B.

Build a roadmap to move the machine learning model training from Cloud GPUs to Cloud TPUs, which offer better results.

C.

Monitor Compute Engine announcements for availability of newer CPU architectures, and deploy the model to them as soon as they are available for additional performance.

D.

Save a history of recommendations and results of the recommendations in BigQuery, to be used as training data.

Question # 21

You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. The script is printing errors that it cannot connect to BigQuery. What should you do to fix the script?

A.

Install the latest BigQuery API client library for Python

B.

Run your script on a new virtual machine with the BigQuery access scope enabled

C.

Create a new service account with BigQuery access and execute your script with that user

D.

Install the bq component for gcloud with the command gcloud components install bq.
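
Access scopes are fixed at instance creation time, which is why option B calls for a new virtual machine; a minimal sketch with an illustrative instance name:

gcloud compute instances create bq-client-vm \
--scopes https://www.googleapis.com/auth/bigquery
# alternatively, use --scopes cloud-platform and control access with IAM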

Question # 22

Your development teams release new versions of games running on Google Kubernetes Engine (GKE) daily. You want to create service level indicators (SLIs) to evaluate the quality of the new versions from the user’s perspective. What should you do?

A.

Create CPU Utilization and Request Latency as service level indicators.

B.

Create GKE CPU Utilization and Memory Utilization as service level indicators.

C.

Create Request Latency and Error Rate as service level indicators.

D.

Create Server Uptime and Error Rate as service level indicators.

Question # 23

Your company provides a recommendation engine for retail customers. You are providing retail customers with an API where they can submit a user ID and the API returns a list of recommendations for that user. You are responsible for the API lifecycle and want to ensure stability for your customers in case the API makes backward-incompatible changes. You want to follow Google-recommended practices. What should you do?

A.

Create a distribution list of all customers to inform them of an upcoming backward-incompatible change at least one month before replacing the old API with the new API.

B.

Create an automated process to generate API documentation, and update the public API documentation as part of the CI/CD process when deploying an update to the API.

C.

Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change.

D.

Use a versioning strategy for the APIs that adds the suffix “DEPRECATED” to the current API version number on every backward-incompatible change. Use the current version number for the new API.

Question # 24

For this question, refer to the Dress4Win case study.

You want to ensure Dress4Win's sales and tax records remain available for infrequent viewing by auditors for at least 10 years. Cost optimization is your top priority. Which cloud services should you choose?

A.

Google Cloud Storage Coldline to store the data, and gsutil to access the data.

B.

Google Cloud Storage Nearline to store the data, and gsutil to access the data.

C.

Google Bigtable with US or EU as location to store the data, and gcloud to access the data.

D.

BigQuery to store the data, and a web server cluster in a managed instance group to access the data.

E.

Google Cloud SQL mirrored across two distinct regions to store the data, and a Redis cluster in a managed instance group to access the data.

Question # 25

You have deployed several instances on Compute Engine. As a security requirement, instances cannot have a public IP address. There is no VPN connection between Google Cloud and your office, and you need to connect via SSH into a specific machine without violating the security requirements. What should you do?

A.

Configure Cloud NAT on the subnet where the instance is hosted. Create an SSH connection to the Cloud NAT IP address to reach the instance.

B.

Add all instances to an unmanaged instance group. Configure TCP Proxy Load Balancing with the instance group as a backend. Connect to the instance using the TCP Proxy IP.

C.

Configure Identity-Aware Proxy (IAP) for the instance and ensure that you have the role of IAP-secured Tunnel User. Use the gcloud command line tool to ssh into the instance.

D.

Create a bastion host in the network to SSH into the bastion host from your office location. From the bastion host, SSH into the desired instance.
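
A short sketch of option C, assuming placeholder project, user, instance, and zone names:

# grant the IAP-secured Tunnel User role
gcloud projects add-iam-policy-binding my-project \
--member user:you@example.com --role roles/iap.tunnelResourceAccessor
# SSH through IAP TCP forwarding; the VM needs no external IP
gcloud compute ssh my-instance --zone us-central1-a --tunnel-through-iap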

Question # 26

For this question, refer to the Dress4Win case study.

Dress4Win has asked you to recommend machine types they should deploy their application servers to. How should you proceed?

A.

Perform a mapping of the on-premises physical hardware cores and RAM to the nearest machine types in the cloud.

B.

Recommend that Dress4Win deploy application servers to machine types that offer the highest RAM to CPU ratio available.

C.

Recommend that Dress4Win deploy into production with the smallest instances available, monitor them over time, and scale the machine type up until the desired performance is reached.

D.

Identify the number of virtual cores and RAM associated with the application server virtual machines, align them to a custom machine type in the cloud, monitor performance, and scale the machine types up until the desired performance is reached.

Question # 27

For this question, refer to the Dress4Win case study.

At Dress4Win, an operations engineer wants to create a low-cost solution to remotely archive copies of database backup files. The database files are compressed tar files stored in their current data center. How should he proceed?

A.

Create a cron script using gsutil to copy the files to a Coldline Storage bucket.

B.

Create a cron script using gsutil to copy the files to a Regional Storage bucket.

C.

Create a Cloud Storage Transfer Service Job to copy the files to a Coldline Storage bucket.

D.

Create a Cloud Storage Transfer Service job to copy the files to a Regional Storage bucket.
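
As a sketch of option A, a Coldline bucket plus a nightly cron entry might look like this; the bucket name and paths are placeholders:

# one-time: create the archive bucket in the Coldline storage class
gsutil mb -c coldline -l us gs://d4w-db-archive
# crontab entry: copy the compressed tar files every night at 02:00
0 2 * * * gsutil -m cp /backups/*.tar.gz gs://d4w-db-archive/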

Question # 28

For this question, refer to the Dress4Win case study.

As part of their new application experience, Dress4Win allows customers to upload images of themselves. The customer has exclusive control over who may view these images. Customers should be able to upload images with minimal latency and also be shown their images quickly on the main application page when they log in. Which configuration should Dress4Win use?

A.

Store image files in a Google Cloud Storage bucket. Use Google Cloud Datastore to maintain metadata that maps each customer's ID and their image files.

B.

Store image files in a Google Cloud Storage bucket. Add custom metadata to the uploaded images in Cloud Storage that contains the customer's unique ID.

C.

Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Assign each customer a unique ID, which sets each file's owner attribute, ensuring privacy of images.

D.

Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Use a Google Cloud SQL database to maintain metadata that maps each customer's ID to their image files.

Question # 29

For this question, refer to the Dress4Win case study.

Dress4Win has configured a new uptime check with Google Stackdriver for several of their legacy services. The Stackdriver dashboard is not reporting the services as healthy. What should they do?

A.

Install the Stackdriver agent on all of the legacy web servers.

B.

In the Cloud Platform Console, download the list of the uptime-check servers' IP addresses and create an inbound firewall rule.

C.

Configure their load balancer to pass through the User-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring)

D.

Configure their legacy web servers to allow requests that contain a User-Agent HTTP header whose value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring)

Question # 30

For this question, refer to the Dress4Win case study.

Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to the cloud does not introduce any new bugs. Which additional testing methods should the developers employ to prevent an outage?

A.

They should enable Google Stackdriver Debugger on the application code to show errors in the code.

B.

They should add additional unit tests and production scale load tests on their cloud staging environment.

C.

They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.

D.

They should add canary tests so developers can measure how much of an impact the new release causes to latency.

Question # 31

For this question, refer to the Dress4Win case study.

The Dress4Win security team has disabled external SSH access into production virtual machines (VMs) on Google Cloud Platform (GCP). The operations team needs to remotely manage the VMs, build and push Docker containers, and manage Google Cloud Storage objects. What can they do?

A.

Grant the operations engineers access to use Google Cloud Shell.

B.

Configure a VPN connection to GCP to allow SSH access to the cloud VMs.

C.

Develop a new access request process that grants temporary SSH access to cloud VMs when an operations engineer needs to perform a task.

D.

Have the development team build an API service that allows the operations team to execute specific remote procedure calls to accomplish their tasks.

Question # 32

For this question, refer to the Dress4Win case study.

Dress4Win has asked you for advice on how to migrate their on-premises MySQL deployment to the cloud. They want to minimize downtime and performance impact to their on-premises solution during the migration. Which approach should you recommend?

A.

Create a dump of the on-premises MySQL master server, and then shut it down, upload it to the cloud environment, and load into a new MySQL cluster.

B.

Set up a MySQL replica (slave) server in the cloud environment, and configure it for asynchronous replication from the MySQL master server on-premises until cutover.

C.

Create a new MySQL cluster in the cloud, configure applications to begin writing to both on-premises and cloud MySQL masters, and destroy the original cluster at cutover.

D.

Create a dump of the MySQL replica server into the cloud environment, load it into Google Cloud Datastore, and configure applications to read/write to Cloud Datastore at cutover.

Question # 33

Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to the cloud does not introduce any new bugs. Which additional testing methods should the developers employ to prevent an outage?

A.

They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.

B.

They should enable Google Stackdriver Debugger on the application code to show errors in the code.

C.

They should add additional unit tests and production scale load tests on their cloud staging environment.

D.

They should add canary tests so developers can measure how much of an impact the new release causes to latency.

Question # 34

For this question, refer to the TerramEarth case study.

The TerramEarth development team wants to create an API to meet the company's business requirements. You want the development team to focus their development effort on business value versus creating a custom framework. Which method should they use?

A.

Use Google App Engine with Google Cloud Endpoints. Focus on an API for dealers and partners.

B.

Use Google App Engine with a JAX-RS Jersey Java-based framework. Focus on an API for the public.

C.

Use Google App Engine with the Swagger (Open API Specification) framework. Focus on an API for the public.

D.

Use Google Container Engine with a Django Python container. Focus on an API for the public.

E.

Use Google Container Engine with a Tomcat container with the Swagger (Open API Specification) framework. Focus on an API for dealers and partners.

Question # 35

For this question, refer to the TerramEarth case study.

Operational parameters such as oil pressure are adjustable on each of TerramEarth's vehicles to increase their efficiency, depending on their environmental conditions. Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field. How can you accomplish this goal?

A.

Have your engineers inspect the data for patterns, and then create an algorithm with rules that make operational adjustments automatically.

B.

Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically.

C.

Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud Messaging (GCM) to make operational adjustments automatically.

D.

Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically.

Question # 36

For this question, refer to the TerramEarth case study.

TerramEarth's 20 million vehicles are scattered around the world. Based on the vehicle's location, its telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100K miles. You want to run this job on all the data. What is the most cost-effective way to run this job?

A.

Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job.

B.

Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job.

C.

Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Dataproc cluster to finish the job.

D.

Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional bucket and use a Cloud Dataproc cluster to finish the job.

Question # 37

For this question, refer to the TerramEarth case study

You analyzed TerramEarth's business requirement to reduce downtime and found that they can achieve a majority of the time savings by reducing customers' wait time for parts. You decided to focus on reducing the 3-week aggregate reporting time. Which modifications to the company's processes should you recommend?

A.

Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.

B.

Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.

C.

Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.

D.

Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.

Question # 38

For this question, refer to the TerramEarth case study.

TerramEarth has equipped unconnected trucks with servers and sensors to collect telemetry data. Next year they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs. What should they do?

A.

Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google Cloud Storage (GCS) Nearline bucket.

B.

Push the telemetry data in real time to a streaming Dataflow job that compresses the data, and store it in Google BigQuery.

C.

Push the telemetry data in real time to a streaming Dataflow job that compresses the data, and store it in Cloud Bigtable.

D.

Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket.

Question # 39

Your agricultural division is experimenting with fully autonomous vehicles.

You want your architecture to promote strong security during vehicle operation.

Which two architectures should you consider?

Choose 2 answers:

A.

Treat every microservice call between modules on the vehicle as untrusted.

B.

Require IPv6 for connectivity to ensure a secure address space.

C.

Use a trusted platform module (TPM) and verify firmware and binaries on boot.

D.

Use a functional programming language to isolate code execution cycles.

E.

Use multiple connectivity subsystems for redundancy.

F.

Enclose the vehicle's drive electronics in a Faraday cage to isolate chips.

Question # 40

Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users. This behavior was not reported before the update. What strategy should you take?

A.

Work with your ISP to diagnose the problem.

B.

Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application.

C.

Roll back to an earlier known good release initially, then use Stackdriver Trace and logging to diagnose the problem in a development/test/staging environment.

D.

Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and logging to diagnose the problem.

Question # 41

For this question, refer to the TerramEarth case study.

To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections. What should you do?

A.

Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the ETL process using data in the bucket.

B.

Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in us, eu, and asia. Run the ETL process using the data in the bucket.

C.

Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket.

D.

Directly transfer the files to a different Google Cloud Regional Storage bucket location in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket.

Question # 42

For this question, refer to the TerramEarth case study

Your development team has created a structured API to retrieve vehicle data. They want to allow third parties to develop tools for dealerships that use this vehicle event data. You want to support delegated authorization against this data. What should you do?

A.

Build or leverage an OAuth-compatible access control system.

B.

Build SAML 2.0 SSO compatibility into your authentication system.

C.

Restrict data access based on the source IP address of the partner systems.

D.

Create secondary credentials for each dealer that can be given to the trusted third party.

Question # 43

For this question, refer to the TerramEarth case study.

Which of TerramEarth's legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?

A.

Opex/capex allocation, LAN changes, capacity planning

B.

Capacity planning, TCO calculations, opex/capex allocation

C.

Capacity planning, utilization measurement, data center expansion

D.

Data Center expansion, TCO calculations, utilization measurement

Question # 44

You are implementing Firestore for Mountkirk Games. Mountkirk Games wants to give a new game programmatic access to a legacy game's Firestore database. Access should be as restricted as possible. What should you do?

A.

Create a service account (SA) in the legacy game's Google Cloud project, add this SA in the new game's IAM page, and then give it the Firebase Admin role in both projects

B.

Create a service account (SA) in the legacy game's Google Cloud project, add a second SA in the new game's IAM page, and then give the Organization Admin role to both SAs

C.

Create a service account (SA) in the legacy game's Google Cloud project, give it the Firebase Admin role, and then migrate the new game to the legacy game's project.

D.

Create a service account (SA) in the legacy game's Google Cloud project, give the SA the Organization Admin role, and then give it the Firebase Admin role in both projects.

Question # 45

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the database workloads for your company, Mountkirk Games. Considering the business and technical requirements, what should you do?

A.

Use Cloud SQL for time series data, and use Cloud Bigtable for historical data queries.

B.

Use Cloud SQL to replace MySQL, and use Cloud Spanner for historical data queries.

C.

Use Cloud Bigtable to replace MySQL, and use BigQuery for historical data queries.

D.

Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery for historical data queries.

Question # 46

For this question, refer to the JencoMart case study.

The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least privilege model with separation of duties for administration between production and development resources. What Google domain and project structure should you recommend?

A.

Create two G Suite accounts to manage users: one for development/test/staging and one for production. Each account should contain one project for every application.

B.

Create two G Suite accounts to manage users: one with a single project for all development applications and one with a single project for all production applications.

C.

Create a single G Suite account to manage users with each stage of each application in its own project.

D.

Create a single G Suite account to manage users with one project for the development/test/staging environment and one project for the production environment.

Question # 47

For this question, refer to the JencoMart case study

A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database requests to the application servers correctly. What three steps should you take to diagnose the problem? Choose 3 answers

A.

Delete the virtual machine (VM) and disks and create a new one.

B.

Delete the instance, attach the disk to a new VM, and investigate.

C.

Take a snapshot of the disk and connect to a new machine to investigate.

D.

Check inbound firewall rules for the network the machine is connected to.

E.

Connect the machine to another network with very simple firewall rules and investigate.

F.

Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.

Question # 48

For this question, refer to the JencoMart case study.

The migration of JencoMart’s application to Google Cloud Platform (GCP) is progressing too slowly. The infrastructure is shown in the diagram. You want to maximize throughput. What are three potential bottlenecks? (Choose 3 answers.)

A.

A single VPN tunnel, which limits throughput

B.

A tier of Google Cloud Storage that is not suited for this task

C.

A copy command that is not suited to operate over long distances

D.

Fewer virtual machines (VMs) in GCP than on-premises machines

E.

A separate storage layer outside the VMs, which is not suited for this task

F.

Complicated internet connectivity between the on-premises infrastructure and GCP

Question # 49

For this question, refer to the JencoMart case study.

JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia. You want to measure success against their business and technical goals. Which metrics should you track?

A.

Error rates for requests from Asia

B.

Latency difference between US and Asia

C.

Total visits, error rates, and latency from Asia

D.

Total visits and average latency for users in Asia

E.

The number of character sets present in the database

Question # 50

For this question, refer to the JencoMart case study.

JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to Datastore to upload the data. What service account key-management strategy should you recommend?

A.

Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).

B.

Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs.

C.

Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs

D.

Deploy a custom authentication service on GCE/Google Container Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs.

Question # 51

For this question, refer to the JencoMart case study.

JencoMart wants to move their User Profiles database to Google Cloud Platform. Which Google Database should they use?

A.

Cloud Spanner

B.

Google BigQuery

C.

Google Cloud SQL

D.

Google Cloud Datastore

Question # 52

For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices.

Considering the technical requirements, which components should you use for the ingestion of the data?

A.

Google Kubernetes Engine with an SSL Ingress

B.

Cloud IoT Core with public/private key pairs

C.

Compute Engine with project-wide SSH keys

D.

Compute Engine with specific SSH keys
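
For reference, option B's per-device key model looked roughly like this in the (since retired) Cloud IoT Core CLI; the registry, topic, device, and key file names are placeholders:

gcloud iot registries create vehicle-registry \
--region us-central1 \
--event-notification-config topic=telemetry-topic
gcloud iot devices create vehicle-12345 \
--region us-central1 --registry vehicle-registry \
--public-key path=ec_public.pem,type=es256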

Question # 53

For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP solution for the data warehouse for your company, TerramEarth. Considering the TerramEarth business and technical requirements, what should you do?

A.

Replace the existing data warehouse with BigQuery. Use table partitioning.

B.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.

C.

Replace the existing data warehouse with BigQuery. Use federated data sources.

D.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional Compute Engine pre-emptible instance with 32 CPUs.
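
A minimal sketch of option A's table partitioning, assuming a hypothetical dataset and schema:

# create a telemetry table partitioned by day on the event timestamp
bq mk --table \
--time_partitioning_field event_ts \
--time_partitioning_type DAY \
terramearth_dw.telemetry \
vehicle_id:STRING,event_ts:TIMESTAMP,metric:STRING,value:FLOAT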

Question # 54

For this question, refer to the TerramEarth case study.

You start to build a new application that uses a few Cloud Functions for the backend. One use case requires a Cloud Function func_display to invoke another Cloud Function func_query. You want func_query only to accept invocations from func_display. You also want to follow Google's recommended best practices. What should you do?

A.

Create a token and pass it in as an environment variable to func_display. When invoking func_query, include the token in the request. Pass the same token to func_query and reject the invocation if the tokens are different.

B.

Make func_query 'Require authentication.' Create a unique service account and associate it to func_display. Grant the service account the invoker role for func_query. Create an ID token in func_display, and include the token in the request when invoking func_query.

C.

Make func_query 'Require authentication' and only accept internal traffic. Create those two functions in the same VPC. Create an ingress firewall rule for func_query to only allow traffic from func_display.

D.

Create those two functions in the same project and VPC. Make func_query only accept internal traffic. Create an ingress firewall for func_query to only allow traffic from func_display. Also, make sure both functions use the same service account.
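
Option B can be sketched as follows; the runtime, project, and service account names are placeholders:

# deploy func_query so unauthenticated calls are rejected
gcloud functions deploy func_query --runtime python310 \
--trigger-http --no-allow-unauthenticated
# allow only func_display's dedicated service account to invoke it
gcloud functions add-iam-policy-binding func_query \
--member serviceAccount:display-sa@my-project.iam.gserviceaccount.com \
--role roles/cloudfunctions.invoker
# func_display then fetches an ID token for func_query's URL from the
# metadata server and sends it as "Authorization: Bearer <token>"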

Question # 55

For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers, and season ticket holders. You need to implement a custom card tokenization service that meets the following requirements:

• It must provide low latency at minimal cost.

• It must be able to identify duplicate credit cards and must not store plaintext card numbers.

• It should support annual key rotation.

Which storage approach should you adopt for your tokenization service?

A.

Store the card data in Secret Manager after running a query to identify duplicates.

B.

Encrypt the card data with a deterministic algorithm stored in Firestore using Datastore mode.

C.

Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.

D.

Use column-level encryption to store the data in Cloud SQL.

Question # 56

For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user experience, HRL has partnered with the Content Delivery Network provider Fastly. HRL needs to allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network). You are a member of the HRL security team, and you need to configure the update that will allow only the Fastly IP address ranges through the External HTTP(S) load balancer. Which command should you use?

A.

gcloud compute firewall-rules update hlr-policy \
--priority 1000 \
--target-tags sourceiplist-fastly \
--allow tcp:443

B.

gcloud compute security-policies rules update 1000 \
--security-policy hlr-policy \
--expression "evaluatePreconfiguredExpr('sourceiplist-fastly')" \
--action "allow"

C.

gcloud compute firewall-rules update sourceiplist-fastly \
--priority 1000 \
--allow tcp:443

D.

gcloud compute priority-policies rules update 1000 \
--security-policy from-fastly \
--src-ip-ranges \
--action "allow"

Question # 57

For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google’s AI Platform so HRL can understand and interpret the predictions. What should you do?

A.

Use Explainable AI.

B.

Use Vision AI.

C.

Use Google Cloud’s operations suite.

D.

Use Jupyter Notebooks.

Question # 58

For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?

A.

Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.

B.

Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.

C.

Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.

D.

Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.

Question # 59

For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-effective approach for storing their race data such as telemetry. They want to keep all historical records, train models using only the previous season's data, and plan for data growth in terms of volume and information collected. You need to propose a data solution. Considering HRL business requirements and the goals expressed by CEO S. Hawke, what should you do?

A.

Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data by season and event.

B.

Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using season as a primary key.

C.

Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.

D.

Use Cloud SQL for its ability to automatically manage storage increases and compatibility with MySQL. Use separate database instances for each season.

Question # 60

For this question, refer to the Helicopter Racing League (HRL) case study. A recent finance audit of cloud infrastructure noted an exceptionally high number of Compute Engine instances are allocated to do video encoding and transcoding. You suspect that these virtual machines are zombie machines that were not deleted after their workloads completed. You need to quickly get a list of which VM instances are idle. What should you do?

A.

Log into each Compute Engine instance and collect disk, CPU, memory, and network usage statistics for analysis.

B.

Use the gcloud compute instances list command to list the virtual machine instances that have the idle: true label set.

C.

Use the gcloud recommender command to list the idle virtual machine instances.

D.

From the Google Console, identify which Compute Engine instances in the managed instance groups are no longer responding to health check probes.
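
For reference, the idle-VM recommender behind option C can be queried per zone; the project and zone are placeholders:

gcloud recommender recommendations list \
--project my-project \
--location us-central1-a \
--recommender google.compute.instance.IdleResourceRecommender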

Question # 61

For this question, refer to the Dress4Win case study. To be legally compliant during an audit, Dress4Win must be able to give insight into all administrative actions that modify the configuration or metadata of resources on Google Cloud.

What should you do?

A.

Use Stackdriver Trace to create a trace list analysis.

B.

Use Stackdriver Monitoring to create a dashboard on the project’s activity.

C.

Enable Cloud Identity-Aware Proxy in all projects, and add the group of Administrators as a member.

D.

Use the Activity page in the GCP Console and Stackdriver Logging to provide the required insight.
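
Admin Activity audit logs, which back option D, record configuration and metadata changes and can also be queried directly; the project ID below is a placeholder:

gcloud logging read \
'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"' \
--limit 20 --project my-project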

Question # 62

For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months. How will you configure the solution to scale for this growth without making major application changes and still maximize the ROI?

A.

Migrate the web application layer to App Engine, and MySQL to Cloud Datastore, and NAS to Cloud Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.

B.

Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.

C.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk storage.

D.

Implement managed instance groups for the Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.

Question # 63

For this question, refer to the Dress4Win case study. Which of the compute services should be migrated as-is and would still be an optimized architecture for performance in the cloud?

A.

Web applications deployed using App Engine standard environment

B.

RabbitMQ deployed using an unmanaged instance group

C.

Hadoop/Spark deployed using Cloud Dataproc Regional in High Availability mode

D.

Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types

Full Access