
Professional-Cloud-Developer Questions and Answers

Question # 6

You are a developer at a large organization. Your team uses Git for source code management (SCM). You want to ensure that your team follows Google-recommended best practices to manage code to drive higher rates of software delivery. Which SCM process should your team use?

A.

Each group of developers creates a feature branch from the main branch for their work, commits their changes to their branch, and merges their code into the main branch before each major release.

B.

Each developer commits their code to the main branch before each product release, conducts testing, and rolls back if integration issues are detected.

C.

Each group of developers copies the repository, commits their changes to their repository, and merges their code into the main repository before each product release.

D.

Each developer creates a branch for their own work, commits their changes to their branch, and merges their code into the main branch after peer review.

Question # 7

You recently developed a new application. You want to deploy the application on Cloud Run without a Dockerfile. Your organization requires that all container images are pushed to a centrally managed container repository. How should you build your container using Google Cloud services? (Choose two.)

A.

Push your source code to Artifact Registry.

B.

Submit a Cloud Build job to push the image.

C.

Use the pack build command with the pack CLI.

D.

Include the --source flag with the gcloud run deploy CLI command.

E.

Include the --platform=kubernetes flag with the gcloud run deploy CLI command.

Question # 8

You are developing a flower ordering application. Currently, you have three microservices:

• Order Service (receives the orders).

• Order Fulfillment Service (processes the orders).

• Notification Service (notifies the customer when the order is filled).

You need to determine how the services will communicate with each other. You want incoming orders to be processed quickly, and you need to collect order information for fulfillment. You also want to make sure that orders are not lost between your services and that the services are able to communicate asynchronously. How should the requests be processed?

A.

B.

C.

D.

Question # 9

You have an application deployed in Google Kubernetes Engine (GKE). You need to update the application to make authorized requests to Google Cloud managed services. You want this to be a one-time setup, and you need to follow security best practices of auto-rotating your security keys and storing them in an encrypted store. You already created a service account with appropriate access to the Google Cloud service. What should you do next?

A.

Assign the Google Cloud service account to your GKE Pod using Workload Identity.

B.

Export the Google Cloud service account, and share it with the Pod as a Kubernetes Secret.

C.

Export the Google Cloud service account, and embed it in the source code of the application.

D.

Export the Google Cloud service account, and upload it to HashiCorp Vault to generate a dynamic service account for your application.

Question # 10

You support an application that uses the Cloud Storage API. You review the logs and discover multiple HTTP 503 Service Unavailable error responses from the API. Your application logs the error and does not take any further action. You want to implement Google-recommended retry logic to improve success rates. Which approach should you take?

A.

Retry the failures in batch after a set number of failures is logged.

B.

Retry each failure at a set time interval up to a maximum number of times.

C.

Retry each failure at increasing time intervals up to a maximum number of tries.

D.

Retry each failure at decreasing time intervals up to a maximum number of tries.
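Option C describes truncated exponential backoff. A minimal Python sketch of that retry pattern, where `call_api` is a stand-in for the real Cloud Storage request:

```python
import random
import time

def call_with_backoff(call_api, max_tries=5, base_delay=1.0, max_delay=32.0):
    """Retry a flaky call with truncated exponential backoff plus jitter."""
    for attempt in range(max_tries):
        try:
            return call_api()
        except IOError:
            if attempt == max_tries - 1:
                raise  # out of retries; surface the error to the caller
            # Delay doubles each attempt, capped at max_delay, plus random jitter
            # so many clients retrying at once do not synchronize.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, 1))
```

In practice the Cloud Storage client libraries ship with a configurable retry policy; prefer configuring that over hand-rolling the loop.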

Question # 11

You have a mixture of packaged and internally developed applications hosted on a Compute Engine instance that is running Linux. These applications write log records as text in local files. You want the logs to be written to Cloud Logging. What should you do?

A.

Pipe the content of the files to the Linux Syslog daemon.

B.

Install a Google version of fluentd on the Compute Engine instance.

C.

Install a Google version of collectd on the Compute Engine instance.

D.

Using cron, schedule a job to copy the log files to Cloud Storage once a day.

Question # 12

You are a developer at a social media company. The company runs their social media website on-premises and uses MySQL as a backend to store user profiles and user posts. Your company plans to migrate to Google Cloud, and your team will migrate user profile information to Firestore. You are tasked with designing the Firestore collections. What should you do?

A.

Create one root collection for user profiles, and store each user's post as a nested list in the user profile document.

B.

Create one root collection for user profiles, and create one root collection for user posts.

C.

Create one root collection for user profiles, and create one subcollection for each user's posts.

D.

Create one root collection for user posts, and create one subcollection for each user's profile.
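Option C's layout places each user's posts in a subcollection under that user's profile document. A sketch of the resulting document paths (the collection names `users` and `posts` are illustrative, not from the question):

```python
def profile_path(user_id: str) -> str:
    """Document path for a user profile in the root 'users' collection."""
    return f"users/{user_id}"

def post_path(user_id: str, post_id: str) -> str:
    """Document path for a post in a per-user 'posts' subcollection."""
    return f"users/{user_id}/posts/{post_id}"
```

Subcollections keep profile documents small (Firestore caps a single document at 1 MiB), while collection group queries can still search across every user's posts at once.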

Question # 13

You manage a microservice-based ecommerce platform on Google Cloud that sends confirmation emails to a third-party email service provider using a Cloud Function. Your company just launched a marketing campaign, and some customers are reporting that they have not received order confirmation emails. You discover that the services triggering the Cloud Function are receiving HTTP 500 errors. You need to change the way emails are handled to minimize email loss. What should you do?

A.

Increase the Cloud Function's timeout to nine minutes.

B.

Configure the sender application to publish the outgoing emails in a message to a Pub/Sub topic. Update the Cloud Function configuration to consume messages from the Pub/Sub topic.

C.

Configure the sender application to write emails to Memorystore and then trigger the Cloud Function. When the function is triggered, it reads the email details from Memorystore and sends them to the email service.

D.

Configure the sender application to retry the execution of the Cloud Function every one second if a request fails.
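Option B decouples the sender from the email provider with a durable queue: the sender publishes and returns immediately, and delivery is retried from the topic. An in-memory sketch of the pattern (the `queue.Queue` is only a stand-in; a real deployment publishes with the google-cloud-pubsub client and relies on Pub/Sub's at-least-once delivery for durability):

```python
import queue

email_topic = queue.Queue()  # stand-in for a Pub/Sub topic

def publish_email(message: dict) -> None:
    """Sender: enqueue and return; no email API call in the request path."""
    email_topic.put(message)

def email_worker(send) -> int:
    """Subscriber (the Cloud Function): drain the queue, re-enqueue on failure."""
    delivered = 0
    while not email_topic.empty():
        msg = email_topic.get()
        try:
            send(msg)
            delivered += 1
        except IOError:
            email_topic.put(msg)  # nack: message stays queued for a later retry
            break
    return delivered
```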

Question # 14

Your application is logging to Stackdriver. You want to get the count of all requests on all /api/alpha/* endpoints.

What should you do?

A.

Add a Stackdriver counter metric for path:/api/alpha/.

B.

Add a Stackdriver counter metric for endpoint:/api/alpha/*.

C.

Export the logs to Cloud Storage and count lines matching /api/alpha.

D.

Export the logs to Cloud Pub/Sub and count lines matching /api/alpha.
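A log-based counter metric (the idea behind options A and B) counts every entry whose request path falls under /api/alpha/. The filter logic amounts to a prefix match, sketched here over plain log records (the `path` field name is illustrative; Cloud Logging's HTTP entries expose the URL under `httpRequest`):

```python
def count_alpha_requests(log_entries):
    """Count log entries whose HTTP path is under /api/alpha/."""
    return sum(
        1
        for entry in log_entries
        if entry.get("path", "").startswith("/api/alpha/")
    )
```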

Question # 15

You have deployed a Java application to Cloud Run. Your application requires access to a database hosted on Cloud SQL. Due to regulatory requirements, your connection to the Cloud SQL instance must use its internal IP address. How should you configure the connectivity while following Google-recommended best practices?

A.

Configure your Cloud Run service with a Cloud SQL connection.

B.

Configure your Cloud Run service to use a Serverless VPC Access connector.

C.

Configure your application to use the Cloud SQL Java connector.

D.

Configure your application to connect to an instance of the Cloud SQL Auth proxy.

Question # 16

You recently joined a new team that has a Cloud Spanner database instance running in production. Your manager has asked you to optimize the Spanner instance to reduce cost while maintaining high reliability and availability of the database. What should you do?

A.

Use Cloud Logging to check for error logs, and reduce Spanner processing units by small increments until you find the minimum capacity required.

B.

Use Cloud Trace to monitor the rate of incoming requests to Spanner, and reduce Spanner processing units by small increments until you find the minimum capacity required.

C.

Use Cloud Monitoring to monitor the CPU utilization, and reduce Spanner processing units by small increments until you find the minimum capacity required.

D.

Use Snapshot Debugger to check for application errors, and reduce Spanner processing units by small increments until you find the minimum capacity required.

Question # 17

You are building a mobile application that will store hierarchical data structures in a database. The application will enable users working offline to sync changes when they are back online. A backend service will enrich the data in the database using a service account. The application is expected to be very popular and needs to scale seamlessly and securely. Which database and IAM role should you use?

A.

Use Cloud SQL, and assign the roles/cloudsql.editor role to the service account.

B.

Use Bigtable, and assign the roles/bigtable.viewer role to the service account.

C.

Use Firestore in Native mode and assign the roles/datastore.user role to the service account.

D.

Use Firestore in Datastore mode and assign the roles/datastore.viewer role to the service account.

Question # 18

You are in the final stage of migrating an on-premises data center to Google Cloud. You are quickly approaching your deadline, and discover that a web API is running on a server slated for decommissioning. You need to recommend a solution to modernize this API while migrating to Google Cloud. The modernized web API must meet the following requirements:

• Autoscales during high traffic periods at the end of each month

• Written in Python 3.x

• Developers must be able to rapidly deploy new versions in response to frequent code changes

You want to minimize cost, effort, and operational overhead of this migration. What should you do?

A.

Modernize and deploy the code on App Engine flexible environment.

B.

Modernize and deploy the code on App Engine standard environment.

C.

Deploy the modernized application to an n1-standard-1 Compute Engine instance.

D.

Ask the development team to re-write the application to run as a Docker container on Google Kubernetes Engine.

Question # 19

You manage your company's ecommerce platform's payment system, which runs on Google Cloud. Your company must retain user logs for 1 year for internal auditing purposes and for 3 years to meet compliance requirements. You need to store new user logs on Google Cloud to minimize on-premises storage usage and ensure that they are easily searchable. You want to minimize effort while ensuring that the logs are stored correctly. What should you do?

A.

Store the logs in a Cloud Storage bucket with bucket lock turned on.

B.

Store the logs in a Cloud Storage bucket with a 3-year retention period.

C.

Store the logs in Cloud Logging as custom logs with a custom retention period.

D.

Store the logs in a Cloud Storage bucket with a 1-year retention period. After 1 year, move the logs to another bucket with a 2-year retention period.

Question # 20

You have an ecommerce application hosted in Google Kubernetes Engine (GKE) that receives external requests and forwards them to third-party APIs external to Google Cloud. The third-party APIs are responsible for credit card processing, shipping, and inventory management using the process shown in the diagram.

Your customers are reporting that the ecommerce application is running slowly at unpredictable times. The application doesn't report any metrics. You need to determine the cause of the inconsistent performance. What should you do?

A.

Install the Ops Agent inside your container and configure it to gather application metrics.

B.

Install the OpenTelemetry library for your respective language, and instrument your application.

C.

Modify your application to read and forward the X-Cloud-Trace-Context header when it calls the downstream services.

D.

Enable Managed Service for Prometheus on the GKE cluster to gather application metrics.

Question # 21

You are working on a new application that is deployed on Cloud Run and uses Cloud Functions. Each time new features are added, new Cloud Functions and Cloud Run services are deployed. You use ENV variables to keep track of the services and enable interservice communication, but the maintenance of the ENV variables has become difficult. You want to implement dynamic discovery in a scalable way. What should you do?

A.

Create a Service Directory namespace. Use API calls to register the services during deployment, and query during runtime.

B.

Configure your microservices to use the Cloud Run Admin and Cloud Functions APIs to query for deployed Cloud Run services and Cloud Functions in the Google Cloud project.

C.

Deploy HashiCorp Consul on a single Compute Engine instance. Register the services with Consul during deployment, and query during runtime.

D.

Rename the Cloud Functions and Cloud Run services endpoints using a well-documented naming convention.

Question # 22

The new version of your containerized application has been tested and is ready to deploy to production on Google Kubernetes Engine. You were not able to fully load-test the new version in pre-production environments, and you need to make sure that it does not have performance problems once deployed. Your deployment must be automated. What should you do?

A.

Use Cloud Load Balancing to slowly ramp up traffic between versions. Use Cloud Monitoring to look for performance issues.

B.

Deploy the application via a continuous delivery pipeline using canary deployments. Use Cloud Monitoring to look for performance issues, and ramp up traffic as the metrics support it.

C.

Deploy the application via a continuous delivery pipeline using blue/green deployments. Use Cloud Monitoring to look for performance issues, and launch fully when the metrics support it.

D.

Deploy the application using kubectl and set spec.updateStrategy.type to RollingUpdate. Use Cloud Monitoring to look for performance issues, and run the kubectl rollout undo command if there are any issues.
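A canary deployment (option B) sends a small, adjustable fraction of traffic to the new version and ramps up only while the metrics stay healthy. A deterministic routing sketch of the weighted split (an illustration of the idea, not how the GKE load balancer implements it; hashing a request or user ID keeps a given client pinned to one version during the rollout):

```python
import hashlib

def route(request_id: str, canary_percent: int) -> str:
    """Route a request to 'canary' or 'stable' based on a stable hash bucket."""
    # The same request_id always lands in the same bucket 0-99.
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"
```

Ramping up is then just raising `canary_percent` in steps (e.g. 1 → 10 → 50 → 100) as Cloud Monitoring confirms the canary is healthy.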

Question # 23

Your application requires service accounts to be authenticated to GCP products via credentials stored on its host Compute Engine virtual machine instances. You want to distribute these credentials to the host instances as securely as possible. What should you do?

A.

Use HTTP signed URLs to securely provide access to the required resources.

B.

Use the instance’s service account Application Default Credentials to authenticate to the required resources.

C.

Generate a P12 file from the GCP Console after the instance is deployed, and copy the credentials to the host instance before starting the application.

D.

Commit the credential JSON file into your application’s source repository, and have your CI/CD process package it with the software that is deployed to the instance.

Question # 24

You have an application deployed in Google Kubernetes Engine (GKE) that reads and processes Pub/Sub messages. Each Pod handles a fixed number of messages per minute. The rate at which messages are published to the Pub/Sub topic varies considerably throughout the day and week, including occasional large batches of messages published at a single moment.

You want to scale your GKE Deployment to be able to process messages in a timely manner. What GKE feature should you use to automatically adapt your workload?

A.

Vertical Pod Autoscaler in Auto mode

B.

Vertical Pod Autoscaler in Recommendation mode

C.

Horizontal Pod Autoscaler based on an external metric

D.

Horizontal Pod Autoscaler based on resources utilization
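Option C scales on an external metric such as the Pub/Sub subscription's undelivered-message count. With a target of type AverageValue, the Horizontal Pod Autoscaler effectively computes desiredReplicas = ceil(totalMetricValue / targetAveragePerReplica), clamped to the configured bounds; sketched in Python:

```python
import math

def desired_replicas(backlog: float, target_per_replica: float,
                     min_replicas: int = 1, max_replicas: int = 100) -> int:
    """HPA AverageValue scaling: enough replicas to keep the per-replica
    share of the backlog at or below the target, within the replica bounds."""
    desired = math.ceil(backlog / target_per_replica)
    return max(min_replicas, min(max_replicas, desired))
```

A CPU-based HPA (option D) would lag here, because a Pod consuming a fixed number of messages per minute shows steady CPU even as the backlog grows.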

Question # 25

Your organization has recently begun an initiative to replatform their legacy applications onto Google Kubernetes Engine. You need to decompose a monolithic application into microservices. Multiple instances have read and write access to a configuration file, which is stored on a shared file system. You want to minimize the effort required to manage this transition, and you want to avoid rewriting the application code. What should you do?

A.

Create a new Cloud Storage bucket, and mount it via FUSE in the container.

B.

Create a new persistent disk, and mount the volume as a shared PersistentVolume.

C.

Create a new Filestore instance, and mount the volume as an NFS PersistentVolume.

D.

Create a new ConfigMap and volumeMount to store the contents of the configuration file.

Question # 26

You are developing a microservice-based application that will run on Google Kubernetes Engine (GKE). Some of the services need to access different Google Cloud APIs. How should you set up authentication of these services in the cluster following Google-recommended best practices? (Choose two.)

A.

Use the service account attached to the GKE node.

B.

Enable Workload Identity in the cluster via the gcloud command-line tool.

C.

Access the Google service account keys from a secret management service.

D.

Store the Google service account keys in a central secret management service.

E.

Use gcloud to bind the Kubernetes service account and the Google service account using roles/iam.workloadIdentityUser.

Question # 27

You developed a JavaScript web application that needs to access Google Drive’s API and obtain permission from users to store files in their Google Drives. You need to select an authorization approach for your application. What should you do?

A.

Create an API key.

B.

Create a SAML token.

C.

Create a service account.

D.

Create an OAuth Client ID.

Question # 28

You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster. Your application can scale horizontally, and each instance of your application needs to have a stable network identity and its own persistent disk.

Which GKE object should you use?

A.

Deployment

B.

StatefulSet

C.

ReplicaSet

D.

ReplicationController

Question # 29

You are using Cloud Build for your CI/CD pipeline to complete several tasks, including copying certain files to Compute Engine virtual machines. Your pipeline requires a flat file that is generated in one builder in the pipeline to be accessible by subsequent builders in the same pipeline. How should you store the file so that all the builders in the pipeline can access it?

A.

Store and retrieve the file contents using Compute Engine instance metadata.

B.

Output the file contents to a file in /workspace. Read from the same /workspace file in the subsequent build step.

C.

Use gsutil to output the file contents to a Cloud Storage object. Read from the same object in the subsequent build step.

D.

Add a build argument that runs an HTTP POST via curl to a separate web server to persist the value in one builder. Use an HTTP GET via curl from the subsequent build step to read the value.

Question # 30

You are developing a single-player mobile game backend that has unpredictable traffic patterns as users interact with the game throughout the day and night. You want to optimize costs by ensuring that you have enough resources to handle requests, but minimize over-provisioning. You also want the system to handle traffic spikes efficiently. Which compute platform should you use?

A.

Cloud Run

B.

Compute Engine with managed instance groups

C.

Compute Engine with unmanaged instance groups

D.

Google Kubernetes Engine using cluster autoscaling

Question # 31

You need to containerize a web application that will be hosted on Google Cloud behind a global load balancer with SSL certificates. You don't have the time to develop authentication at the application level, and you want to offload SSL encryption and management from your application. You want to configure the architecture using managed services where possible. What should you do?

A.

Host the application on Compute Engine, and configure Cloud Endpoints for your application.

B.

Host the application on Google Kubernetes Engine and use Identity-Aware Proxy (IAP) with Cloud Load Balancing and Google-managed certificates.

C.

Host the application on Google Kubernetes Engine, and deploy an NGINX Ingress Controller to handle authentication.

D.

Host the application on Google Kubernetes Engine, and deploy cert-manager to manage SSL certificates.

Question # 32

Your security team is auditing all deployed applications running in Google Kubernetes Engine. After completing the audit, your team discovers that some of the applications send traffic within the cluster in clear text. You need to ensure that all application traffic is encrypted as quickly as possible while minimizing changes to your applications and maintaining support from Google. What should you do?

A.

Use Network Policies to block traffic between applications.

B.

Install Istio, enable proxy injection on your application namespace, and then enable mTLS.

C.

Define Trusted Network ranges within the application, and configure the applications to allow traffic only from those networks.

D.

Use an automated process to request SSL Certificates for your applications from Let’s Encrypt and add them to your applications.

Question # 33

You are reviewing and updating your Cloud Build steps to adhere to Google-recommended practices. Currently, your build steps include:

1. Pull the source code from a source repository.

2. Build a container image.

3. Upload the built image to Artifact Registry.

You need to add a step to perform a vulnerability scan of the built container image, and you want the results of the scan to be available to your deployment pipeline running in Google Cloud. You want to minimize changes that could disrupt other teams' processes. What should you do?

A.

Enable Binary Authorization, and configure it to attest that no vulnerabilities exist in a container image.

B.

Enable the Container Scanning API in Artifact Registry, and scan the built container images for vulnerabilities.

C.

Upload the built container images to your Docker Hub instance, and scan them for vulnerabilities.

D.

Add Artifact Registry to your Aqua Security instance, and scan the built container images for vulnerabilities.

Question # 34

You recently deployed a Go application on Google Kubernetes Engine (GKE). The operations team has noticed that the application's CPU usage is high even when there is low production traffic. The operations team has asked you to optimize your application's CPU resource consumption. You want to determine which Go functions consume the largest amount of CPU. What should you do?

A.

Deploy a Fluent Bit daemonset on the GKE cluster to log data in Cloud Logging. Analyze the logs to get insights into your application code’s performance.

B.

Create a custom dashboard in Cloud Monitoring to evaluate the CPU performance metrics of your application.

C.

Connect to your GKE nodes using SSH. Run the top command on the shell to extract the CPU utilization of your application.

D.

Modify your Go application to capture profiling data. Analyze the CPU metrics of your application in flame graphs in Profiler.
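Option D relies on CPU profiling: in Go that means attaching the Cloud Profiler agent and reading its flame graphs. The underlying idea, ranking functions by CPU time, can be sketched with Python's built-in profiler (the workload function is a placeholder):

```python
import cProfile
import io
import pstats

def hot_loop():
    """A deliberately CPU-heavy function to show up in the profile."""
    return sum(i * i for i in range(200_000))

def top_functions(workload, n=5):
    """Profile a workload and report the n entries with the most cumulative time."""
    profiler = cProfile.Profile()
    profiler.enable()
    workload()
    profiler.disable()
    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(n)
    return stream.getvalue()
```

Logs, dashboards, and `top` (options A-C) show that CPU is high, but only a profiler attributes the time to specific functions in the code.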

Question # 35

You have an HTTP Cloud Function that is called via POST. Each submission’s request body has a flat, unnested JSON structure containing numeric and text data. After the Cloud Function completes, the collected data should be immediately available for ongoing and complex analytics by many users in parallel. How should you persist the submissions?

A.

Directly persist each POST request’s JSON data into Datastore.

B.

Transform the POST request’s JSON data, and stream it into BigQuery.

C.

Transform the POST request’s JSON data, and store it in a regional Cloud SQL cluster.

D.

Persist each POST request’s JSON data as an individual file within Cloud Storage, with the file name containing the request identifier.

Question # 36

Your team develops services that run on Google Cloud. You need to build a data processing service and will use Cloud Functions. The data to be processed by the function is sensitive. You need to ensure that invocations can only happen from authorized services and follow Google-recommended best practices for securing functions. What should you do?

A.

Enable Identity-Aware Proxy in your project. Secure function access using its permissions.

B.

Create a service account with the Cloud Functions Viewer role. Use that service account to invoke the function.

C.

Create a service account with the Cloud Functions Invoker role. Use that service account to invoke the function.

D.

Create an OAuth 2.0 client ID for your calling service in the same project as the function you want to secure. Use those credentials to invoke the function.

Question # 37

Your development team has been tasked with maintaining a .NET legacy application. The application incurs occasional changes and was recently updated. Your goal is to ensure that the application provides consistent results while moving through the CI/CD pipeline from environment to environment. You want to minimize the cost of deployment while making sure that external factors and dependencies between hosting environments are not problematic. Containers are not yet approved in your organization. What should you do?

A.

Rewrite the application using .NET Core, and deploy to Cloud Run. Use revisions to separate the environments.

B.

Use Cloud Build to deploy the application as a new Compute Engine image for each build. Use this image in each environment.

C.

Deploy the application using MS Web Deploy, and make sure to always use the latest, patched MS Windows Server base image in Compute Engine.

D.

Use Cloud Build to package the application, and deploy to a Google Kubernetes Engine cluster. Use namespaces to separate the environments.

Question # 38

You want to notify on-call engineers about a service degradation in production while minimizing development time.

What should you do?

A.

Use Cloud Function to monitor resources and raise alerts.

B.

Use Cloud Pub/Sub to monitor resources and raise alerts.

C.

Use Stackdriver Error Reporting to capture errors and raise alerts.

D.

Use Stackdriver Monitoring to monitor resources and raise alerts.

Question # 39

Your company’s corporate policy states that there must be a copyright comment at the very beginning of all source files. You want to write a custom step in Cloud Build that is triggered by each source commit. You need the step to validate that the source contains a copyright comment, and to add one for subsequent steps if it is missing. What should you do?

A.

Build a new Docker container that examines the files in /workspace and then checks and adds a copyright for each source file. Changed files are explicitly committed back to the source repository.

B.

Build a new Docker container that examines the files in /workspace and then checks and adds a copyright for each source file. Changed files do not need to be committed back to the source repository.

C.

Build a new Docker container that examines the files in a Cloud Storage bucket and then checks and adds a copyright for each source file. Changed files are written back to the Cloud Storage bucket.

D.

Build a new Docker container that examines the files in a Cloud Storage bucket and then checks and adds a copyright for each source file. Changed files are explicitly committed back to the source repository.
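Whichever option you choose, the custom step itself boils down to scanning the build's working directory and prepending the header where it is missing. A minimal sketch (the header text, the directory argument, and the .py-only filter are placeholder assumptions):

```python
from pathlib import Path

# Placeholder header; a real step would use the company's actual notice.
COPYRIGHT = "# Copyright 2024 Example Corp.\n"

def ensure_copyright(workspace: str) -> list[str]:
    """Prepend the copyright comment to any .py source file that lacks it,
    and return the names of the files that were changed."""
    fixed = []
    for path in Path(workspace).rglob("*.py"):
        text = path.read_text()
        if not text.startswith(COPYRIGHT):
            path.write_text(COPYRIGHT + text)
            fixed.append(path.name)
    return sorted(fixed)
```

In Cloud Build, files written under /workspace persist between build steps, which is why a step can modify sources for the steps that follow it.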

Question # 40

You are a developer at a large corporation. You manage three Google Kubernetes Engine clusters. Your team's developers need to switch from one cluster to another regularly without losing access to their preferred development tools. You want to configure access to these clusters using the fewest number of steps while following Google-recommended best practices. What should you do?

A.

Ask the developers to use Cloud Shell and run gcloud container clusters get-credentials to switch to another cluster.

B.

Ask the developers to open three terminals on their workstation and use kubectl config to configure access to each cluster.

C.

Ask the developers to install the gcloud CLI on their workstation and run gcloud container clusters get-credentials to switch to another cluster.

D.

In a configuration file, define the clusters, users, and contexts. Email the file to the developers and ask them to use kubectl config to add the cluster, user, and context details.

Question # 41

Which service should HipLocal use to enable access to internal apps?

A.

Cloud VPN

B.

Cloud Armor

C.

Virtual Private Cloud

D.

Cloud Identity-Aware Proxy

Question # 42

HipLocal wants to reduce the number of on-call engineers and eliminate manual scaling.

Which two services should they choose? (Choose two.)

A.

Use Google App Engine services.

B.

Use serverless Google Cloud Functions.

C.

Use Knative to build and deploy serverless applications.

D.

Use Google Kubernetes Engine for automated deployments.

E.

Use a large Google Compute Engine cluster for deployments.

Question # 43

In order to meet their business requirements, how should HipLocal store their application state?

A.

Use local SSDs to store state.

B.

Put a memcache layer in front of MySQL.

C.

Move the state storage to Cloud Spanner.

D.

Replace the MySQL instance with Cloud SQL.

Question # 44

For this question, refer to the HipLocal case study.

Which Google Cloud product addresses HipLocal’s business requirements for service level indicators and objectives?

A.

Cloud Profiler

B.

Cloud Monitoring

C.

Cloud Trace

D.

Cloud Logging

Question # 45

For this question, refer to the HipLocal case study.

HipLocal is expanding into new locations. They must capture additional data each time the application is launched in a new European country. This is causing delays in the development process due to constant schema changes and a lack of environments for conducting testing on the application changes. How should they resolve the issue while meeting the business requirements?

A.

Create new Cloud SQL instances in Europe and North America for testing and deployment. Provide developers with local MySQL instances to conduct testing on the application changes.

B.

Migrate data to Bigtable. Instruct the development teams to use the Cloud SDK to emulate a local Bigtable development environment.

C.

Move from Cloud SQL to MySQL hosted on Compute Engine. Replicate hosts across regions in the Americas and Europe. Provide developers with local MySQL instances to conduct testing on the application changes.

D.

Migrate data to Firestore in Native mode and set up instan

Question # 46

For this question, refer to the HipLocal case study.

HipLocal's application uses Cloud Client Libraries to interact with Google Cloud. HipLocal needs to configure authentication and authorization in the Cloud Client Libraries to implement least privileged access for the application. What should they do?

A.

Create an API key. Use the API key to interact with Google Cloud.

B.

Use the default compute service account to interact with Google Cloud.

C.

Create a service account for the application. Export and deploy the private key for the application. Use the service account to interact with Google Cloud.

D.

Create a service account for the application and for each Google Cloud API used by the application. Export and deploy the private keys used by the application. Use the service account with one Google Cloud API to interact with Google Cloud.

Question # 47

HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in order to query data stored on persistent disks.

Which IP strategy should they use?

A.

Create manual subnets.

B.

Create an auto mode subnet.

C.

Create multiple peered VPCs.

D.

Provision a single instance for NAT.
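For context on the subnet strategies listed above, a custom-mode VPC with manually created subnets can be sketched with the gcloud CLI. The network name, region, and CIDR range here are illustrative assumptions:

```shell
# Sketch: custom-mode VPC with a manually defined subnet
# (names, region, and range are hypothetical).
gcloud compute networks create hadoop-vpc --subnet-mode=custom

gcloud compute networks subnets create hadoop-subnet \
    --network=hadoop-vpc \
    --region=us-central1 \
    --range=10.10.0.0/20
```

Manually chosen ranges make it possible to avoid overlap with the on-premises address space reached over Cloud Interconnect.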

Question # 48

Which database should HipLocal use for storing user activity?

A.

BigQuery

B.

Cloud SQL

C.

Cloud Spanner

D.

Cloud Datastore

Question # 49

In order for HipLocal to store application state and meet their stated business requirements, which database service should they migrate to?

A.

Cloud Spanner

B.

Cloud Datastore

C.

Cloud Memorystore as a cache

D.

Separate Cloud SQL clusters for each region

Question # 50

HipLocal’s data science team wants to analyze user reviews.

How should they prepare the data?

A.

Use the Cloud Data Loss Prevention API for redaction of the review dataset.

B.

Use the Cloud Data Loss Prevention API for de-identification of the review dataset.

C.

Use the Cloud Natural Language Processing API for redaction of the review dataset.

D.

Use the Cloud Natural Language Processing API for de-identification of the review dataset.

Question # 51

For this question, refer to the HipLocal case study.

A recent security audit discovers that HipLocal’s database credentials for their Compute Engine-hosted MySQL databases are stored in plain text on persistent disks. HipLocal needs to reduce the risk of these credentials being stolen. What should they do?

A.

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain the database credentials.

B.

Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain a key used to decrypt the database credentials.

C.

Create a service account and grant it the roles/iam.serviceAccountUser role. Impersonate as this account and authenticate using the Cloud SQL Proxy.

D.

Grant the roles/secretmanager.secretAccessor role to the Compute Engine service account. Store and access the database credentials with the Secret Manager API.
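As background on the Secret Manager workflow referenced in the options above, storing a credential and granting a service account read access can be sketched as follows. The secret name and service account are hypothetical examples:

```shell
# Sketch: store a database credential in Secret Manager and allow the
# Compute Engine service account to read it (names are hypothetical).
echo -n "db-password" | gcloud secrets create db-credentials --data-file=-

gcloud secrets add-iam-policy-binding db-credentials \
    --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"

# The application (or an operator) can then fetch the latest version.
gcloud secrets versions access latest --secret=db-credentials
```

This keeps credentials out of plain-text files on persistent disks and centralizes access auditing.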

Question # 52

HipLocal wants to improve the resilience of their MySQL deployment, while also meeting their business and technical requirements.

Which configuration should they choose?

A.

Use the current single instance MySQL on Compute Engine and several read-only MySQL servers on Compute Engine.

B.

Use the current single instance MySQL on Compute Engine, and replicate the data to Cloud SQL in an external master configuration.

C.

Replace the current single instance MySQL instance with Cloud SQL, and configure high availability.

D.

Replace the current single instance MySQL instance with Cloud SQL, and Google provides redundancy without further configuration.
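For context on the high-availability configuration mentioned in the options above, a regional (HA) Cloud SQL for MySQL instance can be sketched with the gcloud CLI. The instance name, tier, and region are illustrative assumptions:

```shell
# Sketch: Cloud SQL for MySQL with high availability enabled
# (instance name, tier, and region are hypothetical).
gcloud sql instances create hiplocal-mysql \
    --database-version=MYSQL_8_0 \
    --tier=db-n1-standard-2 \
    --region=us-central1 \
    --availability-type=REGIONAL
```

`--availability-type=REGIONAL` provisions a standby in another zone and fails over automatically.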

Question # 53

For this question, refer to the HipLocal case study.

How should HipLocal redesign their architecture to ensure that the application scales to support a large increase in users?

A.

Use Google Kubernetes Engine (GKE) to run the application as a microservice. Run the MySQL database on a dedicated GKE node.

B.

Use multiple Compute Engine instances to run MySQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

C.

Use Memorystore to store session information and Cloud SQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling.

D.

Use a Cloud Storage bucket to serve the application as a static website, and use another Cloud Storage bucket to store user state information.

Question # 54

HipLocal's APIs are showing occasional failures, but they cannot find a pattern. They want to collect some metrics to help them troubleshoot.

What should they do?

A.

Take frequent snapshots of all of the VMs.

B.

Install the Stackdriver Logging agent on the VMs.

C.

Install the Stackdriver Monitoring agent on the VMs.

D.

Use Stackdriver Trace to look for performance bottlenecks.

Question # 55

Which service should HipLocal use for their public APIs?

A.

Cloud Armor

B.

Cloud Functions

C.

Cloud Endpoints

D.

Shielded Virtual Machines

Question # 56

For this question, refer to the HipLocal case study.

HipLocal wants to reduce the latency of their services for users in global locations. They have created read replicas of their database in locations where their users reside and configured their service to read traffic using those replicas. How should they further reduce latency for all database interactions with the least amount of effort?

A.

Migrate the database to Bigtable and use it to serve all global user traffic.

B.

Migrate the database to Cloud Spanner and use it to serve all global user traffic.

C.

Migrate the database to Firestore in Datastore mode and use it to serve all global user traffic.

D.

Migrate the services to Google Kubernetes Engine and use a load balancer service to better scale the application.

Question # 57

For this question, refer to the HipLocal case study.

How should HipLocal increase their API development speed while continuing to provide the QA team with a stable testing environment that meets feature requirements?

A.

Include unit tests in their code, and prevent deployments to QA until all tests have a passing status.

B.

Include performance tests in their code, and prevent deployments to QA until all tests have a passing status.

C.

Create health checks for the QA environment, and redeploy the APIs at a later time if the environment is unhealthy.

D.

Redeploy the APIs to App Engine using Traffic Splitting. Do not move QA traffic to the new versions if errors are found.
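As background on the App Engine traffic splitting mentioned in the options above, deploying a new version without promoting it and then shifting a fraction of traffic can be sketched as follows. The service and version IDs are hypothetical examples:

```shell
# Sketch: deploy a new version without routing traffic to it, then
# split traffic gradually (version and service IDs are hypothetical).
gcloud app deploy --version=v2 --no-promote

gcloud app services set-traffic default --splits=v1=0.9,v2=0.1
```

If errors surface on the new version, traffic can be set back to 100% on the previous version with the same command.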

Question # 58

HipLocal's .NET-based auth service fails under intermittent load.

What should they do?

A.

Use App Engine for autoscaling.

B.

Use Cloud Functions for autoscaling.

C.

Use a Compute Engine cluster for the service.

D.

Use a dedicated Compute Engine virtual machine instance for the service.

Question # 59

HipLocal is configuring their access controls.

Which firewall configuration should they implement?

A.

Block all traffic on port 443.

B.

Allow all traffic into the network.

C.

Allow traffic on port 443 for a specific tag.

D.

Allow all traffic on port 443 into the network.
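For context on the tag-based firewall option above, allowing port 443 only to instances carrying a specific network tag can be sketched with the gcloud CLI. The rule name, network, and tag are illustrative assumptions:

```shell
# Sketch: ingress rule allowing HTTPS only to tagged instances
# (rule name, network, and tag are hypothetical).
gcloud compute firewall-rules create allow-https-tagged \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --target-tags=https-server
```

Instances without the `https-server` tag remain unaffected by the rule, keeping access scoped to the intended workloads.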
