
1z0-1127-25 Questions and Answers

Question # 6

What is the purpose of embeddings in natural language processing?

A. To increase the complexity and size of text data
B. To translate text into a different language
C. To create numerical representations of text that capture the meaning and relationships between words or phrases
D. To compress text data into smaller files for storage
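As background for this question, the idea of embeddings as numerical representations can be sketched with cosine similarity over toy vectors. The 4-dimensional vectors below are made up for illustration; real embedding models produce hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings (values invented for this sketch).
emb_cat = [0.8, 0.1, 0.3, 0.0]
emb_kitten = [0.7, 0.2, 0.4, 0.1]
emb_car = [0.0, 0.9, 0.1, 0.8]

# Semantically related words end up closer together in the vector space.
print(cosine_similarity(emb_cat, emb_kitten) > cosine_similarity(emb_cat, emb_car))  # True
```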
Question # 7

How does a presence penalty function in language model generation?

A. It penalizes all tokens equally, regardless of how often they have appeared.
B. It penalizes only tokens that have never appeared in the text before.
C. It applies a penalty only if the token has appeared more than twice.
D. It penalizes a token each time it appears after the first occurrence.
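One common formulation of a presence penalty can be sketched as a flat reduction applied to the logit of any token that has already been generated; a frequency penalty, by contrast, grows with the repeat count. The function name and values below are illustrative, not an actual service API.

```python
def apply_presence_penalty(logits, generated_tokens, penalty=0.5):
    """Subtract a flat penalty from every token already present in the output.

    The amount does not grow with repeat count: a token seen once and a
    token seen five times are reduced by the same fixed penalty.
    """
    seen = set(generated_tokens)
    return {tok: (score - penalty if tok in seen else score)
            for tok, score in logits.items()}

logits = {"the": 2.0, "cat": 1.5, "dog": 1.2}
adjusted = apply_presence_penalty(logits, ["the", "cat", "the"])
print(adjusted)  # {'the': 1.5, 'cat': 1.0, 'dog': 1.2}
```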
Question # 8

What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?

A. Support for tokenizing longer sentences
B. Improved retrievals for Retrieval Augmented Generation (RAG) systems
C. Emphasis on syntactic clustering of word embeddings
D. Capacity to translate text in over 100 languages
Question # 9

What issue might arise from using small datasets with the Vanilla fine-tuning method in the OCI Generative AI service?

A. Overfitting
B. Underfitting
C. Data Leakage
D. Model Drift
Question # 10

What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?

A. The model's ability to generate imaginative and creative content
B. A technique used to enhance the model's performance on specific tasks
C. The process by which the model visualizes and describes images in detail
D. The phenomenon where the model generates factually incorrect information or unrelated content as if it were true
Question # 11

Which is NOT a typical use case for LangSmith Evaluators?

A. Measuring coherence of generated text
B. Aligning code readability
C. Evaluating factual accuracy of outputs
D. Detecting bias or toxicity
Question # 12

Given the following code:

PromptTemplate(input_variables=["human_input", "city"], template=template)

Which statement is true about PromptTemplate in relation to input_variables?

A. PromptTemplate requires a minimum of two variables to function properly.
B. PromptTemplate can support only a single variable at a time.
C. PromptTemplate supports any number of variables, including the possibility of having none.
D. PromptTemplate is unable to use any variables.
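The flexibility being tested here can be shown with a minimal stand-in for a prompt template built on Python's `str.format`. This is an illustrative sketch of the concept, not LangChain's actual implementation.

```python
class MiniPromptTemplate:
    """Toy prompt template: accepts any number of input variables, including none."""

    def __init__(self, input_variables, template):
        self.input_variables = input_variables
        self.template = template

    def format(self, **kwargs):
        # Substitute only the declared variables into the template string.
        return self.template.format(**{v: kwargs[v] for v in self.input_variables})

# Two variables, as in the question's snippet.
two = MiniPromptTemplate(["human_input", "city"],
                         "User asked: {human_input} about {city}")
print(two.format(human_input="weather", city="Austin"))

# Zero variables is also valid: the template is just a fixed string.
none = MiniPromptTemplate([], "Tell me a joke.")
print(none.format())
```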
Question # 13

What does in-context learning in Large Language Models involve?

A. Pretraining the model on a specific domain
B. Training the model using reinforcement learning
C. Conditioning the model with task-specific instructions or demonstrations
D. Adding more layers to the model
Question # 14

What is the primary function of the "temperature" parameter in the OCI Generative AI Generation models?

A. Controls the randomness of the model's output, affecting its creativity
B. Specifies a string that tells the model to stop generating more content
C. Assigns a penalty to tokens that have already appeared in the preceding text
D. Determines the maximum number of tokens the model can generate per response
Question # 15

What differentiates semantic search from traditional keyword search?

A. It relies solely on matching exact keywords in the content.
B. It depends on the number of times keywords appear in the content.
C. It involves understanding the intent and context of the search.
D. It is based on the date and author of the content.
Question # 16

How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?

A. Increasing temperature removes the impact of the most likely word.
B. Decreasing temperature broadens the distribution, making less likely words more probable.
C. Increasing temperature flattens the distribution, allowing for more varied word choices.
D. Temperature has no effect on the probability distribution; it only changes the speed of decoding.
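The effect asked about in the last two questions can be sketched numerically: dividing logits by a temperature before the softmax flattens the distribution when temperature > 1 and sharpens it when temperature < 1. The logit values here are arbitrary illustrations.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities, scaled by a temperature."""
    scaled = [x / temperature for x in logits]
    exps = [math.exp(x) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [3.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.5)   # sharper: the top token dominates
high = softmax_with_temperature(logits, 2.0)  # flatter: more varied word choices

print(max(low) > max(high))  # True: raising temperature flattens the distribution
```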
Question # 17

Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?

A. Fine-tuning requires training the entire model on new data, often leading to substantial computational costs, whereas PEFT involves updating only a small subset of parameters, minimizing computational requirements and data needs.
B. PEFT requires replacing the entire model architecture with a new one designed specifically for the new task, making it significantly more data-intensive than Fine-tuning.
C. Both Fine-tuning and PEFT require the model to be trained from scratch on new data, making them equally data and computationally intensive.
D. Fine-tuning and PEFT do not involve model modification; they differ only in the type of data used for training, with Fine-tuning requiring labeled data and PEFT using unlabeled data.
Question # 18

What is prompt engineering in the context of Large Language Models (LLMs)?

A. Iteratively refining the ask to elicit a desired response
B. Adding more layers to the neural network
C. Adjusting the hyperparameters of the model
D. Training the model on a large dataset
Question # 19

What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation models?

A. It specifies a string that tells the model to stop generating more content.
B. It assigns a penalty to frequently occurring tokens to reduce repetitive text.
C. It determines the maximum number of tokens the model can generate per response.
D. It controls the randomness of the model’s output, affecting its creativity.
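The behavior of a stop sequence can be sketched as simple truncation of the generated text at the first occurrence of the stop string. This is an illustration of the concept, not the service's internals.

```python
def apply_stop_sequence(generated, stop):
    """Cut generation at the first occurrence of the stop string."""
    idx = generated.find(stop)
    return generated if idx == -1 else generated[:idx]

text = "Step 1: mix flour.\nStep 2: add water.\n###\nunwanted extra output"
# Everything from the stop string onward is discarded.
print(apply_stop_sequence(text, "###"))
```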
Question # 20

Which is a key characteristic of the annotation process used in T-Few fine-tuning?

A. T-Few fine-tuning uses annotated data to adjust a fraction of model weights.
B. T-Few fine-tuning requires manual annotation of input-output pairs.
C. T-Few fine-tuning involves updating the weights of all layers in the model.
D. T-Few fine-tuning relies on unsupervised learning techniques for annotation.
Question # 21

Which is the main characteristic of greedy decoding in the context of language model word prediction?

A. It chooses words randomly from the set of less probable candidates.
B. It requires a large temperature setting to ensure diverse word selection.
C. It selects words based on a flattened distribution over the vocabulary.
D. It picks the most likely word at each step of decoding.
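Greedy decoding can be sketched as taking the argmax over the next-token probabilities at each step. The toy distribution below is invented for illustration.

```python
def greedy_pick(probabilities):
    """Greedy decoding step: always choose the single most likely token."""
    return max(probabilities, key=probabilities.get)

# Hypothetical next-token distribution at one decoding step.
next_token_probs = {"cat": 0.6, "dog": 0.3, "fish": 0.1}
print(greedy_pick(next_token_probs))  # cat
```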
Question # 22

Given the following code:

chain = prompt | llm

Which statement is true about LangChain Expression Language (LCEL)?

A. LCEL is a programming language used to write documentation for LangChain.
B. LCEL is a legacy method for creating chains in LangChain.
C. LCEL is a declarative and preferred way to compose chains together.
D. LCEL is an older Python library for building Large Language Models.
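The `prompt | llm` syntax works because LCEL components overload Python's `|` operator to compose runnables. Below is a minimal re-creation of that idea with toy classes, not LangChain's real implementation.

```python
class Runnable:
    """Toy runnable supporting `|` composition, in the spirit of LCEL."""

    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # `a | b` yields a new runnable that pipes a's output into b.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

prompt = Runnable(lambda topic: f"Tell me a joke about {topic}.")
llm = Runnable(lambda text: f"[model reply to: {text}]")

chain = prompt | llm
print(chain.invoke("ducks"))  # [model reply to: Tell me a joke about ducks.]
```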
Question # 23

How are fine-tuned customer models stored to enable strong data privacy and security in the OCI Generative AI service?

A. Shared among multiple customers for efficiency
B. Stored in Object Storage encrypted by default
C. Stored in an unencrypted form in Object Storage
D. Stored in Key Management service
Question # 24

What does a higher number assigned to a token signify in the "Show Likelihoods" feature of the language model token generation?

A. The token is less likely to follow the current token.
B. The token is more likely to follow the current token.
C. The token is unrelated to the current token and will not be used.
D. The token will be the only one considered in the next generation step.
Question # 25

Which statement accurately reflects the differences between Fine-tuning, Parameter-Efficient Fine-Tuning (PEFT), Soft Prompting, and continuous pretraining in terms of the number of parameters modified and the type of data used?

A. Fine-tuning and continuous pretraining both modify all parameters and use labeled, task-specific data.
B. Parameter Efficient Fine-Tuning and Soft Prompting modify all parameters of the model using unlabeled data.
C. Fine-tuning modifies all parameters using labeled, task-specific data, whereas Parameter Efficient Fine-Tuning updates a few, new parameters also with labeled, task-specific data.
D. Soft Prompting and continuous pretraining are both methods that require no modification to the original parameters of the model.
Question # 26

What does the Loss metric indicate about a model's predictions?

A. Loss measures the total number of predictions made by a model.
B. Loss is a measure that indicates how wrong the model's predictions are.
C. Loss indicates how good a prediction is, and it should increase as the model improves.
D. Loss describes the accuracy of the right predictions rather than the incorrect ones.
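The intuition behind a loss metric can be shown with cross-entropy on a single prediction: the more probability the model assigns to the correct token, the lower the loss. The probability values are illustrative.

```python
import math

def cross_entropy_loss(predicted_prob_of_correct):
    """Cross-entropy for one prediction: small when the model is confident and right."""
    return -math.log(predicted_prob_of_correct)

confident = cross_entropy_loss(0.9)   # model mostly right -> small loss
uncertain = cross_entropy_loss(0.2)   # model mostly wrong -> large loss

print(confident < uncertain)  # True: loss quantifies how wrong predictions are
```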