AB-731

Practice AB-731 Exam

Is it difficult for you to decide whether to purchase Microsoft AB-731 exam dumps questions? CertQueen provides FREE online AI Transformation Leader (beta) AB-731 exam questions below, so you can test your AB-731 skills first and then decide whether to buy the full version. We promise you will get the following advantages after purchasing our AB-731 exam dumps questions.
1. Free updates for ONE year from the date of your purchase.
2. A full refund of the payment fee if you fail the AB-731 exam with the dumps.

 

 Full AB-731 Exam Dump Here

Latest AB-731 Exam Dumps Questions

The dumps for the AB-731 exam were last updated on Apr 11, 2026.


Question#1

Your company manages a website that publishes daily news articles. You need to recommend an AI solution that can analyze text and identify the main people, locations, and companies mentioned in the articles.
What should you include in the recommendation?

A. Azure Language in Foundry Tools
B. Content Safety in Foundry Control Plane
C. Azure Vision in Foundry Tools
D. Azure Speech in Foundry Tools

Explanation:
The requirement is to analyze text and identify “people, locations, and companies” mentioned in news articles. This is a classic Named Entity Recognition (NER) / entity extraction scenario, which falls under natural language processing. Azure Language in Foundry Tools is the correct choice because it provides text analytics capabilities that detect and categorize entities in unstructured text, commonly including Person, Location, and Organization. This enables downstream experiences such as topic tagging, search filters (e.g., “all articles mentioning Company X”), trend dashboards (top people/places mentioned this week), and improved content discovery.
The other options do not match the requirement. Content Safety focuses on moderating harmful or policy-violating content (for example, hate, violence, self-harm, sexual content) and is not the primary tool for extracting named entities. Azure Vision is for analyzing images and performing OCR; it would only be relevant if the articles were images or scans, but the task here is entity extraction from text articles. Azure Speech is for speech-to-text, text-to-speech, and audio analysis; it would be used if your input were audio recordings rather than written articles.
Therefore, to identify key entities (people, locations, companies) from daily news article text, the best recommendation is Azure Language in Foundry Tools.
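To illustrate the downstream experiences mentioned above (such as a "top people/places mentioned this week" dashboard), here is a minimal sketch of aggregating already-extracted entities. The helper name and the sample articles are illustrative; the (text, category) tuples stand in for what an entity-recognition service such as Azure Language might return.

```python
from collections import Counter

def top_entities(articles_entities, category, n=3):
    """Count the most frequently mentioned entities of one category
    (e.g. Person, Location, Organization) across many articles.

    articles_entities: one entity list per article, where each entity
    is a (text, category) tuple as returned by an NER service.
    """
    counts = Counter(
        text
        for entities in articles_entities
        for text, cat in entities
        if cat == category
    )
    return counts.most_common(n)

# Hypothetical extraction results for three articles
extracted = [
    [("Contoso", "Organization"), ("Seattle", "Location")],
    [("Contoso", "Organization"), ("Jane Doe", "Person")],
    [("Fabrikam", "Organization"), ("Seattle", "Location")],
]

print(top_entities(extracted, "Organization"))
# [('Contoso', 2), ('Fabrikam', 1)]
```

In a real pipeline, the extraction step itself would be a call to the Azure Language entity-recognition API; only the aggregation is shown here.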

Question#2

Your company creates a custom Azure Machine Learning model that uses a generative AI assistant. The model initially delivers strong results. However, six months later, the model predictions become noticeably less accurate.
What is a possible cause of the issue?

A. The input data changed over time.
B. The model requires additional compute resources.
C. The model was trained incorrectly.

Explanation:
A common reason models degrade after being successful in production is data drift (a closely related issue is concept drift, where the relationship between inputs and outputs changes). Over time, the distribution of input data changes: for example, customer behavior shifts, the product catalog changes, seasonality changes, new categories appear, sensors get recalibrated, or business processes evolve. When the model sees data that differs from what it was trained on, its predictions can become less accurate. This is exactly what option A describes and is the most likely “six months later” cause.
Option B is not a primary explanation for reduced predictive accuracy. More compute can improve throughput/latency, but it does not inherently improve correctness of predictions. If anything, compute constraints typically cause timeouts or slower responses, not a systematic accuracy drop.
Option C (trained incorrectly) would usually manifest earlier, as poor performance from the start, unless the “incorrectness” is that the model was trained on a snapshot that later became stale (which again maps back to drift). The correct operational response is to monitor for drift, validate performance regularly, and retrain/refresh the model using newer representative data and updated features/labels.
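The "monitor for drift" step above can be sketched with a very simple statistical check: compare a feature's recent production values against its training-time baseline and alert when the mean has shifted too far. This is a minimal illustration only; real drift monitoring typically uses richer tests (e.g., population stability index or a Kolmogorov-Smirnov test), and the numbers below are made up.

```python
import statistics

def mean_shift_alert(baseline, recent, threshold=2.0):
    """Flag possible data drift: alert when the recent mean of a
    feature deviates from the training-time mean by more than
    `threshold` baseline standard deviations.
    """
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.fmean(recent) - mu) / sigma
    return shift > threshold

# Feature values the model was trained on (hypothetical)
train = [10.0, 11.0, 9.5, 10.5, 10.0, 9.0, 11.5, 10.2]
# Production data six months later: the distribution has moved
live = [14.0, 15.5, 14.2, 15.0, 13.8]

print(mean_shift_alert(train, live))  # True: time to retrain
```

A check like this, run on a schedule against fresh production samples, is one concrete way to catch the "six months later" degradation before users notice it.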

Question#3

HOTSPOT
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.



Explanation:
Answer Area
Microsoft 365 Copilot enables users to search across emails, files, chats, meetings, and connected apps by using natural language queries.
Answer: Yes
Microsoft 365 Copilot provides data protection and helps ensure that user prompts and chat data are secure.
Answer: Yes
Microsoft 365 Copilot is included in all Microsoft 365 subscriptions.
Answer: No
Yes: Microsoft 365 Copilot is grounded in Microsoft 365 content through Microsoft Graph and is designed to help users find and summarize information across their work context (emails, files, chats, meetings). With extensions/connectors, Copilot can also incorporate information from connected apps, allowing users to query using natural language and get summarized, actionable outputs.
Yes: A key enterprise benefit of Microsoft 365 Copilot is that it operates within the Microsoft 365 security, identity, and compliance boundary. It respects existing permissions (users only get content they are authorized to access) and supports organizational controls intended to protect prompts and generated content. This is part of the business value compared to consumer tools: IT can apply policies, auditing, and compliance controls to reduce data risk.
No: Microsoft 365 Copilot is not automatically included in every Microsoft 365 subscription. In most business scenarios it is licensed as an add-on (or separate Copilot entitlement) for eligible Microsoft 365 plans, meaning it requires explicit licensing rather than being universally bundled with all subscriptions.

Question#4

HOTSPOT
Select the answer that correctly completes the sentence.
The cost of using generative AI language models is based typically on the number of __________ processed.



Explanation:
Most generative AI language model pricing is based on token consumption, which measures the amount of text processed by the model. Tokens are sub-word units used internally by language models (for example, parts of words, whole words, or punctuation). When you send a prompt, the model consumes input tokens (your prompt + any system instructions + retrieved grounding context). When it generates a response, it consumes output tokens (the generated completion). Costs typically scale with the total input + output tokens processed, which is why long prompts, large grounding passages, and lengthy responses increase spend. This also explains why prompt optimization, response length limits, caching, and careful grounding are common cost-control techniques in enterprise solutions.
By contrast, “documents” is too coarse (a document can be 1 page or 500 pages). “Requests” is not the primary unit for most LLM pricing models because request sizes vary dramatically. “Words” is not used because the model’s actual compute unit is tokens, and tokenization differs across languages and text patterns. Therefore, the most accurate completion is tokens.
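The "costs scale with total input + output tokens" point above can be made concrete with a small cost estimator. The per-1,000-token prices below are placeholders for illustration, not real rates from any provider.

```python
def estimate_cost(input_tokens, output_tokens,
                  price_in_per_1k, price_out_per_1k):
    """Estimate the cost of one generative AI call under a typical
    per-token pricing model: input and output tokens are billed at
    (often different) rates per 1,000 tokens.
    """
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# A long prompt with grounding context, plus a medium-length answer
# (hypothetical token counts and prices)
cost = estimate_cost(input_tokens=3000, output_tokens=500,
                     price_in_per_1k=0.01, price_out_per_1k=0.03)
print(f"${cost:.4f}")  # $0.0450
```

Because the input side usually dominates when large grounding passages are attached, trimming prompts and capping response length are the levers that move this number the most.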

Question#5

HOTSPOT
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.



Explanation:
Answer Area
You can use Azure Language in Foundry Tools to analyze the sentiment of customer reviews.
Answer: Yes
You can use Azure Language in Foundry Tools to translate internal reports into multiple languages.
Answer: No
You can use Azure Language in Foundry Tools to extract text from scanned documents.
Answer: No
Azure Language is designed for natural language processing (NLP) over text that is already machine-readable. That includes capabilities like sentiment analysis, key phrase extraction, entity recognition, summarization, and classification. Therefore, statement 1 is Yes: sentiment analysis of customer reviews is a standard NLP workload where the service scores text as positive/negative/neutral (and often provides confidence scores), helping organizations quantify customer satisfaction and detect recurring issues.
Statement 2 is No because translation is typically handled by a dedicated translation capability (commonly delivered as a separate translator service) rather than the core “Language” NLP features. While translation is an AI language workload, it’s not what the Azure Language service is primarily used for in this context; the expected Microsoft service choice for multi-language translation is the translator capability, not Azure Language.
Statement 3 is No because extracting text from scanned documents is OCR (optical character recognition), which is a computer vision/document processing function. OCR is delivered through Azure Vision and/or Azure Document Intelligence, which can read printed/handwritten text from images and PDFs and return structured output. Azure Language can analyze extracted text after OCR, but it does not perform the image-to-text extraction step itself.

Exam Code: AB-731    Q&A: 77 Q&As    Updated: Apr 11, 2026
