Latest DP-900 Free Dumps - Microsoft Azure Data Fundamentals
When provisioning an Azure Cosmos DB account, which feature provides redundancy within an Azure region?
Answer: A
Which activity is most common for transactional workloads?
Answer: D
Read the following statements:
(1) Azure Analysis Services is used for transactional workloads.
(2) Azure Databricks is a collaborative analytics platform based on Apache Spark.
(3) Azure Data Factory orchestrates data ingestion workflows.
(4) Azure Synapse is an analytics service used to bring Big Data analytics and Enterprise Data Warehousing together.
Which of the following statements are true? Choose the correct option.
Answer: C
Complete the Sentence:
The MPP (Massively Parallel Processing) Engine of Azure Synapse Analytics
Answer: A
Which Azure Cosmos DB API should you use for a graph database?
Answer: C
SELECT, INSERT, and UPDATE are examples of which type of SQL statement?
Answer: B
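SELECT, INSERT, and UPDATE are Data Manipulation Language (DML) statements. As a minimal illustration (the table and values are invented, using Python's built-in `sqlite3` module rather than Azure SQL), each of the three DML statements appears below; the CREATE TABLE line is DDL, included only for setup:

```python
import sqlite3

# Illustrative only: an in-memory database to show the three DML statements.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")  # DDL (setup)

conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100.0)")          # DML: INSERT
conn.execute("UPDATE accounts SET balance = 150.0 WHERE id = 1")              # DML: UPDATE
row = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()    # DML: SELECT
print(row[0])
```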
You need to ensure that users use multi-factor authentication (MFA) when connecting to an Azure SQL database.
Which type of authentication should you use?
Answer: C
What are three characteristics of an Online Transaction Processing (OLTP) workload? Each correct answer presents a complete solution. (Choose three.) NOTE: Each correct selection is worth one point.
Answer: B, C, D
A bank has a system that manages financial transactions.
When transferring money between accounts, the system must never retrieve a value for the source account that reflects the balance before the transfer and a value for the destination account that reflects the balance after the transfer.
Of which ACID semantic is this an example?
Answer: B
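The scenario can be sketched with a database transaction: both account updates are wrapped in one BEGIN/COMMIT pair, so they become visible together and no reader observes a partially applied transfer. This is an illustrative sketch (invented account names and amounts, using Python's built-in `sqlite3` rather than a bank system):

```python
import sqlite3

# Minimal sketch: a money transfer inside a single transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; we issue BEGIN/COMMIT explicitly
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("source", 100.0), ("destination", 0.0)])

conn.execute("BEGIN")  # start the transaction
conn.execute("UPDATE accounts SET balance = balance - 40 WHERE name = 'source'")
conn.execute("UPDATE accounts SET balance = balance + 40 WHERE name = 'destination'")
conn.execute("COMMIT")  # both updates become visible at the same moment

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)
```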
Hotspot Question
Select the answer that correctly completes the sentence.

Answer:

Explanation:
Semi-structured data (e.g., JSON, CSV, XML) is the bridge between structured and unstructured data. It does not have a predefined data model and is more complex than structured data, yet easier to store than unstructured data.
Reference:
https://www.ibm.com/cloud/blog/structured-vs-unstructured-data
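The point about having no predefined data model can be illustrated with JSON, one of the semi-structured formats named above. The two records below (invented for illustration) carry their own structure and need not share the same fields, unlike rows in a relational table:

```python
import json

# Illustrative: two records in JSON, a semi-structured format.
records = [
    '{"id": 1, "name": "Avery", "tags": ["premium"]}',
    '{"id": 2, "name": "Blake", "address": {"city": "Seattle"}}',
]
parsed = [json.loads(r) for r in records]

# Each record defines its own fields; there is no fixed schema.
fields = [sorted(p.keys()) for p in parsed]
print(fields)
```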
Hotspot Question
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
Box 1: Yes
Stream processing has access to the most recent data received or data within a rolling time window. Stream processing operates on data in near real-time, allowing for analysis and processing of data as it is received or within a defined time window.
Box 2: No
Batch processing is not required to occur immediately and can have higher latency. Batch processing typically operates on larger volumes of data and is often performed at regular intervals or in scheduled batches, which can have latency in the order of minutes, hours, or even days.
Box 3: Yes
Stream processing is commonly used for simple response functions, aggregates, or calculations such as rolling averages. It enables real-time analysis, with quick calculations and aggregations performed on streaming data as it arrives.
Reference:
https://docs.microsoft.com/en-us/learn/modules/explore-fundamentals-stream-processing/2-batch-stream
https://www.precisely.com/blog/big-data/big-data-101-batch-stream-processing
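The rolling average mentioned in Box 3 can be sketched in a few lines: each incoming reading updates an aggregate over the most recent N values, the way a stream processor maintains a rolling window. The window size and readings below are invented for illustration:

```python
from collections import deque

def rolling_averages(readings, window=3):
    """Emit the average of (at most) the last `window` readings as each one arrives."""
    buf = deque(maxlen=window)  # keeps only the most recent `window` values
    out = []
    for value in readings:
        buf.append(value)
        out.append(sum(buf) / len(buf))
    return out

print(rolling_averages([10, 20, 30, 40]))  # each output reflects at most the last 3 readings
```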
Hotspot Question
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
Box 1: Yes
Azure Databricks can consume data from Azure SQL Database using JDBC or the Apache Spark connector.
The Apache Spark connector for Azure SQL Database and SQL Server enables these databases to act as input data sources and output data sinks for Apache Spark jobs.
Box 2: Yes
You can stream data into Azure Databricks using Event Hubs.
Box 3: Yes
You can run Spark jobs with data stored in Azure Cosmos DB using the Cosmos DB Spark connector. Cosmos can be used for batch and stream processing, and as a serving layer for low latency access.
You can use the connector with Azure Databricks or Azure HDInsight, which provide managed Spark clusters on Azure.
Reference:
https://docs.microsoft.com/en-us/azure/databricks/data/data-sources/sql-databases-azure
https://docs.microsoft.com/en-us/azure/databricks/scenarios/databricks-stream-from-eventhubs
Hotspot Question
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
Box 1: No
The API determines the type of account to create. Azure Cosmos DB provides five APIs: Core (SQL) and MongoDB for document data, Gremlin for graph data, Azure Table, and Cassandra.
Currently, you must create a separate account for each API.
Box 2: Yes
Azure Cosmos DB uses partitioning to scale individual containers in a database to meet the performance needs of your application. In partitioning, the items in a container are divided into distinct subsets called logical partitions. Logical partitions are formed based on the value of a partition key that is associated with each item in a container.
Box 3: No
Logical partitions are formed based on the value of a partition key that is associated with each item in a container.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/partitioning-overview
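How a partition key groups items into logical partitions can be sketched in plain Python. The items and the key name ("city") below are invented for illustration; Cosmos DB itself does this grouping (plus hashing onto physical partitions) internally:

```python
from collections import defaultdict

# Illustrative only: items sharing a partition key value form one logical partition.
items = [
    {"id": "1", "city": "Seattle"},
    {"id": "2", "city": "Paris"},
    {"id": "3", "city": "Seattle"},
]

logical_partitions = defaultdict(list)
for item in items:
    logical_partitions[item["city"]].append(item["id"])  # partition key value -> item ids

print(dict(logical_partitions))
```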