Certified 70-475 Exam Questions 2019

Act now and download your 70-475 exam today! Do not waste time on worthless 70-475 exam tutorials. Download Microsoft 70-475 with real questions and answers and begin to prepare for the 70-475 exam like a true professional.

Online Microsoft 70-475 free dumps demo below:

NEW QUESTION 1
You have a Microsoft Azure SQL database that contains Personally Identifiable Information (PII).
To mitigate the PII risk, you need to ensure that data is encrypted while the data is at rest. The solution must minimize any changes to front-end applications.
What should you use?

  • A. Transport Layer Security (TLS)
  • B. transparent data encryption (TDE)
  • C. a shared access signature (SAS)
  • D. the ENCRYPTBYPASSPHRASE T-SQL function

Answer: B

Explanation: Transparent data encryption (TDE) helps protect Azure SQL Database, Azure SQL Managed Instance, and Azure SQL Data Warehouse against the threat of malicious activity. It performs real-time encryption and decryption of the database, associated backups, and transaction log files at rest without requiring changes to the application.
References: https://docs.microsoft.com/en-us/azure/sql-database/transparent-data-encryption-azure-sql
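
For reference, TDE can also be checked and enabled from PowerShell with the AzureRM.Sql module. A minimal sketch, assuming hypothetical names (resource group rg-sales, server sql-sales, database SalesDb):

# Check the current TDE state of the database
Get-AzureRmSqlDatabaseTransparentDataEncryption -ResourceGroupName "rg-sales" `
    -ServerName "sql-sales" -DatabaseName "SalesDb"

# Enable TDE; data, backups, and transaction log files are encrypted at rest
# without any change to the front-end applications
Set-AzureRmSqlDatabaseTransparentDataEncryption -ResourceGroupName "rg-sales" `
    -ServerName "sql-sales" -DatabaseName "SalesDb" -State "Enabled"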

NEW QUESTION 2
You have data in an on-premises Microsoft SQL Server database.
You must ingest the data into Microsoft Azure Blob storage from the on-premises SQL Server database by using Azure Data Factory.
You need to identify which tasks must be performed from Azure.
In which sequence should you perform the actions? To answer, move all of the actions from the list of actions to the answer area and arrange them in the correct order.
NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
70-475 dumps exhibit

    Answer:

Explanation: Step 1: Configure a Microsoft Data Management Gateway.
Install and configure the Azure Data Factory Integration Runtime.
The Integration Runtime is a customer-managed data integration infrastructure used by Azure Data Factory to provide data integration capabilities across different network environments. This runtime was formerly called the "Data Management Gateway".
    Step 2: Create a linked service for Azure Blob storage
    Create an Azure Storage linked service (destination/sink). You link your Azure storage account to the data factory.
    Step 3: Create a linked service for SQL Server
    Create and encrypt a SQL Server linked service (source)
In this step, you link your on-premises SQL Server instance to the data factory.
Step 4: Create an input dataset and an output dataset.
    Create a dataset for the source SQL Server database. In this step, you create input and output datasets. They represent input and output data for the copy operation, which copies data from the on-premises SQL Server database to Azure Blob storage.
Step 5: Create a pipeline.
    You create a pipeline with a copy activity. The copy activity uses SqlServerDataset as the input dataset and AzureBlobDataset as the output dataset. The source type is set to SqlSource and the sink type is set to BlobSink.
    References: https://docs.microsoft.com/en-us/azure/data-factory/tutorial-hybrid-copy-powershell
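
The same flow can be scripted end to end. A minimal sketch using the AzureRM Data Factory V2 cmdlets, assuming hypothetical names (rg-adf, MyDataFactory) and JSON definition files that were authored separately, as in the referenced tutorial:

$rg = "rg-adf"          # hypothetical resource group
$df = "MyDataFactory"   # hypothetical data factory

# Step 1: register a self-hosted integration runtime (formerly the Data Management Gateway)
Set-AzureRmDataFactoryV2IntegrationRuntime -ResourceGroupName $rg -DataFactoryName $df `
    -Name "SelfHostedIR" -Type SelfHosted

# Steps 2-3: create the sink (Azure Blob storage) and source (SQL Server) linked services
Set-AzureRmDataFactoryV2LinkedService -ResourceGroupName $rg -DataFactoryName $df `
    -Name "AzureStorageLinkedService" -DefinitionFile ".\AzureStorageLinkedService.json"
Set-AzureRmDataFactoryV2LinkedService -ResourceGroupName $rg -DataFactoryName $df `
    -Name "SqlServerLinkedService" -DefinitionFile ".\SqlServerLinkedService.json"

# Step 4: create the input and output datasets
Set-AzureRmDataFactoryV2Dataset -ResourceGroupName $rg -DataFactoryName $df `
    -Name "SqlServerDataset" -DefinitionFile ".\SqlServerDataset.json"
Set-AzureRmDataFactoryV2Dataset -ResourceGroupName $rg -DataFactoryName $df `
    -Name "AzureBlobDataset" -DefinitionFile ".\AzureBlobDataset.json"

# Step 5: create the copy pipeline and trigger a run
Set-AzureRmDataFactoryV2Pipeline -ResourceGroupName $rg -DataFactoryName $df `
    -Name "SqlServerToBlobPipeline" -DefinitionFile ".\SqlServerToBlobPipeline.json"
Invoke-AzureRmDataFactoryV2Pipeline -ResourceGroupName $rg -DataFactoryName $df `
    -PipelineName "SqlServerToBlobPipeline"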

    NEW QUESTION 3
    You have a Microsoft Azure Stream Analytics job that contains several pipelines.
    The Stream Analytics job is configured to trigger an alert when the sale of products in specific categories exceeds a specified threshold.
    You plan to change the product-to-category mappings next month to meet future business requirements.
    You need to create the new product-to-category mappings to prepare for the planned change. The solution must ensure that the Stream Analytics job only uses the new product-to-category mappings when the
    mappings are ready to be activated.
    Which naming structure should you use for the file that contains the product-to-category mappings?

    • A. Use any date after the day the file becomes active.
    • B. Use any date before the day the categories become active.
    • C. Use the date and hour that the categories are to become active.
    • D. Use the current date and time.

    Answer: C
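
The behavior at play here is the reference data path pattern of a Stream Analytics input: when the blob path contains {date} and {time} tokens, the job picks up the blob whose encoded date and hour is the most recent one that is not in the future, so naming the new mapping file with the date and hour it should become active makes the switch happen on schedule. A minimal sketch with the AzureRM.StreamAnalytics module, assuming hypothetical names (rg-retail, product-sales-job, storage account retaildata):

# Reference data input whose path pattern embeds {date} and {time} tokens.
# A file uploaded to e.g. product-categories/2019-06-01/08/mapping.csv becomes active at 08:00 on 2019-06-01.
$referenceInput = @'
{
  "name": "ProductCategoryMap",
  "properties": {
    "type": "Reference",
    "datasource": {
      "type": "Microsoft.Storage/Blob",
      "properties": {
        "storageAccounts": [ { "accountName": "retaildata", "accountKey": "<storage-key>" } ],
        "container": "reference",
        "pathPattern": "product-categories/{date}/{time}/mapping.csv",
        "dateFormat": "yyyy-MM-dd",
        "timeFormat": "HH"
      }
    },
    "serialization": { "type": "Csv", "properties": { "fieldDelimiter": ",", "encoding": "UTF8" } }
  }
}
'@
Set-Content -Path ".\ProductCategoryMap.json" -Value $referenceInput

New-AzureRmStreamAnalyticsInput -ResourceGroupName "rg-retail" -JobName "product-sales-job" `
    -Name "ProductCategoryMap" -File ".\ProductCategoryMap.json" -Force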

    NEW QUESTION 4
    Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company has multiple databases that contain millions of sales transactions. You plan to implement a data mining solution to identify purchasing fraud.
    You need to design a solution that mines 10 terabytes (TB) of sales data. The solution must meet the following requirements:
    • Run the analysis to identify fraud once per week.
    • Continue to receive new sales transactions while the analysis runs.
    • Be able to stop computing services when the analysis is NOT running.
    Solution: You create a Cloudera Hadoop cluster on Microsoft Azure virtual machines. Does this meet the goal?

    • A. Yes
    • B. No

    Answer: A

Explanation: Processing large amounts of unstructured data requires serious computing power and also maintenance effort. Because the load on computing power typically fluctuates due to time and seasonal influences, and because some processes run only at certain times, a cloud solution such as Microsoft Azure is a good option: it can scale up easily and you pay only for what is actually used.
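
As an illustration of the "stop computing services when the analysis is NOT running" requirement on IaaS, cluster virtual machines can be deallocated between the weekly runs so that compute charges stop while new sales transactions continue to accumulate in the source databases. A minimal sketch with the AzureRM compute cmdlets, assuming a hypothetical resource group rg-cloudera that holds the Cloudera cluster VMs:

$rg = "rg-cloudera"   # hypothetical resource group containing the cluster VMs

# Deallocate every cluster VM after the weekly fraud analysis completes
Get-AzureRmVM -ResourceGroupName $rg | ForEach-Object {
    Stop-AzureRmVM -ResourceGroupName $rg -Name $_.Name -Force
}

# Start the VMs again shortly before the next weekly run
Get-AzureRmVM -ResourceGroupName $rg | ForEach-Object {
    Start-AzureRmVM -ResourceGroupName $rg -Name $_.Name
}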

    NEW QUESTION 5
    You plan to use Microsoft Azure IoT Hub to capture data from medical devices that contain sensors. You need to ensure that each device has its own credentials. The solution must minimize the number of
    required privileges.
    Which policy should you apply to the devices?

    • A. iothubowner
    • B. service
    • C. registryReadWrite
    • D. device

    Answer: D

Explanation: Per-device security credentials. Each IoT hub contains an identity registry. For each device in this identity registry, you can configure security credentials that grant DeviceConnect permissions scoped to the corresponding device endpoints.
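
For context, the built-in shared access policies of an IoT hub can be listed from PowerShell; the device policy is the one that carries only the DeviceConnect permission, while the per-device keys themselves come from the identity registry. A minimal sketch with the AzureRM.IotHub module, assuming hypothetical names (rg-iot, medical-hub):

# List all shared access policies of the hub and their rights
Get-AzureRmIotHubKey -ResourceGroupName "rg-iot" -Name "medical-hub"

# Retrieve only the "device" policy, which grants just DeviceConnect
Get-AzureRmIotHubKey -ResourceGroupName "rg-iot" -Name "medical-hub" -KeyName "device"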

    NEW QUESTION 6
    You have a Microsoft Azure data factory.
    You assign administrative roles to the users in the following table.
    70-475 dumps exhibit
    You discover that several new data factory instances were created.
    You need to ensure that only User5 can create a new data factory instance.
    Which two roles should you change? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

    • A. User2 to Reader
    • B. User3 to Contributor
    • C. User1 to Reader
    • D. User4 to Contributor
    • E. User5 to Administrator

    Answer: AC
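
The reasoning is that creating a data factory requires write access (for example, the Contributor role) on the subscription or resource group, while the Reader role only allows viewing resources. A minimal sketch of downgrading a user with the AzureRM resource cmdlets, assuming hypothetical values (user1@contoso.com, rg-datafactory):

# Remove the existing Contributor assignment for the user on the resource group
Remove-AzureRmRoleAssignment -SignInName "user1@contoso.com" `
    -RoleDefinitionName "Contributor" -ResourceGroupName "rg-datafactory"

# Grant Reader instead, so the user can view but no longer create data factories
New-AzureRmRoleAssignment -SignInName "user1@contoso.com" `
    -RoleDefinitionName "Reader" -ResourceGroupName "rg-datafactory"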

    NEW QUESTION 7
    You have a Microsoft Azure Data Factory pipeline.
You discover that the pipeline fails to execute because data is missing. You need to rerun the failed slice in the pipeline.
    Which cmdlet should you use?

    • A. Set-AzureRmAutomationJob
    • B. Set-AzureRmDataFactorySliceStatus
    • C. Resume-AzureRmDataFactoryPipeline
    • D. Resume-AzureRmAutomationJob

    Answer: B

Explanation: Use PowerShell to inspect the Data Factory activity for the missing-file error, then set the dataset slice status to either Skipped or Ready by using the cmdlet to override the status.
    For example:
    Set-AzureRmDataFactorySliceStatus `
    -ResourceGroupName $ResourceGroup `
    -DataFactoryName $ADFName.DataFactoryName `
    -DatasetName $Dataset.OutputDatasets `
    -StartDateTime $Dataset.WindowStart `
    -EndDateTime $Dataset.WindowEnd `
    -Status "Ready" `
    -UpdateType "Individual" References:
    https://stackoverflow.com/questions/42723269/azure-data-factory-pipelines-are-failing-when-no-files-available-

    NEW QUESTION 8
    You are designing an Apache HBase cluster on Microsoft Azure HDInsight. You need to identify which nodes are required for the cluster.
    Which three nodes should you identify? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

    • A. Nimbus
    • B. Zookeeper
    • C. Region
    • D. Supervisor
    • E. Falcon
    • F. Head

    Answer: BCF

    Explanation: https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters
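
When the HBase cluster type is provisioned, HDInsight creates the head, region (worker), and ZooKeeper nodes for you; Nimbus and Supervisor nodes belong to Storm clusters. A minimal provisioning sketch with the AzureRM.HDInsight module, assuming hypothetical names and an existing storage account:

$httpCreds = Get-Credential   # cluster login (HTTP) credentials
$sshCreds  = Get-Credential   # SSH credentials

# Hypothetical names; the HBase cluster type yields head, region (worker), and ZooKeeper nodes
New-AzureRmHDInsightCluster -ResourceGroupName "rg-hbase" -ClusterName "hbase-demo" `
    -Location "West Europe" -ClusterType HBase -OSType Linux -ClusterSizeInNodes 4 `
    -HttpCredential $httpCreds -SshCredential $sshCreds `
    -DefaultStorageAccountName "hbasedemostore.blob.core.windows.net" `
    -DefaultStorageAccountKey "<storage-key>" `
    -DefaultStorageContainer "hbase-demo"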

    NEW QUESTION 9
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
    You have an Apache Spark system that contains 5 TB of data.
    You need to write queries that analyze the data in the system. The queries must meet the following requirements:
• Use static data typing.
• Execute queries as quickly as possible.
• Have access to the latest language features.
Solution: You write the queries by using Python. Does this meet the goal?

    • A. Yes
    • B. No

Answer: B

Explanation: Python is dynamically typed, so it does not meet the static data typing requirement. Writing the queries in Scala satisfies all three requirements: static typing, the best query performance, and access to the latest Spark language features.

    NEW QUESTION 10
    You plan to deploy Microsoft Azure HDInsight clusters for business analytics and data pipelines. The clusters must meet the following requirements:
• Business users must use a language that is similar to SQL.
• The authoring of data pipelines must occur in a dataflow language.
You need to identify which language must be used for each requirement.
    Which languages should you identify? To answer, drag the appropriate languages to the correct requirements. Each language may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
    70-475 dumps exhibit

      Answer:

Explanation: 70-475 dumps exhibit
Hive provides HiveQL, a SQL-like language suited to business users, while Pig provides Pig Latin, a dataflow language suited to authoring data pipelines.

      NEW QUESTION 11
      You plan to implement a Microsoft Azure Data Factory pipeline. The pipeline will have custom business logic that requires a custom processing step.
      You need to implement the custom processing step by using C#.
      Which interface and method should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
      70-475 dumps exhibit

        Answer:

Explanation: In Azure Data Factory version 1, a custom .NET activity implements the IDotNetActivity interface, and the custom processing logic goes in its Execute method.
References:
https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/data-factory/v1/data-factory-use-custom-activ

        NEW QUESTION 12
You are designing a partitioning scheme for ingesting real-time data by using Kafka. Kafka and Apache Storm will be integrated. You plan to use four event processing servers that each run as a Kafka consumer. Each server will have two quad-core processors. You need to identify the minimum number of partitions required to ensure that the load is distributed evenly. How many should you identify?

        • A. 1
        • B. 4
        • C. 16
        • D. 32

        Answer: B

        NEW QUESTION 13
        You are designing an Internet of Things (IoT) solution intended to identify trends. The solution requires the
        real-time analysis of data originating from sensors. The results of the analysis will be stored in a SQL database.
        You need to recommend a data processing solution that uses the Transact-SQL language. Which data processing solution should you recommend?

        • A. Microsoft Azure Stream Analytics
        • B. Microsoft Azure HDInsight Spark clusters
        • C. Microsoft Azure Event Hubs
        • D. Microsoft Azure HDInsight Hadoop clusters

        Answer: A

Explanation: For Internet of Things (IoT) scenarios that use Event Hubs, Azure Stream Analytics can serve as a possible first step to perform near real-time analytics on telemetry data. Just like Event Hubs, Stream Analytics supports the streaming of millions of events per second. Unlike a standard database, analysis is performed on data in motion. This streaming input data can also be combined with reference data inputs to perform lookups or correlations that help unlock business insights. It uses a SQL-like language to simplify the analysis of data inputs and to detect anomalies, trigger alerts, or transform the data in order to create valuable outputs.
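
To illustrate the SQL-like language, here is a hedged sketch that deploys a simple trend query as the job's transformation with the AzureRM.StreamAnalytics module; the job name, input/output names, and field names are hypothetical:

# Transformation definition: a SQL-like query that turns raw sensor events into
# 5-minute average temperatures per device, written to the SQL database output
$transformation = @'
{
  "name": "SensorTrends",
  "properties": {
    "streamingUnits": 1,
    "query": "SELECT deviceId, AVG(temperature) AS avgTemperature INTO SqlDbOutput FROM SensorInput TIMESTAMP BY eventTime GROUP BY deviceId, TumblingWindow(minute, 5)"
  }
}
'@
Set-Content -Path ".\SensorTrends.json" -Value $transformation

New-AzureRmStreamAnalyticsTransformation -ResourceGroupName "rg-iot" -JobName "sensor-trends-job" `
    -Name "SensorTrends" -File ".\SensorTrends.json" -Force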

        NEW QUESTION 14
        You need to recommend a data handling solution to support the planned changes to the dashboard. The solution must meet the privacy requirements.
        What is the best recommendation to achieve the goal? More than one answer choice may achieve the goal. Select the BEST answer.

        • A. anonymization
        • B. encryption
        • C. obfuscation
        • D. compression

        Answer: C

        NEW QUESTION 15
You have a Microsoft Azure Machine Learning solution that contains several Azure Data Factory pipeline jobs.
You discover that the job for a dataset named CustomerSalesData fails. You resolve the issue that caused the job to fail.
        You need to rerun the slices for CustomerSalesData. What should you do?

• A. Run the Set-AzureRMDataFactorySliceStatus cmdlet and specify the -Status Retry parameter.
• B. Run the Set-AzureRMDataFactorySliceStatus cmdlet and specify the -Status PendingExecution parameter.
• C. Run the Resume-AzureRMDataFactoryPipeline cmdlet and specify the -Status Retry parameter.
• D. Run the Resume-AzureRMDataFactoryPipeline cmdlet and specify the -Status PendingExecution parameter.

        Answer: B
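
A usage sketch of the cmdlet and parameter named in the answer, with hypothetical resource names and a hypothetical one-week slice window; setting the slice status back to PendingExecution asks Data Factory to process those slices again:

Set-AzureRmDataFactorySliceStatus `
    -ResourceGroupName "rg-adf" `
    -DataFactoryName "MyDataFactory" `
    -DatasetName "CustomerSalesData" `
    -StartDateTime "2019-01-01T00:00:00Z" `
    -EndDateTime "2019-01-08T00:00:00Z" `
    -Status "PendingExecution" `
    -UpdateType "Individual"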

        NEW QUESTION 16
You plan to deploy a Microsoft Azure Data Factory pipeline to run an end-to-end data processing workflow. You need to recommend which Azure Data Factory features must be used to meet the following requirements:
Track the run status of the historical activity.
        Enable alerts and notifications on events and metrics.
        Monitor the creation, updating, and deletion of Azure resources.
        Which features should you recommend? To answer, drag the appropriate features to the correct requirements. Each feature may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
        NOTE: Each correct selection is worth one point.
        70-475 dumps exhibit

          Answer:

Explanation: Box 1: Azure HDInsight logs. Logs contain the historical activities.
Box 2: Azure Data Factory alerts
Box 3: Azure Data Factory events
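
As a rough illustration of where to look from PowerShell (resource names are hypothetical): historical pipeline and activity runs can be listed from the data factory itself, while the creation, update, and deletion of Azure resources is recorded in the subscription's activity log; alerts on events and metrics are then configured on top of those sources.

# Run status of historical activity for a (hypothetical) V2 data factory over the last week
Get-AzureRmDataFactoryV2PipelineRun -ResourceGroupName "rg-adf" -DataFactoryName "MyDataFactory" `
    -LastUpdatedAfter (Get-Date).AddDays(-7) -LastUpdatedBefore (Get-Date)

# Creation, update, and deletion of Azure resources recorded in the activity log
Get-AzureRmLog -ResourceGroup "rg-adf" -StartTime (Get-Date).AddDays(-7)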

          NEW QUESTION 17
          You have an Apache Storm cluster.
          The cluster will ingest data from a Microsoft Azure event hub.
          The event hub has the characteristics described in the following table.
          70-475 dumps exhibit
          You are designing the Storm application topology.
          You need to ingest data from all of the partitions. The solution must maximize the throughput of the data ingestion.
          Which setting should you use?

          • A. Partition Count
          • B. Message Retention
          • C. Partition Key
          • D. Shared access policies

Answer: A

Explanation: To ingest from every partition while maximizing throughput, configure the Storm event hub spout parallelism to match the event hub's Partition Count, so that one spout task reads from each partition.

          100% Valid and Newest Version 70-475 Questions & Answers shared by Certleader, Get Full Dumps HERE: https://www.certleader.com/70-475-dumps.html (New 102 Q&As)