Microsoft 70-765 Dumps Questions 2019

Master the content and be ready for exam-day success with these practice questions. The latest valid questions and answers are collected in our Microsoft 70-765 braindumps below. You can use them to prepare for and pass your exam.

Online Microsoft 70-765 free dumps demo Below:

You administer a Microsoft SQL Server 2014 failover cluster.
You need to ensure that a failover occurs when the server diagnostics returns query_processing error. Which server configuration property should you set?

  • A. SqlDumperDumpFlags
  • B. FailureConditionLevel
  • C. HealthCheckTimeout
  • D. SqlDumperDumpPath

Answer: B

Explanation: Use the FailureConditionLevel property to set the conditions for the Always On Failover Cluster Instance (FCI) to fail over or restart.
The failure conditions are set on an increasing scale. For levels 1-5, each level includes all the conditions from the previous levels in addition to its own conditions.
Note: The system stored procedure sp_server_diagnostics periodically collects component diagnostics on the SQL instance. The diagnostic information that is collected is surfaced as a row for each of the following components and passed to the calling thread.
The system, resource, and query_processing components are used for failure detection. The io_subsystem and events components are used for diagnostic purposes only.
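As a sketch, the property can be raised with T-SQL. Level 5 is the broadest setting ("fail over or restart on any qualified failure condition"), which is the level that reacts to query_processing errors reported by sp_server_diagnostics; verify the level against the documentation for your build before applying it:

```sql
-- Raise the failover threshold for the Failover Cluster Instance.
-- Level 5 = fail over or restart on any qualified failure condition,
-- including query_processing errors from sp_server_diagnostics.
ALTER SERVER CONFIGURATION
    SET FAILOVER CLUSTER PROPERTY FailureConditionLevel = 5;
```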

You administer a Microsoft SQL Server 2014 database named Contoso on a server named Server01.
You need to collect data for a long period of time to troubleshoot wait statistics when querying Contoso. You also need to ensure minimum impact to the server.
What should you create?

  • A. An Alert
  • B. A Resource Pool
  • C. An Extended Event session
  • D. A Server Audit Specification
  • E. A SQL Profiler Trace
  • F. A Database Audit Specification
  • G. A Policy
  • H. A Data Collector Set

Answer: C

Explanation: SQL Server Extended Events has a highly scalable and highly configurable architecture that allows users to collect as much or as little information as is necessary to troubleshoot or identify a performance problem.
Extended Events is a lightweight performance monitoring system that uses very few performance resources. A SQL Server Extended Events session is created and runs inside the SQL Server process that hosts the Extended Events engine, making it well suited to long-running, low-impact data collection.
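A minimal sketch of such a session for wait statistics (the session name and file path are illustrative, not part of the original question):

```sql
-- Capture completed waits for queries against the Contoso database
-- and write them to a file target for long-term, low-impact collection.
CREATE EVENT SESSION WaitStats_Contoso ON SERVER
ADD EVENT sqlos.wait_info (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE sqlserver.database_name = N'Contoso')
ADD TARGET package0.event_file (SET filename = N'C:\XE\WaitStats_Contoso.xel')
WITH (STARTUP_STATE = ON);

-- Start collecting.
ALTER EVENT SESSION WaitStats_Contoso ON SERVER STATE = START;
```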

You have a Microsoft SQL Server 2014 instance named SRV2014 that has a single tempdb database file. The tempdb database file is eight gigabytes (GB) in size.
You install a SQL Server 2016 instance named SRV2016 by using default settings. The new instance has eight logical processor cores.
You plan to migrate the databases from SRV2014 to SRV2016.
You need to configure the tempdb database on SRV2016. The solution must minimize the number of future tempdb autogrowth events.
What should you do?

  • A. Increase the size of the tempdb data file to 8 GB.
  • B. In the tempdb database, set the value of the MAXDOP property to 8.
  • C. Increase the size of the tempdb data files to 1 GB.
  • D. Add seven additional tempdb data files.
  • E. In the tempdb database, set the value of the MAXDOP property to 8.
  • F. Set the value for the autogrowth setting for the tempdb data file to 128 megabytes (MB). Add seven additional tempdb data files and set the autogrowth value to 128 MB.

Answer: C

Explanation: In an effort to simplify the tempdb configuration experience, SQL Server 2016 setup has been extended to configure various properties for tempdb for multi-processor environments.
1. A new tab dedicated to tempdb has been added to the Database Engine Configuration step of setup workflow.
2. Configuration options: Data Files
* Number of files – this will default to the lower value of 8 or the number of logical cores as detected by setup.
* Initial size – is specified in MB and applies to each tempdb data file. This makes it easier to configure all files of the same size. Total initial size is the cumulative tempdb data file size (Number of files * Initial size) that will be created.
* Autogrowth – is specified in MB (fixed growth is preferred as opposed to a non-linear percentage-based growth) and applies to each file. The default value of 64 MB was chosen to cover one PFS interval.
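On an existing instance the same layout can be scripted. The sketch below pre-sizes the eight setup-created files at 1 GB each (8 files x 1 GB = the 8 GB the workload used on SRV2014) so routine activity does not trigger autogrowth; the logical file names follow the SQL Server 2016 setup defaults and should be checked against sys.master_files:

```sql
-- Pre-size each tempdb data file so the total matches the old 8 GB tempdb.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 64MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp2,   SIZE = 1024MB, FILEGROWTH = 64MB);
-- ...repeat for temp3 through temp8 on an eight-core server.
```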

You use a Microsoft SQL Server 2014 database that contains two tables named SalesOrderHeader and SalesOrderDetail. The indexes on the tables are as shown in the exhibit.
(Click the Exhibit button.)
(Exhibit image not included.)
You write the following Transact-SQL query:
(Exhibit image not included.)
You discover that the performance of the query is slow. Analysis of the query plan shows table scans, the estimated rows do not match the actual rows for SalesOrderHeader, and an unexpected index is used on SalesOrderDetail.
You need to improve the performance of the query. What should you do?

  • A. Use a FORCESCAN hint in the query.
  • B. Add a clustered index on SalesOrderId in SalesOrderHeader.
  • C. Use a FORCESEEK hint in the query.
  • D. Update statistics on SalesOrderId on both tables.

Answer: D

Explanation: New statistics would be useful.
The UPDATE STATISTICS command updates query optimization statistics on a table or indexed view. By default, the query optimizer already updates statistics as necessary to improve the query plan; in some cases you can improve query performance by using UPDATE STATISTICS or the stored procedure sp_updatestats to update statistics more frequently than the default updates.
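For example (the Sales schema name is assumed for illustration; adjust to the actual tables):

```sql
-- Rebuild statistics from a full scan so estimated row counts
-- match actual row counts in the query plan.
UPDATE STATISTICS Sales.SalesOrderHeader WITH FULLSCAN;
UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN;
```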

You administer a Microsoft SQL Server 2014 server. The MSSQLSERVER service uses a domain account named CONTOSO\SQLService.
You plan to configure Instant File Initialization.
You need to ensure that Data File Autogrow operations use Instant File Initialization. What should you do? Choose all that apply.

  • A. Restart the SQL Server Agent Service.
  • B. Disable snapshot isolation.
  • C. Restart the SQL Server Service.
  • D. Add the CONTOSO\SQLService account to the Perform Volume Maintenance Tasks local security policy.
  • E. Add the CONTOSO\SQLService account to the Server Operators fixed server role.
  • F. Enable snapshot isolation.

Answer: CD

Explanation: Instant File Initialization is enabled by granting the SQL Server service account (CONTOSO\SQLService) the Perform Volume Maintenance Tasks right in the Local Security Policy. The SQL Server service must then be restarted for the change to take effect.
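On SQL Server 2012 SP4, 2016 SP1, and later builds, whether Instant File Initialization is in effect can be checked from T-SQL (the column is not present on older builds):

```sql
-- 'Y' means the service account holds Perform Volume Maintenance Tasks
-- and data file growth can skip zero-initialization.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
```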

You administer a Microsoft SQL Server 2014 instance.
The instance contains a database that supports a retail sales application. The application generates hundreds of transactions per second and is online 24 hours per day and 7 days per week.
You plan to define a backup strategy for the database. You need to ensure that the following requirements are met:
No more than 5 minutes' worth of transactions are lost. Data can be recovered by using the minimum amount of administrative effort.
What should you do? Choose all that apply.

  • A. Configure the database to use the SIMPLE recovery model.
  • B. Create a DIFFERENTIAL database backup every 4 hours.
  • C. Create a LOG backup every 5 minutes.
  • D. Configure the database to use the FULL recovery model.
  • E. Create a FULL database backup every 24 hours.
  • F. Create a DIFFERENTIAL database backup every 24 hours.

Answer: BCDE

Explanation: The full recovery model uses log backups to prevent data loss in the broadest range of failure scenarios; backing up and restoring the transaction log (log backups) is required. The advantage of using log backups is that they let you restore a database to any point in time that is contained within a log backup (point-in-time recovery). You can use a series of log backups to roll a database forward to any point in time that is contained in one of the log backups. Be aware that to minimize your restore time, you can supplement each full backup with a series of differential backups of the same data.
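A sketch of the resulting schedule (the database name and backup paths are illustrative):

```sql
-- The full recovery model is required for log backups.
ALTER DATABASE Retail SET RECOVERY FULL;

-- Every 24 hours:
BACKUP DATABASE Retail TO DISK = N'E:\Backup\Retail_full.bak';

-- Every 4 hours (reduces restore effort between full backups):
BACKUP DATABASE Retail TO DISK = N'E:\Backup\Retail_diff.bak' WITH DIFFERENTIAL;

-- Every 5 minutes (bounds data loss to 5 minutes):
BACKUP LOG Retail TO DISK = N'E:\Backup\Retail_log.trn';
```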

A company has an on-premises Microsoft SQL Server 2017 infrastructure. The storage area network (SAN) that supports the SQL infrastructure has reached maximum capacity.
You need to recommend a solution to reduce on-premises storage use without changing the application. What should you do?

  • A. Configure an Express Route connection to Microsoft Azure.
  • B. Configure a Microsoft Azure Key Vault.
  • C. Configure geo-replication on the SAN.
  • D. Configure SQL Server Stretch Database in Microsoft Azure.

Answer: D

Explanation: Stretch warm and cold transactional data dynamically from SQL Server to Microsoft Azure with SQL Server Stretch Database. Unlike typical cold data storage, your data is always online and available to query. Benefit from the low cost of Azure rather than scaling expensive, on-premises storage.
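A sketch of enabling Stretch for a single table (the table name is illustrative; database-level Stretch configuration, including a database scoped credential for the Azure server, is also required and is omitted here):

```sql
-- Allow the instance to use Stretch Database.
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- Begin migrating the table's rows to Azure.
ALTER TABLE dbo.SalesHistory
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
```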

You administer a Microsoft SQL Server 2014 server. You plan to deploy new features to an application. You need to evaluate existing and potential clustered and non-clustered indexes that will improve query performance.
What should you do?

  • A. Query the sys.dm_db_index_usage_stats DMV.
  • B. Query the sys.dm_db_missing_index_details DMV.
  • C. Use the Database Engine Tuning Advisor.
  • D. Query the sys.dm_db_missing_index_columns DMV.

Answer: C

Explanation: The Microsoft Database Engine Tuning Advisor (DTA) analyzes databases and makes recommendations that you can use to optimize query performance. You can use the Database Engine Tuning Advisor to select and create an optimal set of indexes, indexed views, or table partitions without having an expert understanding of the database structure or the internals of SQL Server.

You have Microsoft SQL Server on a Microsoft Azure virtual machine. You create a SQL Server Agent job by using the following statement.
(Exhibit image not included.)
You need to send an email message if the job fails. Which stored procedure should you use?

  • A. msd
  • B. db
  • C. msdb.dbo.sp_update_alert
  • D. msdb.dbo.sp_add_jobstep
  • E. msdb.dbo.sp_add_notification
  • F. msdb.dbo.sp_help_alert

Answer: C

Explanation: To notify an operator of job status through Transact-SQL:
In Object Explorer, connect to an instance of Database Engine. On the Standard bar, click New Query.

-- Adds an e-mail notification for the specified alert (Test Alert).
-- This example assumes that Test Alert already exists
-- and that François Ajenstat is a valid operator name.
USE msdb ;
GO
EXEC dbo.sp_add_notification
    @alert_name = N'Test Alert',
    @operator_name = N'François Ajenstat',
    @notification_method = 1 ;
GO

Your company has several Microsoft Azure SQL Database instances used within an elastic pool. You need to obtain a list of databases in the pool.
How should you complete the commands? To answer, drag the appropriate segments to the correct targets. Each segment may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
(Exhibit image not included.)



    You administer a Microsoft SQL Server 2014 instance that has multiple databases. You have a two-node SQL Server failover cluster. The cluster uses a storage area network (SAN). You discover I/O issues. The SAN is at capacity and additional disks cannot be added.
    You need to reduce the I/O workload on the SAN at a minimal cost. What should you do?

    • A. Move user databases to a local disk.
    • B. Expand the tempdb data and log files
    • C. Modify application code to use table variables
    • D. Move the tempdb files to a local disk

    Answer: D

    Explanation: The use of local disks for TempDB allows us to have more flexibility when configuring for optimal performance. It is a common performance recommendation to create the TempDB database on the fastest storage available. With the capability to utilize local disk for TempDB placement we can easily utilize disks that are larger, have a higher rotational speed or use SSD disks.
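Moving tempdb off the SAN is done by pointing its files at the new local path (the paths below are illustrative); the change takes effect after the SQL Server service restarts:

```sql
-- Relocate tempdb to fast local storage (e.g. a local SSD).
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = N'D:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = N'D:\TempDB\templog.ldf');
-- Restart the SQL Server service; the files are recreated at the new location.
```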

    You need to create an Elastic Database job to rebuild indexes across 10 Microsoft Azure SQL databases. Which PowerShell cmdlet should you run?

    • A. New-AzureSqlJob
    • B. New-AzureWebsiteJob
    • C. New-AzureBatchJob
    • D. New-ScheduledJobOption
    • E. New-JobTrigger

    Answer: A

    Explanation: The New-AzureSqlJob cmdlet, in the ElasticDatabaseJobs module, creates a job definition to be used for subsequent job runs.

    Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets stated goals.
    You have a mission-critical application that stores data in a Microsoft SQL Server instance. The application runs several financial reports. The reports use a SQL Server-authenticated login named Reporting_User. All queries that write data to the database use Windows authentication.
    Users report that the queries used to provide data for the financial reports take a long time to complete. The queries consume the majority of CPU and memory resources on the database server. As a result, read-write queries for the application also take a long time to complete.
    You need to improve performance of the application while still allowing the report queries to finish.
    Solution: You configure the Resource Governor to limit the amount of memory, CPU, and IOPS used for the pool of all queries that the Reporting_user login can run concurrently.
    Does the solution meet the goal?

    • A. Yes
    • B. No

    Answer: A

    Explanation: SQL Server Resource Governor is a feature that you can use to manage SQL Server workload and system resource consumption. Resource Governor enables you to specify limits on the amount of CPU, physical IO, and memory that incoming application requests can use.
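A sketch of such a configuration (the pool, group, function names, and limit values are illustrative; the classifier function must be created in the master database):

```sql
-- Create a pool that caps CPU, memory, and (SQL Server 2014+) physical IO.
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_CPU_PERCENT = 20,
          MAX_MEMORY_PERCENT = 20,
          MAX_IOPS_PER_VOLUME = 100);

CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
GO

-- Route Reporting_User sessions into the pool; everything else
-- stays in the default group.
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF SUSER_SNAME() = N'Reporting_User'
        SET @grp = N'ReportingGroup';
    RETURN @grp;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```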

    You have a database named DB1 that uses simple recovery mode.
    Full backups of DB1 are taken daily and DB1 is checked for corruption before each backup. There was no corruption when the last backup was complete.
    You run the sys.columns catalog view and discover corrupt pages.
    You need to recover the database. The solution must minimize data loss. What should you do?

    • B. Perform a page restore.
    • C. Run DBCC CHECKDB and specify the REPAIR_ALLOW_DATA_LOSS parameter.
    • D. Run DBCC CHECKDB and specify the REPAIR_REBUILD parameter.

    Answer: B

    Explanation: A page restore is intended for repairing isolated damaged pages. Restoring and recovering a few individual pages might be faster than a file restore, reducing the amount of data that is offline during a restore operation.
    Restores individual pages. Page restore is available only under the full and bulk-logged recovery models.
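A sketch of a page restore (the file:page IDs and backup paths are illustrative):

```sql
-- Restore only the damaged pages from the last full backup.
RESTORE DATABASE DB1
    PAGE = '1:57, 1:202'
    FROM DISK = N'E:\Backup\DB1_full.bak'
    WITH NORECOVERY;

-- Restore any existing log backups WITH NORECOVERY (omitted here), then
-- take a fresh log backup and restore it to bring the pages current.
BACKUP LOG DB1 TO DISK = N'E:\Backup\DB1_tail.trn';
RESTORE LOG DB1 FROM DISK = N'E:\Backup\DB1_tail.trn' WITH RECOVERY;
```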

    A company has an on-premises Microsoft SQL Server 2014 environment. The company has a main office in Seattle, and remote offices in Amsterdam and Tokyo. You plan to deploy a Microsoft Azure SQL Database instance to support a new application. You expect to have 100 users from each office.
    In the past, users at remote sites reported issues when they used applications hosted at the Seattle office.
    You need to optimize performance for users running reports while minimizing costs. What should you do?

    • A. Implement an elastic pool.
    • B. Implement a standard database with readable secondaries in Asia and Europe, and then migrate the application.
    • C. Implement replication from an on-premises SQL Server database to the Azure SQL Database instance.
    • D. Deploy a database from the Premium service tier.

    Answer: B

    Explanation: Readable secondaries placed in regions near the remote offices (Asia and Europe) let report queries run against a nearby replica, improving performance for those users, while the Standard service tier keeps costs lower than Premium.

    You manage an on-premises, multi-tier application that has the following configuration:
    Two servers named SQL1 and SQL2 that run SQL Server 2012
    Two application servers named AppServer1 and AppServer2 that run IIS
    You plan to move your application to Azure.
    You need to ensure that during an Azure update cycle or a hardware failure, the application remains available.
    Which two deployment configurations should you implement? Each correct answer presents part of the solution.

    • A. Deploy AppServer1 and AppServer2 in a single availability set.
    • B. Deploy all servers in a single availability set.
    • C. Deploy SQL1 and AppServer1 in a single availability set.
    • D. Deploy SQL2 and AppServer2 in a single availability set.
    • E. Deploy SQL1 and SQL2 in a single availability set.

    Answer: AE

    Explanation: You should deploy AppServer1 and AppServer2 in a single availability set. You should deploy SQL1 and SQL2 in a single availability set.
    Note: Using availability sets allows you to build in redundancy for your Azure services. By grouping related virtual machines and services (tiers) into an availability set (in this case, deploying both of your databases into an availability set), you ensure that if there is a planned or unplanned outage, your services will remain available. At the most basic level, virtual machines in an availability set are put into a different fault domain and update domain. An update domain allows virtual machines to have updates installed and then the virtual machines are rebooted together.
    If you have two virtual machines in an availability set, each in its own update domain, a rebooting of one server does not bring down all of the servers in a given tier. A fault domain operates in the same manner, so if there is a physical problem with a server, rack, network, or other service, both machines are separated, and services will continue.

    100% Valid and Newest Version 70-765 Questions & Answers shared by Certleader, Get Full Dumps HERE: (New 209 Q&As)