Azure SQL Database is Microsoft’s fully managed cloud relational database service in Microsoft Azure. It shares the same code base as traditional SQL Server, but under Microsoft’s cloud-first strategy the newest SQL Server features are actually released to Azure SQL Database first.
Already a successful product for Microsoft, it will gain even more traction as time goes on. However, I am frequently asked what the key differences are between Azure SQL Database and a locally installed SQL Server from a DBA perspective, and whether these differences are advantages or disadvantages.
Advantages of Azure SQL Database
After reading this section you will understand why Azure SQL Database is such a popular product; how complete the product is might even surprise some readers.
High Availability

High availability is one of the key benefits of moving to Azure SQL Database. Having it built into the product is a massive advantage and ensures that your database adheres to the 99.99% availability service level agreement (SLA) set by Microsoft. To enhance availability further, you can set up active geo-replication for your databases, which lets you configure up to four readable secondary databases in the same or different data centre locations (regions). Setting this up with your local SQL Servers would be a great technical challenge in itself; think along the lines of Distributed Availability Groups. A more recent enhancement from Microsoft is failover groups in Azure, a feature that automatically manages the geo-replication relationship, connectivity and failover at scale.
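Outside the portal, active geo-replication can also be driven with T-SQL. As a minimal sketch (the database and partner server names here are hypothetical), run against the master database of the primary logical server:

```sql
-- Hypothetical names: database [SalesDb], partner logical server [sqlserver-west].
-- Run in the master database of the primary server.
ALTER DATABASE [SalesDb]
ADD SECONDARY ON SERVER [sqlserver-west]
WITH (ALLOW_CONNECTIONS = ALL);   -- make the secondary readable

-- To break the geo-replication link later:
-- ALTER DATABASE [SalesDb] REMOVE SECONDARY ON SERVER [sqlserver-west];
```

The portal performs the equivalent of this statement behind the scenes; the T-SQL route is useful when scripting deployments.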
Setup via the portal is extremely simple: as you can see from the screenshot below, with a couple of clicks we have enabled active geo-replication between two data centres in the USA.
Transparency is achieved via the failover group’s dedicated read/write listener endpoint, so it does not matter which server your database is currently running on; you will automatically be routed to it.
Backups and Consistency Checks
As a DBA, configuring and tuning backups is an important activity; no production database should be without them. In Azure, all your databases, whether you consider them production or non-production, are automatically backed up at no additional charge. Just think about how much time, effort and disk space you are saving here.
Microsoft utilise full, differential and transaction log backups, which also gives you the ability to perform point-in-time restores. In terms of schedules, full database backups happen weekly, differential backups every couple of hours and transaction log backups every 5-10 minutes. Another type of restore, known as geo-restore, gives you the ability to restore a database to another geographical region should a regional disaster occur.
A key question often asked at this stage concerns retention. The number of days a backup is kept depends on your database’s service tier: basic databases keep backups for seven days, while standard and premium databases have a retention of thirty-five days. If this is not enough, you have the option of long-term retention (using Azure Backup Vault), which can extend it to ten years, naturally at a cost.
The following chart is a great summary of the options, relating them to ERT (Estimated Recovery Time), RPO (Recovery Point Objective) and RTO (Recovery Time Objective).
No backup and recovery section is complete without mentioning corruption. Corruption is every DBA’s worst nightmare; thankfully, it is Microsoft’s responsibility to ensure that all your databases are corruption free. There used to be confusion around whose responsibility this really was. Microsoft do give us the ability to execute DBCC CHECKDB, but there is actually no need: they are very thorough and use the following techniques to ensure database integrity:
- Backup and restore integrity checks.
- I/O system lost write detection mechanism.
- Automatic Page Repair.
- Data integrity at rest and in transit (CHECKSUM).
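Should you still wish to run your own integrity check for peace of mind, the familiar commands work unchanged against an Azure SQL Database:

```sql
-- Optional manual check; Microsoft already handle integrity for you.
DBCC CHECKDB WITH NO_INFOMSGS;

-- Lighter-weight variant that checks physical consistency only:
DBCC CHECKDB WITH PHYSICAL_ONLY;
```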
Incidents that do not impact data or database availability are corrected without notifying you. If there are issues, communication is key, and you as the customer will be kept up to date with the engineering team’s investigations. Again, this is a very big advantage and removes another administration task that you as the DBA need not worry about.
Automatic Tuning

We are moving into the realm of self-tuning databases, and Microsoft’s interpretation of this is called Automatic Tuning. Some may see it as a threat to the DBA role, but it is not: its goal is to do the basic tuning for you, freeing up time for you to spend elsewhere.
Microsoft have developed complex algorithms for this feature; it is not as simple as something like the well-known Database Engine Tuning Advisor. As you can see from the following screenshot, there are many components to it.
It is fundamentally built on a process of Learn, Adapt and Verify. Automatic tuning operates in three areas:
- FORCE_LAST_GOOD_PLAN – sometimes known as Automatic Plan Regression Correction; it detects when a query is using a regressed plan and forces the last known good plan instead.
- CREATE_INDEX – identifies the indexes that may improve the performance of your workload, creates them, and verifies that they improve query performance. This applies only to non-clustered indexes.
- DROP_INDEX – identifies redundant and duplicate indexes, again only non-clustered ones.
As a DBA you might be nervous about enabling these options: what happens if the system creates, for example, a non-clustered index that causes query performance to regress? Automatic Tuning is advanced enough to know this and will revert any change that has a detrimental effect on your database. As I mentioned before, it is always adapting and verifying its implemented changes. Note that Automatic Plan Regression Correction is only available on-premises from SQL Server 2017, and the index tuning features are not available in any on-premises SQL Server.
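The three areas above can be switched on individually with T-SQL rather than through the portal. A minimal sketch, run in the context of the user database:

```sql
-- Enable Automatic Tuning options for the current Azure SQL Database.
-- This example leaves DROP_INDEX off; choose the mix that suits you.
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON, CREATE_INDEX = ON, DROP_INDEX = OFF);

-- Inspect the desired versus actual state of each option.
SELECT name, desired_state_desc, actual_state_desc
FROM sys.database_automatic_tuning_options;
```

Comparing `desired_state_desc` with `actual_state_desc` is a quick way to confirm the options really took effect.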
Azure SQL Analytics

Nothing within an on-premises SQL Server comes close to the capability of Azure SQL Analytics: once set up, it puts a tremendous amount of information at your fingertips as the DBA. There are a couple of things you need to do first:
- Setup a Log Analytics workspace.
- Enable diagnostics for your SQL Databases and/or elastic pools.
Once set up, it should take approximately 15 minutes to start capturing and rendering data. At a high level we get information around query statistics and something called Intelligent Insights.
Intelligent Insights is a separate tool, but it is integrated into Azure SQL Analytics and gives you the ability to see issues in real time. For example, from the screenshot below you can see that query hash 0x9002FGJR has been picked up by the advanced algorithms as a source of blocking, causing concurrency issues.
To get this level of information for a locally installed SQL Server you would most likely need to purchase third party software.
One area where this tool really excels is the information around database wait statistics. If you are a DBA, you will know about server-level wait statistics for your local SQL Servers; within Azure you can capture wait statistics at the database level, and this tool is the best way to extract that sort of information. If you have Azure Log Analytics query-writing skills, some great output is possible. For example:
```kusto
AzureDiagnostics
| where Category == "DatabaseWaitStatistics"
| where ResourceId contains '/SUBSCRIPTIONS/2BEED551-A8D2220-422863-922777-AAAECASA745098F3ZB7/RESOURCEGROUPS/RG_AKS/PROVIDERS/MICROSOFT.SQL/SERVERS/SQL01/DATABASES/SQLDB1'
| summarize sum(delta_wait_time_ms_d) by bin(start_utc_date_t, 15m), wait_type_s
| render barchart
```
As you can see, this can be a great way to track certain wait types.
Disadvantages of Azure SQL Database
Missing feature – SQL Agent
Over time, Microsoft have worked hard to bridge the gap between Azure SQL Database and SQL Server in terms of core features. As of January 2018 there was not a lot missing; naturally some features, such as FILESTREAM, are still unavailable, but from a DBA perspective there is only really one key difference: there is no SQL Server Agent to run routine time-based tasks. The best way to achieve these is via Azure Automation.
As an example, a common task on a local SQL Server is to create a SQL Agent job that re-indexes the databases. How you would do this in Azure is very different, and perhaps difficult for users new to Azure SQL Database. To implement it, you would need to be comfortable carrying out the following tasks:
- Create an Automation Account.
- Create a credential.
- Create a PowerShell Runbook which has the code for index rebuilds.
- Create a schedule and link it to the above.
- Configure parameters within the schedule.
- Configure logging level.
As you can appreciate, this is significantly more complicated than creating a single-step SQL Agent job.
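Whichever way it is scheduled, the runbook ultimately just executes index maintenance T-SQL against the database. As a sketch of what that code might look like (the 30% fragmentation threshold is illustrative, not a recommendation):

```sql
-- Build and run ALTER INDEX ... REBUILD statements for fragmented
-- non-clustered and clustered indexes in the current database.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name)
            + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
            + N' REBUILD;' + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i ON ps.object_id = i.object_id AND ps.index_id = i.index_id
JOIN sys.objects AS o ON i.object_id = o.object_id
JOIN sys.schemas AS s ON o.schema_id = s.schema_id
WHERE ps.avg_fragmentation_in_percent > 30   -- illustrative threshold
  AND i.index_id > 0;                        -- skip heaps

EXEC sys.sp_executesql @sql;
```

A PowerShell runbook would wrap this in a database connection using the credential created earlier.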
Service tiers and DTUs

A DTU (Database Transaction Unit) guarantees a certain level of resources for a database. DTUs are a blended measure of CPU, memory and I/O (data and transaction log I/O). It is all relative too: doubling the DTUs by increasing the performance level equates to doubling the set of resources available to a database. As you can imagine, when beginning with Azure SQL Database, selecting the right service tier can be tricky. Over-provision and you are wasting money; under-provision and you risk what I call DTU exhaustion, something you absolutely do not want, as it can lead to higher latency and even rejection of incoming connections.
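To see whether a database is approaching DTU exhaustion, you can query the resource usage view Azure SQL Database exposes; a quick sketch:

```sql
-- Recent resource consumption for the current database. This DMV is
-- sampled every 15 seconds and retains roughly one hour of history.
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       dtu_limit
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```

Sustained readings near 100% in any of the percentage columns suggest the service tier is undersized for the workload.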
Hopefully, after reading this article, you understand just how feature-rich Azure SQL Database is, while appreciating the slight differences compared with an on-premises SQL Server. There are definitely more advantages than disadvantages, especially when looking at it from a DBA perspective. The key to transitioning to the cloud is to embrace the change and accept that there is a small learning curve.
If you would like help in understanding more about Azure SQL Database versus SQL Server, give us a call on 01342 410 223.
By Arun Sirpal (Microsoft MVP), Editorial Contributor
Copyright © 2018 Arun Sirpal: The right of Arun Sirpal to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988. All rights reserved. No part of this publication may be reproduced, stored or introduced into a retrieval system, or transmitted, in any form, or by any means (electronic, mechanical, photocopying, recording or otherwise) without the prior written permission of the publisher, Ballard Chalmers. If consent is given, please give credit. Requests to re-print should be sent to email@example.com.
Arun Sirpal is a Microsoft MVP, specialising in SQL Server architecture, administration, development, configuration and the optimisation of physical and virtual SQL Servers. His articles have been published on SQL Server Central and Microsoft TechNet. In 2015 Arun was mentored by the highly respected long-time SQL Server MVP, Paul Randal, and he also worked with the Microsoft SQL Server product team on testing the vNext adaptive query processing feature. Arun is a member of Microsoft’s Azure Advisors and SQL Advisors groups, as well as an accomplished Microsoft Certified Solutions Expert on the Data Platform.