Using a DTU (Database Transaction Unit) based model was always the way to size and configure an Azure SQL Database, until recently, when Microsoft introduced a vCore based model for sizing cloud-based databases. This article compares these two very different options at a high level and explains why you would select one model over the other.
As you can see from the following diagram, the DTU model offers a pre-configured, pre-defined amount of compute resource. The vCore model is all about independent scalability: you can control specific areas such as the CPU core count and memory, something you cannot do at the same granular level with the DTU based model.
Let’s look into each model with a little more detail.
Database Transaction Unit (DTU)
The DTU represents a blended measure of CPU, memory, reads, and writes. The main idea behind this model is simplicity: a preconfigured bundle of compute resources at a fixed cost, with easily scalable options.
Service Tiers / Performance Levels
There are three main service tiers to select from: Basic, Standard and Premium. Within these tiers there are performance levels; the higher the performance level, the higher the number of DTUs assigned to it. The number assigned to a DTU performance level is relative, for example a Premium P11 database with 1750 DTUs provides 175x more DTU compute power than a Standard S0 database with 10 DTUs.
As you can see from the above image, you can select from the Basic, Standard (S0, S1, S2 and S3) and Premium (P1, P2, P4, P6, P11 and P15) service tiers, where the DTUs range from 5 up to 4,000.
The key question is: what service tier and performance level should you be using? Selecting the right service tier (and performance level) is important, and misconfiguration is easy. People often do not really understand their performance requirements, so they struggle to translate them into a service tier that will suit their needs. As a result, they tend to select the wrong one and, in an attempt to save money, opt for a lower tier, which will very likely lead to performance issues. On the flip side, selecting a higher performance level when it is not needed is not a cost effective option either.
It is also important to be aware of the other differences between the service tiers. For example, if your software requires columnstore indexing then you will have no choice but to use S3 and above (as indicated by the chart below).
A key difference between the Standard and Premium tiers worth mentioning is I/O performance. If your workloads are I/O intensive, you will very likely require the Premium tier, which offers a higher number of IOPS and lower latency. The Basic service tier is not really production ready because of its 2GB maximum size limit. Both the Standard and Premium tiers include storage up to 250GB, with the option to expand to 1024GB if required (at a cost).
Many IT professionals use the DTU calculator as a starting point (see https://dtucalculator.azurewebsites.net/ for more details), with a key emphasis on testing against real workload analysis.
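If the database is already running in Azure, the resource usage DMVs offer a similar sanity check. Below is a minimal sketch, assuming you are connected to the database in question, that summarises the last hour of utilisation from sys.dm_db_resource_stats; the idea being that consistently high percentages suggest the current performance level is undersized.

-- Summarise the last hour of resource usage (captured in roughly 15 second intervals)
-- relative to the limits of the current service tier / performance level.
SELECT AVG(avg_cpu_percent)       AS [Avg CPU %],
       MAX(avg_cpu_percent)       AS [Max CPU %],
       AVG(avg_data_io_percent)   AS [Avg Data IO %],
       MAX(avg_data_io_percent)   AS [Max Data IO %],
       AVG(avg_log_write_percent) AS [Avg Log Write %],
       MAX(avg_log_write_percent) AS [Max Log Write %]
FROM   sys.dm_db_resource_stats;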
If you make a mistake with your DTU configuration, changing the service tier and/or performance level of a database is quite easy. Under the covers, Microsoft creates a replica of the original database at the new performance level and then switches connections over to the replica. No data is lost during this process, but connections are briefly disabled at the moment the switch to the replica occurs. Naturally, the time this takes depends on the size of your database and how busy the system was when the scale request was initiated.
A simple scale up request for Azure SQL Database is shown below.
ALTER DATABASE [AWSDB]
MODIFY (EDITION = 'Premium', MAXSIZE = 500 GB, SERVICE_OBJECTIVE = 'P1');
GO
Here I am simply upgrading to the Premium (P1) performance level whilst stating a 500GB maximum size for the database. Alternatively, you could use the Azure portal to scale up, as shown below.
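Whichever method you use, you can track the progress of the scale request from the logical server's master database. The following is a simple sketch using sys.dm_operation_status (run while connected to master):

-- Run in the master database of the logical server to check the state
-- and progress of recent service tier / performance level changes.
SELECT major_resource_id AS [Database],
       operation,
       state_desc,
       percent_complete,
       start_time
FROM   sys.dm_operation_status
ORDER BY start_time DESC;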
The maximum number of DTUs that you can assign to a server is 54,000. This is a seriously high number; however, if you have an edge case scenario and require more than this default limit, you can submit a support ticket to Microsoft for the subscription in question with an issue type of "quota".
As you can imagine, the main issue with this model is how this "blended" measure of compute resource maps to an on-premises SQL Server workload. It is not an easy link to make, and this is where the vCore based model excels.
Virtual Core (vCore): What is it?
The vCore based model is new and offers a totally different approach to sizing your database. It is easier to translate local workloads to a vCore based model because its components are what we are used to.
There are two tiers to select from: General Purpose and Business Critical. The biggest differences between the two are the storage (in terms of type and throughput) and the availability features. With Business Critical you get three replicas, read scale capability and zone redundancy (see the table below).
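A quick practical note on the read scale capability: assuming read scale-out is enabled on a Business Critical database, read-only workloads are directed to a replica by adding ApplicationIntent=ReadOnly to the connection string, and you can confirm what kind of replica a session has landed on with a one-line check:

-- Returns READ_ONLY when the session is connected to a read-only replica
-- (ApplicationIntent=ReadOnly), READ_WRITE otherwise.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS [Updateability];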
Let's look at an example: if my local SQL Server database is less than 1TB, runs on 4 logical CPUs, uses approximately 28GB RAM and utilises SSD based storage, I would very likely map this to the Business Critical tier when using the vCore model. So you can see how much easier it is to translate this workload to vCore than to DTU. Another benefit of this model is the ability to save money via Azure Hybrid Benefit for SQL Server (https://azure.microsoft.com/en-us/pricing/hybrid-benefit/#sql-ahb-calculator), which allows you to use your local SQL Server Enterprise Edition or Standard Edition licenses with active Software Assurance to pay a reduced rate on a vCore-based database.
The following image shows the flexibility possible within a vCore model when creating a database via the Azure portal.
I have highlighted a couple of key areas worth mentioning. The blue box shows the tier you would like to use, along with concrete figures for the I/O capacity; you can see a big difference in IOPS and latency between the two tiers, at a higher cost for Business Critical. The red box shows where you configure your compute generation: Gen 4 is based on Intel E5-2673 v3 (Haswell) 2.4 GHz processors, where 1 vCore = 1 physical CPU, whereas Gen 5 is based on Intel E5-2673 v4 (Broadwell) 2.3 GHz processors, where 1 vCore = 1 hyper-thread. Finally, the black box is where you configure the core count and data storage capacity independently.
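The same choices can also be expressed in T-SQL. As a sketch (the database name and the GP_Gen5_4 service objective are purely illustrative), the statement below would move a database to the General Purpose tier with 4 Gen 5 vCores and a 500GB maximum size:

-- Scale to General Purpose, Gen 5, 4 vCores with a 500GB size cap.
ALTER DATABASE [bcdb]
MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_4', MAXSIZE = 500 GB);
GO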
Expect to see changes from a database engine internals perspective too. For example, I queried the database for online schedulers and compared the results when it was assigned 2 vCores versus 8 vCores on the Gen 4 compute generation.
-- Run against the database when assigned 2 vCores and again after scaling to 8 vCores (Gen 4);
-- the number of VISIBLE ONLINE schedulers should track the vCore count.
SELECT * FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';
It is great to see this flexible option from Microsoft. To confirm which edition and compute generation you are using for your vCore based database, you can execute the following T-SQL.
-- ServiceObjective returns the compute generation and vCore count, e.g. 'BC_Gen4_2'.
SELECT [Tier]              = DATABASEPROPERTYEX('bcdb', 'Edition'),
       [Service Objective] = DATABASEPROPERTYEX('bcdb', 'ServiceObjective');
What Should You Choose?
My advice: if you are currently using a DTU based model and performance monitoring shows that you are not suffering from performance issues, stay with the DTU model. If you are sizing a new workload to move to Azure SQL Database, then you definitely need to consider which route to take. If you prefer paying a fixed amount each month for pre-configured compute resources, then a DTU based model may be what you need. However, if you want to dig into the underlying resources a little further and scale them independently for optimal performance, then vCore might be your best option, especially if you can activate Azure Hybrid Benefit for SQL Server.
If your company strategy dictates a move from a DTU based database to a vCore one, can you? Absolutely. The only grey area is which service tier / performance level maps to which vCore tier. Some guesswork might be required, but this being the cloud, scaling tiers up or down is relatively easy to do.
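As a sketch of what that move looks like (the mapping below from a Standard DTU database to GP_Gen5_2 is an assumption rather than an official equivalence), it is just another service objective change:

-- Move a DTU-based database to the vCore General Purpose tier.
-- GP_Gen5_2 is an illustrative starting point; monitor and rescale as required.
ALTER DATABASE [AWSDB]
MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_2');
GO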
By Arun Sirpal (Microsoft MVP), Editorial Contributor
Copyright © 2018 Arun Sirpal: The right of Arun Sirpal to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988. All rights reserved. No part of this publication may be reproduced, stored or introduced into a retrieval system, or transmitted, in any form, or by any means (electronic, mechanical, photocopying, recording or otherwise) without the prior written permission of the publisher, Ballard Chalmers. If consent is given, please give credit. Requests to re-print should be sent to [email protected].
About the author
Arun Sirpal is a Microsoft MVP with over 11 years' experience with Microsoft Data Platform Technologies – from SQL Server (Physical and Virtual) to Azure. His articles have been published on SQL Server Central and Microsoft TechNet. Arun is a member of Microsoft's Azure Advisors and SQL Advisors groups, as well as an accomplished Microsoft Certified Solutions Expert on the Data Platform.