Azure SQL Database DTU Versus vCore

Updated 22nd October 2020 by Arun Sirpal (Microsoft MVP), Editorial Contributor

A DTU (Database Transaction Unit) based model was always the way to size and configure an Azure SQL Database, until Microsoft introduced a vCore based model for sizing cloud databases. This article compares these very different options at a high level and explains why you would select one model over the other: DTU versus vCore.

As you can see from the following diagram, the DTU model offers a pre-configured, pre-defined amount of compute resource. The vCore model is all about independent scalability: you can tune a specific area such as the CPU core count or memory, something you cannot control at the same granular level when using the DTU based model.

Azure SQL Database DTU vs vCore | DTU model

Let’s look at each model in a little more detail.

Database Transaction Unit (DTU)

The DTU represents a blended measure of CPU, memory, reads, and writes. The main idea behind this model is simplicity: a preconfigured bundle of compute resources at a fixed cost, with easy options to scale.

Service Tiers / Performance Levels

There are three main service tiers to select from: Basic, Standard and Premium. Within these tiers there are performance levels; the higher the performance level, the more DTUs are assigned to it. The number assigned to a DTU performance level is relative: for example, a Premium P11 database with 1750 DTUs provides 175x more DTU compute power than a Standard S0 database with 10 DTUs.

Azure SQL Database DTU vs vCore | Service tiers

As you can see from the above image, you can select from the Basic, Standard (S0, S1, S2 and S3) and Premium (P1, P2, P4, P6, P11 and P15) service tiers, with allocations ranging from 5 DTUs up to 4000 DTUs.

The key question here is: which service tier and performance level should you use? Selecting the right combination is important, and misconfiguration is easy. People often do not really understand their performance requirements, so they struggle to translate them into a service tier that will suit their needs. In an attempt to save money they tend to opt for a lower tier, which very likely leads to performance issues. On the flip side, selecting a higher performance level than needed is not a cost-effective option either.

It is also important to be aware of the other differences between the service tiers. For example, if your software requires columnstore indexes then you will have no choice but to use S3 and above (as indicated by the chart below).

Azure SQL Database DTU vs vCore | Service tiers differences

A key difference between the Standard and Premium tiers worth mentioning is I/O performance. If your workloads are I/O intensive, you will very likely require the Premium tier, which offers higher IOPS and lower latency. The Basic service tier is not really production-ready because of its 2GB maximum database size. Both the Standard and Premium tiers include storage up to 250GB, with the option to expand to 1024GB if required (at a cost).

Sizing Workloads

Many IT professionals use the DTU calculator as a starting point (see https://dtucalculator.azurewebsites.net/ for more details), with a key emphasis on testing against a real workload.

If you make a mistake with your DTU configuration, changing the service tier and/or performance level of a database is quite easy. Under the covers, Microsoft creates a replica of the original database at the new performance level and then switches connections over to the replica. No data is lost during this process, but during the brief moment when the switch to the replica occurs, connections are dropped. Naturally, how long this takes depends on the size of your database and how busy the system was when the scale request was initiated.

A simple scale-up request for Azure SQL Database is shown below.

ALTER DATABASE [AWSDB] MODIFY (EDITION = 'Premium', MAXSIZE = 500 GB, SERVICE_OBJECTIVE = 'P1');
GO

Here I am simply upgrading to the Premium (P1) performance level while setting a 500GB maximum size for the database. Alternatively, you could use the Azure portal to scale up, as shown below.

Azure SQL Database DTU vs vCore | Azure portal scale up
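
Whichever method you use, you can monitor the progress of the scale operation. A minimal sketch follows, assuming the AWSDB database from earlier and a connection to the logical server's master database (where sys.dm_operation_status is exposed):

-- Run in the master database of the logical server.
-- Lists recent management operations (such as the P1 scale request) and their progress.
SELECT operation, state_desc, percent_complete, start_time, last_modify_time
FROM sys.dm_operation_status
WHERE major_resource_id = 'AWSDB'
ORDER BY start_time DESC;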

The maximum number of DTUs that you can assign to a server is 54,000. This is a seriously high number; however, if you have an edge-case scenario and require more than this default limit, you can submit a support ticket to Microsoft for the subscription in question with the issue type "quota".

As you can imagine, the only issue with this model is working out how this "blended" measure of compute resource maps to an on-premises SQL Server workload. It is not an easy link to make, and this is where the vCore based model excels.

PowerShell Sizing

We previously mentioned the DTU calculator as a sizing technique; nowadays it is common practice to use the PowerShell scripts that come with the Database Migration Assistant (DMA) tool (https://www.microsoft.com/en-us/download/details.aspx?id=53595).

At a high level, we run scripts that collect metrics against the on-premises database workloads, then feed that output into a recommendation for the compute tier that would best suit the workload if moved to Azure SQL Database.

Once you have downloaded and installed the tools (usually within your c:\ folder), you issue a command similar to the one shown below.

.\SkuRecommendationDataCollectionScript.ps1 -ComputerName Arunsql10 -OutputFilePath c:\localcounters.csv -CollectionTimeInSeconds 1200 -DbConnectionString 'Server=mysqlserversql1;Initial Catalog=master;Integrated Security=SSPI;'

Once the script finishes (based on the duration you set), it produces a CSV file. This file is then consumed as input by the next command.

.\DmaCmd.exe /Action=SkuRecommendation /SkuRecommendationInputDataFilePath="c:\localcounters.csv" /SkuRecommendationOutputResultsFilePath="C:\localprices.html" /SkuRecommendationTsvOutputResultsFilePath="C:\localprices.tsv" /SkuRecommendationPreventPriceRefresh=true

The output (an HTML file and a TSV file) gives you recommendations that should be implemented as a starting point and then monitored once you have migrated to the cloud.

An advantage of this tool and approach is that it considers both the DTU and vCore models, hence gives you some flexibility.

 

vCore

What is it?

The vCore based model is newer and offers a totally different approach to sizing your database. It is easier to translate local workloads to a vCore based model because its components are what we are used to.

Tiers

There are two tiers to select from: General Purpose and Business Critical. The biggest differences between them are storage (in terms of type and throughput) and availability features. With Business Critical you get three replicas, read scale-out capability and zone redundancy (see the table below).

Azure SQL Database DTU vs vCore | vCore tiers

Let’s look at an example. If my local SQL Server database is less than 1TB, runs on 4 logical CPUs, uses approximately 28GB RAM and sits on SSD based storage, I would very likely map this to the Business Critical tier in the vCore model. You can see how much easier it is to translate this workload to vCore than to DTU. Another benefit of this model is the ability to save money via Azure Hybrid Benefit for SQL Server (https://azure.microsoft.com/en-us/pricing/hybrid-benefit/#sql-ahb-calculator), which allows you to use your local SQL Server Enterprise Edition or Standard Edition licenses with active Software Assurance to pay a reduced rate on a vCore-based database.

The following image shows the flexibility possible within a vCore model when creating a database via the Azure portal.

Azure SQL Database DTU vs vCore | vCore flexibility model

I have highlighted a couple of key areas worth mentioning. The blue box shows which tier you would like to use, with figures behind the facts for I/O capacity: there is a big difference in IOPS and latency between the two tiers, at a higher cost for Business Critical. The red box is where you configure your compute generation. Gen 4 CPUs are based on Intel E5-2673 v3 (Haswell) 2.4 GHz processors, where 1 vCore = 1 physical CPU; Gen 5 CPUs are based on Intel E5-2673 v4 (Broadwell) 2.3 GHz processors, where 1 vCore = 1 hyper-thread. Finally, the black box is where you configure the core count and data storage capacity independently.
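
If you prefer scripting these choices rather than using the portal, the same settings map to T-SQL. A minimal sketch follows, assuming a hypothetical new database called BCDB1; the service objective name encodes the tier, compute generation and core count (here Business Critical, Gen 5, 8 vCores):

-- Create a Business Critical, Gen 5, 8 vCore database with a 500GB size cap.
CREATE DATABASE [BCDB1] (EDITION = 'BusinessCritical', SERVICE_OBJECTIVE = 'BC_Gen5_8', MAXSIZE = 500 GB);
GO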

You can see the changes from a database engine internals perspective too. For example, here I query for online schedulers and compare the database when assigned 2 vCores versus 8 vCores on the Gen 4 compute generation.

2 vCores

SELECT * FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'

Azure SQL Database DTU vs vCore | 2 vCores

8 vCores

SELECT * FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'

Azure SQL Database DTU vs vCore | 8 vCores

It is great to see this flexible option from Microsoft. To confirm which edition and service objective (which encodes the compute generation) your vCore based database is using, you can execute the following T-SQL.

SELECT [Tier] = DATABASEPROPERTYEX('bcdb', 'Edition'),
       [ComputeGen] = DATABASEPROPERTYEX('bcdb', 'ServiceObjective')

Azure SQL Database DTU vs vCore | Confirm edition

Azure SQL Database Serverless

This is one of the newer compute tiers available for Azure SQL Database. It is only available in the vCore model and is best suited to single databases with ever-changing, unpredictable usage patterns. It is a totally different concept from "provisioned" compute, and billing differs too: you are billed per second rather than per hour, so pricing is more granular. This makes sense for this compute type because we can leverage the auto-pause and auto-resume capability.

When you configure this compute tier you give your database the flexibility to "burst" between the min/max vCores that you choose, which is why it suits unpredictable workloads. Note that the number of cores you select dictates the min/max memory available to your workloads, so careful consideration is required.

The idea of auto-pause is based on a time interval. For example, with a delay of 6 hours set, the database will auto-pause after 6 hours of inactivity, saving on compute costs. Once activity is detected, the compute will auto-resume. Please note the latency is generally in the order of 1 minute to auto-resume and 1-10 minutes to auto-pause; this may or may not be acceptable for your workloads and is another important factor to consider.
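
If you want to switch an existing database over to serverless with T-SQL, a minimal sketch follows, assuming a hypothetical database named sldb. Note that the min vCore and auto-pause delay settings are managed via the portal, PowerShell or CLI, with defaults applied when you change the service objective through T-SQL:

-- Move to the serverless compute tier (General Purpose, Gen 5, max 4 vCores).
ALTER DATABASE [sldb] MODIFY (SERVICE_OBJECTIVE = 'GP_S_Gen5_4');
GO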

It is important to understand what can trigger auto-resume; Microsoft documents the full list of operations that will wake a serverless database.

Azure SQL Database – Hyperscale

Another compute offering within the vCore model is Hyperscale. This provides highly scalable storage and compute performance built on a distributed functions architecture. The architecture has many benefits: it can support database sizes up to 100TB, and provides near-instantaneous database backups, fast restores and online scaling up/down. You can run many types of workloads on this tier, from OLTP to pure analytics, but it is primarily optimised for OLTP and hybrid transactional/analytical processing (HTAP) workloads.

Creating a Hyperscale database is no different from any other type. It can be done via the Azure portal (or T-SQL).

Within the configure options when creating a database you will see Hyperscale.

It is based on Gen 5 hardware; you can scale up to 80 vCores if needed and add up to 4 secondary replicas, which is great for read scale-out solutions.

The T-SQL equivalent for creating a Hyperscale database is shown below:
CREATE DATABASE [HyperscaleDB1] (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_4');
GO
There are some limitations you need to be aware of, such as:
• It is currently a one-way operation: once you move to Hyperscale you cannot go back to a different compute tier, such as the standard DTU tiers.
• Currently no support for geo-replication.
• No PolyBase support.
• No support for R or Python.

 

What Should You Choose?

My advice: if you are currently using a DTU based model and performance monitoring shows that you are not suffering from performance issues, stay with the DTU model. If you are sizing a new workload to move to Azure SQL Database, then you definitely need to consider which route to take. If you prefer paying a fixed amount each month for pre-configured compute resources, then a DTU based model may be what you need. However, if you want to dig into the underlying resources a little further and scale them independently for optimal performance, then vCore might be your best option, especially if you can activate Azure Hybrid Benefit for SQL Server.

If your company strategy dictates that you move from a DTU based database to a vCore based one, can you? Absolutely. The only grey area is which service tier/performance level maps to which vCore tier. Some guesswork might be required, but this being the cloud, scaling tiers up or down afterwards is relatively easy to do, as shown below.
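
As a sketch of such a move, assuming a hypothetical Standard-tier database named AWSDB, a single ALTER DATABASE statement is enough to switch it across to a vCore service objective; monitor afterwards and adjust the core count as needed:

-- Move a DTU based database across to the vCore model (General Purpose, Gen 5, 4 vCores).
ALTER DATABASE [AWSDB] MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_4');
GO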




About the Author

Arun Sirpal, writing here as a freelance blogger, is a four-time former Data Platform MVP, specialising in Microsoft Azure and Database technology. A frequent writer, his articles have been published on SQL Server Central and Microsoft TechNet alongside his own personal website. During 2017/2018 he worked with the Microsoft SQL Server product team on testing the vNext adaptive query processing feature and other Azure product groups. Arun is a member of Microsoft’s Azure Advisors and SQL Advisors groups and frequently talks about Azure SQL Database.

Education, Membership & Awards

Arun graduated from Aston University in 2007 with a BSc (Hon) in Computer Science and Business. He went on to work as an SQL Analyst, SQL DBA and later as a Senior SQL DBA, DBA Team Lead and now Cloud Solution Architect. Alongside his professional progress, Arun became a member of the Professional Association for SQL Server. He became a Microsoft Most Valuable Professional (MVP) in November 2017 and has since won the award for the fourth time.
