
Azure Hyperscale

Hyperscale is a service tier within Azure SQL Database. It differs from the General Purpose and Business Critical tiers in that it can scale up to 100 TB and moves Azure SQL Database into the HTAP (Hybrid Transactional/Analytical Processing) market.

Hyperscale has a unique architecture. Microsoft rearchitected it for the cloud, including a multi-layer caching system that helps with both speed and scale. It is based on a distributed architecture, as shown below.

(Figure: the distributed Hyperscale architecture)

Fast performance is possible because the compute nodes have SSD-based caches (labelled RBPEX – Resilient Buffer Pool Extension).

The page servers are systems representing a scaled-out storage engine. Each page server is responsible for a subset of the pages in the database, controlling either up to 128 GB or up to 1 TB of data. Long-term storage of data pages is kept in Azure Storage for additional reliability. The log service accepts log records from the primary compute replica, persists them in a durable cache and then forwards them to the other replicas.

So, you can clearly see from the diagram above the separation of the data and compute tiers, a common approach with cloud-native databases that need to deliver performance at large data sizes.

What are the main benefits of Azure Hyperscale?

  • Built-in SLA of 99.95% when running at least one replica.
  • Support for database sizes of up to 100 TB.
  • Evergreen SQL Server functionality.
  • Read scale-out functionality via the read-only endpoint.
  • Easy scale up/down of compute by increasing/decreasing vCore counts.
  • Near-instantaneous backups leveraging file-snapshot technology.
  • Fast database restores from those same file snapshots.
  • Geo-restore capability for Disaster Recovery purposes.
  • No SQL Server patching.
  • No operating system (OS) patching.

Another great thing about an Azure-native database is that it integrates well with Azure AD, Azure SQL Defender and Azure Private Link.

Database Security in Azure Hyperscale

Auditing should be set at the server scope and NOT just the database level. This is configured within the security blade of the Azure portal.


Alongside this, a valid storage account (linked when you enable auditing) should be created within the same resource group to hold the audit logs. The storage account should be encrypted, with SSL (Secure Sockets Layer) also configured; as a side note, this storage account should never be accessible over the internet. This enables two important pieces of functionality: advanced threat detection and vulnerability assessments. The following should apply:

For advanced threat detection, all threat types should be selected and the relevant administrators’ email addresses entered for real-time alerts. The server will then be protected against:

  • SQL injection.
  • SQL injection vulnerability.
  • Data exfiltration.
  • Unsafe action.
  • Brute force.
  • Anomalous client login.

For vulnerability assessments, a recurring scanning service provides visibility into your security state; this should be configured to run on at least a weekly basis.

Azure Private Link

Ideally, your solutions should have no public access enabled and should instead be configured with a private link. This should always be true for real-world production systems, and it is recommended even during testing.

The goal is to connect to Azure Hyperscale via a private endpoint, which is a private IP address within your subnet range. Hence, all relevant virtual machines and connecting applications that reside in Azure should be built in the same Virtual Network (VNET) as the database server itself. If servers from other VNETs within the same region need access, then VNET peering will be needed.

Azure Active Directory (AD)

This should always be enabled, with the relevant AD group assigned as the Active Directory admin for Azure-based SQL servers. The assumption here is that the Active Directory has been associated with the relevant subscription, after which the Azure AD admin account can be configured.

The high-level steps to configure this are shown below:

  1. Create and populate Azure AD.
  2. Optional: Associate or change the Active Directory that is currently associated with your Azure Subscription.
  3. Create an Azure Active Directory administrator.
  4. Configure your client computers.
  5. Create contained database users in your database mapped to Azure AD identities.
  6. Connect to your database by using Azure AD identities.
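Steps 5 and 6 above can be sketched in T-SQL. As a minimal example (the user name and role assignment here are hypothetical, not from the original article), connected to the user database as the Azure AD admin:

```sql
-- Hypothetical example: create a contained database user mapped to an
-- Azure AD identity (run in the user database, connected as the AD admin).
CREATE USER [reporting.user@contoso.com] FROM EXTERNAL PROVIDER;

-- Grant read-only access, which suits report-style workloads.
ALTER ROLE db_datareader ADD MEMBER [reporting.user@contoso.com];
```

The user can then connect with an Azure AD authentication method (for example, Active Directory – Universal with MFA in SSMS) rather than a SQL login.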

Azure Hyperscale Backups

From an operational standpoint, Microsoft provides the backup functionality (as mentioned in the benefits section). There are no traditional full, differential, and log backups for Hyperscale databases. Instead, there are regular storage snapshots of data files.

Microsoft adheres to an RPO (Recovery Point Objective) of 0 minutes. Most restore operations complete within 60 minutes regardless of database size (the Recovery Time Objective). However, please note that restores may take longer for larger databases, or if the database experienced significant write activity before and up to the restore point in time.

With Hyperscale, we can change the backup retention period per active database within the 1–35-day range. Backup storage equal to the database size is provided at no extra charge; additional backup storage consumption is charged at £0.172/GB/month.


Scaling Azure Hyperscale

With Hyperscale, scale-up is possible to a maximum of 80 vCores on Generation 5 hardware, which is based on Intel® E5-2673 v4 (Broadwell) 2.3-GHz, Intel® SP-8160 (Skylake) and Intel® 8272CL (Cascade Lake) 2.5-GHz processors. From a memory perspective, 5.1 GB of RAM per vCore is allocated.

When scaling up the compute there will be a slight delay, so it should be executed during a quiet period. Scaling out is possible from a read perspective via secondary replicas. These replicas are all identical and use the same Service Level Objective as the primary replica. If more than one secondary replica is present, the read workload is distributed across all available secondaries. Each secondary replica is updated independently; this is known as read scale-out.
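Scaling the compute tier can also be done in T-SQL against the master database; a minimal sketch (the database name and target objective are illustrative):

```sql
-- Hypothetical example: scale a Hyperscale database to 8 Gen5 vCores.
-- Run against the master database; the operation completes asynchronously.
ALTER DATABASE [mydb] MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_8');
```

Because the change is asynchronous, connections may see a brief disconnect when the new compute replica takes over, which is why a quiet period is advisable.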

Read Replicas on Azure Hyperscale

The read scale-out feature allows you to offload read-only workloads onto the compute capacity of one of the read-only replicas, instead of running them on the read-write replica. This way, some read-only workloads can be isolated from the read-write workloads and will not affect their performance. The feature is intended for applications that include logically separated read-only workloads, such as heavy user report-based queries.

This is set up by directing applications and users via the ApplicationIntent=ReadOnly connection-string setting, such as:

;Database=;ApplicationIntent=ReadOnly;User ID=;Password=;Trusted_Connection=False;Encrypt=True;

If the workload needs to read committed data immediately, then it should run against the primary replica only. If some latency is acceptable, then the read-only endpoint should be used so that less contention occurs on the primary replica.
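To confirm which type of replica a session has landed on, you can check the database's updateability property; a quick sketch:

```sql
-- Returns READ_ONLY when connected to a secondary replica via the
-- read-only endpoint, and READ_WRITE when connected to the primary.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS replica_mode;
```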

High Availability

Another benefit of having at least one replica is that in an unplanned failover (i.e. a hardware failure on the primary replica), the system uses a secondary replica as the failover target. This is why the SLA jumps from 99.9% without replicas to 99.95% with one replica. The more replicas the solution has, the higher the availability SLA, as confirmed by the following diagram.

(Figure: availability SLA by number of provisioned replicas)

Disaster Recovery

Hyperscale does support failover groups (in public preview only). By default, the backups taken by Microsoft are geo-redundant. So, if a catastrophic failure occurs in the primary data centre (for example, West Europe), the administrator can create a logical Azure SQL server in the paired region (in this example, North Europe) and geo-restore the Hyperscale database to that paired region. This will take quite some time, but Microsoft generally optimises the process by parallelising data file copies.

Building an Azure Hyperscale Database

Being an Azure-native database, it is quite simple to build a Hyperscale database using the Azure portal, T-SQL or the Azure CLI (Command-Line Interface). For this article, I’ll show the quick process using the Azure portal.
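For reference, the T-SQL route is a single statement against the logical server's master database; a minimal sketch (the database name and sizing are illustrative):

```sql
-- Hypothetical example: create a Hyperscale database with 4 Gen5 vCores.
-- Run against the master database of the logical Azure SQL server.
CREATE DATABASE [mydb] (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_4');
```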

Select create database option.


Then you need to navigate to the configure section to get to the Hyperscale tier.


Then, within Hyperscale, you have options to select the hardware type, the cores required and the replica requirement.


Once you are happy with the compute details, you should complete the rest of the wizard around networking and security and then click create.


This may take some time.


Once done, you connect to the database as you would to any other, via tools like SSMS (SQL Server Management Studio). Here I connect and issue the query below against my master database, and you can see I have a Hyperscale tier on Generation 5 hardware.



SELECT d.name, slo.edition, slo.service_objective
FROM sys.databases d
JOIN sys.database_service_objectives slo
ON d.database_id = slo.database_id;


(Screenshot: query results showing the Hyperscale tier on Generation 5 hardware)


Hopefully, after reading this overview of Azure Hyperscale, you can see where it fits within the Azure SQL family. It is a great choice when you need to go beyond the 4 TB limit of the classic tiers (Business Critical and General Purpose), coupled with the need to balance performance between OLTP (Online Transaction Processing) and heavy read-based workloads.



About the Author

Arun Sirpal, writing here as a freelance blogger, is a four-time former Data Platform MVP, specialising in Microsoft Azure and Database technology. A frequent writer, his articles have been published on SQL Server Central and Microsoft TechNet alongside his own personal website. During 2017/2018 he worked with the Microsoft SQL Server product team on testing the vNext adaptive query processing feature and other Azure product groups. Arun is a member of Microsoft’s Azure Advisors and SQL Advisors groups and frequently talks about Azure SQL Database.

Education, Membership & Awards

Arun graduated from Aston University in 2007 with a BSc (Hons) in Computer Science and Business. He went on to work as a SQL Analyst, SQL DBA and later as a Senior SQL DBA, DBA Team Lead and now Cloud Solution Architect. Alongside his professional progress, Arun became a member of the Professional Association for SQL Server. He became a Microsoft Most Valuable Professional (MVP) in November 2017 and has since won the award for a fourth time.

