Joe Sack

Configuring a Dedicated Network for Availability Group Communication

SQL Server 2012 AlwaysOn Availability Groups require a database mirroring endpoint for each SQL Server instance that will be hosting an availability group replica and/or database mirroring session. This SQL Server instance endpoint is then shared by one or more availability group replicas and/or database mirroring sessions and is the mechanism for communication between the primary replica and the associated secondary replicas.
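
As a quick aside (this check isn’t part of the walkthrough that follows), you can see whether an instance already has its single database mirroring endpoint by querying the catalog view directly:

USE [master];
GO

-- Each instance has at most one database mirroring endpoint, shared by
-- availability group replicas and database mirroring sessions
SELECT name, state_desc, role_desc, connection_auth_desc, encryption_algorithm_desc
FROM sys.database_mirroring_endpoints;
GO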

Depending on the data modification workload on the primary replica, the availability group messaging throughput requirements can be non-trivial. That traffic is also sensitive to concurrent non-availability group activity on the same network adapter. If throughput is suffering due to limited bandwidth and concurrent traffic, you may want to isolate the availability group traffic to its own dedicated network adapter on each SQL Server instance hosting an availability replica. This post walks through that process and also briefly shows what you might expect to see in a degraded throughput scenario.

For this article, I’m using a five-node Windows Server Failover Cluster (WSFC) of virtual guests. Each node in the WSFC has its own stand-alone SQL Server instance using non-shared local storage. Each node also has a separate virtual network adapter for public communication, a virtual network adapter for WSFC communication, and a virtual network adapter that we’ll dedicate to availability group communication. For the purposes of this post, we’ll focus on the information needed for the dedicated availability group network adapters on each node:

WSFC Node Name    Availability Group NIC TCP/IPv4 Address
SQL2K12-SVR1      192.168.20.31
SQL2K12-SVR2      192.168.20.32
SQL2K12-SVR3      192.168.20.33
SQL2K12-SVR4      192.168.20.34
SQL2K12-SVR5      192.168.20.35

Setting up an availability group with a dedicated NIC is almost identical to the shared-NIC process, except that in order to “bind” the availability group traffic to a specific NIC, I first have to designate the LISTENER_IP argument in the CREATE ENDPOINT command, using the aforementioned IP addresses for my dedicated NICs. The script below shows the creation of each endpoint across the five WSFC nodes:

:CONNECT SQL2K12-SVR1

USE [master];
GO

-- LISTENER_IP binds the endpoint to this node's dedicated availability group NIC
CREATE ENDPOINT [Hadr_endpoint] 
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = (192.168.20.31))
    FOR DATA_MIRRORING (ROLE = ALL, ENCRYPTION = REQUIRED ALGORITHM AES);
GO

IF (SELECT state FROM sys.endpoints WHERE name = N'Hadr_endpoint') <> 0
BEGIN
    ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED;
END
GO

USE [master];
GO

GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [SQLSKILLSDEMOS\SQLServiceAcct];
GO

:CONNECT SQL2K12-SVR2

-- ...repeat for other 4 nodes...

After creating these endpoints associated with the dedicated NIC, the rest of my steps in setting up the availability group topology are no different than in a shared NIC scenario.
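
As a quick sanity check (a small addition beyond the setup script itself), you can confirm on each node that the endpoint is bound to the dedicated NIC and started by joining sys.tcp_endpoints, which exposes the listener IP and port, to sys.database_mirroring_endpoints:

-- Run on each node: ip_address should show the dedicated NIC address
-- (e.g. 192.168.20.31 on SQL2K12-SVR1) and state_desc should be STARTED
SELECT
    te.name,
    te.ip_address,
    te.port,
    dme.state_desc,
    dme.role_desc
FROM sys.tcp_endpoints AS te
INNER JOIN sys.database_mirroring_endpoints AS dme
    ON te.endpoint_id = dme.endpoint_id;
GO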

After creating my availability group, if I drive a data modification load against the availability databases on the primary replica, I can quickly see in Task Manager’s Networking tab that the availability group communication traffic is flowing over the dedicated NIC (the first section shows the throughput for the dedicated availability group NIC):

Network traffic going over dedicated NIC

And I can also track the stats using various performance counters. In the image below, Intel[R] PRO_1000 MT Network Connection _2 is my dedicated availability group NIC, and it carries the majority of the NIC traffic compared to the other two NICs:

Perfmon Counters
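
If you prefer to stay inside SQL Server rather than Perfmon, the availability replica transport counters are also exposed through sys.dm_os_performance_counters. The query below is just an illustrative alternative; these are cumulative counters, so you would sample twice and take the difference to calculate a rate:

SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE N'%:Availability Replica%'
AND counter_name IN (N'Bytes Sent to Replica/sec',
                     N'Bytes Sent to Transport/sec',
                     N'Bytes Received from Replica/sec');
GO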

Having a dedicated NIC for availability group traffic can be a way to isolate activity and, in theory, improve performance, but if the dedicated NIC has insufficient bandwidth then, as you might expect, performance will suffer and the health of the availability group topology will degrade.

For example, I limited the dedicated availability group NIC on the primary replica to 28.8 Kbps of outgoing bandwidth to see what would happen. Needless to say, it wasn’t good. The availability group NIC throughput dropped significantly:

Effect of dropping bandwidth

Within a few seconds, the health of the various replicas degraded, with a couple of the replicas moving to a “not synchronizing” state:

AG Dashboard after bandwidth degradation
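
The same health information is available without the dashboard. A query along these lines (an illustrative check, not something from my original test harness) against the replica state DMVs on the primary shows the synchronization state and how much log is queuing up per database:

SELECT
    ar.replica_server_name,
    drs.synchronization_state_desc,   -- NOT SYNCHRONIZING for the affected replicas
    drs.log_send_queue_size,          -- KB of log waiting to be sent
    drs.redo_queue_size               -- KB of log waiting to be redone
FROM sys.dm_hadr_database_replica_states AS drs
INNER JOIN sys.availability_replicas AS ar
    ON drs.replica_id = ar.replica_id;
GO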

I then increased the dedicated NIC’s bandwidth on the primary replica to 64 Kbps, and after a few seconds there was an initial catch-up spike:

Effect of lifting bandwidth restriction

While things improved, I did witness periodic disconnects and health warnings at this lower NIC throughput setting:

Disconnects during improvement
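
Replica connection problems like these also surface in sys.dm_hadr_availability_replica_states. A query such as the following (again, an illustrative addition rather than part of the original tests) shows the connection state and the last connection error recorded for each replica:

SELECT
    ar.replica_server_name,
    ars.connected_state_desc,
    ars.last_connect_error_description,
    ars.last_connect_error_timestamp
FROM sys.dm_hadr_availability_replica_states AS ars
INNER JOIN sys.availability_replicas AS ar
    ON ars.replica_id = ar.replica_id;
GO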

What about the associated wait statistics on the primary replica?

When there was plenty of bandwidth on the dedicated NIC and all availability replicas were in a healthy state, I saw the following wait distribution during my data loads over a two-minute period:

Wait stats when healthy

HADR_WORK_QUEUE is an expected wait, representing a background worker thread waiting for new work. HADR_LOGCAPTURE_WAIT is another expected wait, accumulated while waiting for new log records to become available; according to Books Online, it is expected if the log scan is caught up or is reading from disk.
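
For reference, a wait distribution like this can be captured by snapshotting sys.dm_os_wait_stats at the start and end of the measurement interval and diffing the two samples. The sketch below shows the general idea, focusing on the HADR waits; the temp table name and the two-minute delay are just illustrative choices:

-- Snapshot the HADR waits, wait out the measurement interval, then diff
SELECT wait_type, wait_time_ms, waiting_tasks_count
INTO #hadr_waits_baseline
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'HADR%';

WAITFOR DELAY '00:02:00';

SELECT
    ws.wait_type,
    ws.wait_time_ms - b.wait_time_ms AS wait_time_ms_delta,
    ws.waiting_tasks_count - b.waiting_tasks_count AS waiting_tasks_delta
FROM sys.dm_os_wait_stats AS ws
INNER JOIN #hadr_waits_baseline AS b
    ON b.wait_type = ws.wait_type
WHERE ws.wait_type LIKE N'HADR%'
ORDER BY wait_time_ms_delta DESC;

DROP TABLE #hadr_waits_baseline;
GO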

When I reduced the throughput of the NIC enough to push the availability group into an unhealthy state, the wait type distribution was as follows:

Wait stats during degradation

We now see a new top wait type, HADR_NOTIFICATION_DEQUEUE. This is one of the “internal use only” wait types as defined by Books Online, representing a background task that processes WSFC notifications. What’s interesting is that this wait type doesn’t point directly to an issue, and yet the tests show it rising to the top when availability group messaging throughput is degraded.

So the bottom line is that isolating your availability group activity to a dedicated NIC can be beneficial, provided that network has sufficient bandwidth. However, if you can’t guarantee good bandwidth even on a dedicated network, the health of your availability group topology will suffer.