NIC DNS Registration and Exchange Servers


Symptom

I recently worked with a customer who had introduced an Exchange 2013 Server into an existing Exchange 2007 environment. The issue was that the 2013 Server was unable to send email anywhere, neither externally nor to other Exchange Servers. If you ran the command below to view the status of the transport queues, the output showed a DNS error:

Get-Queue <Queue Identity> | FL


Specifically, the error message you would receive is “4.4.0 DNS query failed. The error was: DNS query failed with error ErrorRetry”

This is a fairly common error, indicating there is an issue contacting the DNS Server or Servers that Exchange is configured to use (Reference A, Reference B).

Resolution

However, in this case the issue was not obvious, unless you had already seen this issue before or knew a little bit about the health checks Exchange uses to ensure it’s healthy.

I remembered seeing a similar issue on a Reddit thread a while back, which led me to search and find the Microsoft KB article titled “‘DNS query failed’ error when an email message is stuck in the Draft folder in an Exchange Server 2013 environment”.

This was the resolution in my scenario as well. To resolve the issue, I simply had to re-check the “Register this connection’s addresses in DNS” option on the IPv4 Properties > Advanced > DNS tab of the primary NIC used for Active Directory communications. While you can uncheck this box on secondary NICs (such as for iSCSI, Replication, Backup, etc.), it should always remain checked on the MAPI/Primary NIC. I’ve also seen issues where having this unchecked on a 2013/2016 DAG node will result in Managed Availability-triggered database failovers.
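If you prefer to check and correct this from PowerShell rather than the NIC properties dialog, here is a minimal sketch, assuming Windows Server 2012 or later (where the DnsClient cmdlets are available) and a primary adapter named “MAPI” (a hypothetical name, substitute your own):

# Show which adapters are set to register their addresses in DNS
Get-DnsClient | Select-Object InterfaceAlias, RegisterThisConnectionsAddress

# Re-enable registration on the primary/MAPI NIC and force a re-registration
Set-DnsClient -InterfaceAlias "MAPI" -RegisterThisConnectionsAddress $true
Register-DnsClient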


Failures when proxying HTTP requests from Exchange 2013 to a previous Exchange version


Overview

I’ve seen this issue a few times over the past few months, most recently this past week with a customer. Luckily there’s a fairly simple fix published by Microsoft but, realizing not everyone remembers every Microsoft KB that gets released, I thought I’d shine a spotlight on this one.

Scenario

As part of the migration process, when customers move their namespace from either Exchange 2007 or 2010 to 2013, HTTP connections start proxying through 2013 to the legacy Exchange Servers and some users will experience failures. The potentially affected workloads are:

  • AutoDiscover
  • Exchange Web Services (Free/Busy)
  • ActiveSync
  • OWA
  • Outlook

Test or new mailboxes may not be affected.

Resolution

The cause of this is the age-old problem of token bloat: users who are members of too many groups end up with large tokens.

The fix is to implement the changes in the below Microsoft KB article:

“HTTP 400 Bad Request” error when proxying HTTP requests from Exchange Server 2013 to a previous version of Exchange Server
https://support.microsoft.com/en-us/kb/2988444

The interesting thing in this scenario is that the issue was not experienced in the legacy version of Exchange &, even if you look at the tokens themselves, they may not seem overly large. It seems the process of proxying Exchange traffic is much more sensitive to this issue. Also, in a recent case that went to Microsoft, increasing the recommended values to something higher than your current header sizes may not have the desired effect. In our case we had to set the MaxRequestBytes & MaxFieldLength values to exactly match the values in the Microsoft KB (65536 decimal).
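As a rough sketch of the registry change described in the KB (run on each Exchange 2013 CAS from an elevated prompt and reboot afterwards; the 65536 value is the one cited above, so confirm it against the KB before applying):

# HTTP.sys header limits used when proxying requests
$params = "HKLM:\SYSTEM\CurrentControlSet\Services\HTTP\Parameters"
Set-ItemProperty -Path $params -Name MaxFieldLength -Value 65536 -Type DWord
Set-ItemProperty -Path $params -Name MaxRequestBytes -Value 65536 -Type DWord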

For further reading, please see the below articles.

Complementary Articles

“HTTP 400 – Bad Request (Request Header too long)” error in Internet Information Services (IIS)
https://support.microsoft.com/en-us/kb/2020943

How to use Group Policy to add the MaxTokenSize registry entry to multiple computers
https://support.microsoft.com/en-us/kb/938118

 

Additional Note

As an FYI, another issue I commonly see when namespaces get transitioned to 2013 is authentication popups when connections proxy to the legacy Exchange Servers. Please see the below KB for that issue:

Outlook Anywhere users prompted for credentials when they try to connect to Exchange Server 2013
https://support.microsoft.com/en-us/kb/2990117

I also blogged about it here
https://exchangemaster.wordpress.com/2014/10/30/exchange-2010-outlook-anywhere-users-receiving-prompts-when-proxied-through-exchange-2013/

Exchange 2010 Outlook Anywhere users receiving prompts when proxied through Exchange 2013


Background

I was working with a customer who had Exchange 2010 & were in the process of migrating to Exchange 2013. As part of their migration process they pointed their Exchange 2010 Outlook Anywhere namespace (let’s call it mail.contoso.com) to Exchange 2013 in DNS. At this point all of their Outlook Anywhere clients should have been connecting to Exchange 2013 & then been proxied to Exchange 2010. While this was somewhat working, they also immediately noticed users were randomly being prompted for credentials, resulting in a negative user experience.

Sometimes the prompts would appear when connecting to Public Folders; other times they came from mail or directory connections from Outlook to Exchange.

Resolution

When I was approached with this issue/symptom it sounded familiar. After a search through my OneNote I realized I previously had a discussion with some people I know from Microsoft Support regarding this issue. Turns out this issue was recently addressed via http://support2.microsoft.com/kb/2990117 “Outlook Anywhere users prompted for credentials when they try to connect to Exchange Server 2013”.

This is actually an IIS issue with Server 2008 R2 (the operating system Exchange 2010 was installed on) that’s resolved by a hotfix. After installing the hotfix & rebooting the issue was resolved & their users no longer received the prompts.

 

 

Legacy Public Folder remnants in Exchange 2013 cause “The Microsoft Exchange Administrator has made a change…” prompt


Background

I usually refrain from writing posts on issues I haven’t been able to fully reproduce in my lab, but enough people seem to be having this issue that it’s worth spreading the word should another person find themselves afflicted by it. I’ve seen this issue happen in two different environments & then found out via the forums that several other people have run into it as well.

Issue

I was working with a customer who migrated from Exchange 2007 to Exchange 2013. After decommissioning the 2007 servers, all the Exchange 2013 mailboxes started getting the infamous “The Microsoft Exchange Administrator has made a change that requires you quit and restart Outlook” prompt.

This seemed odd because Exchange 2013 was supposed to all but eliminate those prompts. While it did eliminate the prompts when the RPC Endpoint (Server Name field in Outlook) changed, there are still other scenarios that could result in this prompt (please see reference links at bottom of post for a detailed history). One such thing relates to the Public Folder Hierarchy.

In this customer’s scenario, I determined that the “PublicFolderDatabase” attribute on every Exchange 2013 Mailbox Database was set to the distinguished name of a deleted Public Folder database object.

In this case, the decommissioning of Exchange 2007 & its Legacy Public Folders was not done correctly (same issue probably would have occurred if it were 2010). The Public Folder Database was showing up as a deleted object in AD. So the result was that the Outlook clients were trying to access Public Folder information but were reacting in a way that resulted in the frequent prompt to restart Outlook.

The resolution in this case was to drill down to the properties of the Mailbox Database in ADSIEDIT & set the value of “msExchHomePublicMDB” to be blank. Afterwards, a restart of the Information Store Service resolved the issue.
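For reference, here is a minimal sketch of the same check and fix from PowerShell, assuming the ActiveDirectory module is available and using a hypothetical database name (“DB01”); clearing msExchHomePublicMDB this way is equivalent to blanking it in ADSIEDIT:

# Find databases still pointing at a ghosted Public Folder database
Get-MailboxDatabase | Format-List Name, PublicFolderDatabase

# Clear the stale attribute on an affected database, then restart the Information Store
Import-Module ActiveDirectory
$db = Get-MailboxDatabase "DB01"
Set-ADObject -Identity $db.DistinguishedName -Clear msExchHomePublicMDB
Restart-Service MSExchangeIS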

Additional Info

Not long after this issue, I was contacted by a Consultant I know who encountered the exact same issue. After an improperly performed Exchange 2007 migration, the Exchange 2013 mailboxes were getting prompted to restart Outlook. That environment also had Mailbox Databases that were pointed to a deleted object for their default Public Folder Database. Clearing the value & restarting the Information Store Service also resolved their issue.

After hearing this I went online to see if any others were encountering this issue. I found the below two forum posts:

Reference A

Reference B

I then tried to reproduce this in my own environment but could not. Manually deleting the Exchange 2007 Server object from AD, as well as the Public Folder Database object, did leave the 2013 Mailbox Databases pointing to the ghosted objects, but I did not receive the prompts. It appears there’s a particular chain of events that causes this issue; even though I could not recreate it in my lab, people certainly seem to be running into it in the wild. If you start receiving these prompts, I suggest making sure your attributes are not also pointed at ghosted objects.

Note: I was also informed that you could leave yourself in this scenario by incorrectly performing a migration from Legacy Public Folders to Modern Public Folders.

During the migration, you run the “Set-Mailbox <PublicFolderMailboxName> -PublicFolder -IsExcludedFromServingHierarchy:$True” command to prevent the Modern Public Folders from serving the Hierarchy requests while you’re moving data over; when you eventually complete the migration you should run “Set-Mailbox <PublicFolderMailboxName> -PublicFolder -IsExcludedFromServingHierarchy:$False” to allow it to serve the Hierarchy requests. If you do not run this command then you may receive the same prompts.
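A quick way to verify where you stand, as a sketch (“PFMailbox1” is a hypothetical Public Folder mailbox name):

# Check whether any modern Public Folder mailbox is still excluded from serving the hierarchy
Get-Mailbox -PublicFolder | Format-List Name, IsExcludedFromServingHierarchy

# Once the migration is complete, allow it to serve hierarchy requests again
Set-Mailbox "PFMailbox1" -PublicFolder -IsExcludedFromServingHierarchy $false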

Additional References

http://blogs.msdn.com/b/aljackie/archive/2013/11/14/outlook-and-rpc-end-point-the-microsoft-exchange-administrator-has-made-a-change-that-requires-you-quit-and-restart-outlook.aspx

http://blogs.technet.com/b/exchange/archive/2011/01/24/obviating-outlook-client-restarts-after-mailbox-moves.aspx

http://blogs.technet.com/b/exchange/archive/2012/05/30/rpc-client-access-cross-site-connectivity-changes.aspx

Mails Stuck In The Draft Folder


Today, I came across another interesting mail flow issue, where all mail was stuck in the Draft folder for every user when using OWA. You can imagine that mail flow was broken: none of the users could send any mail internally or externally.

The customer had been troubleshooting it for over 12 hours and had gone as far as reinstalling the operating system and Exchange 2013 with the /RecoverServer switch, but the issue remained.

When I started looking at the issue, I went through a series of basic transport troubleshooting steps for an Exchange 2013 multi-role server, such as checking all transport-related services, possible back pressure issues, and the state of all server components. Of course, there was nothing wrong with any of them.

Running out of ideas, I checked the settings of the Send Connector, just to make sure there was nothing out of the ordinary. In the Send Connector properties, the option to use the external DNS lookup settings on the transport server was checked.

There are not many reasons for any Exchange server to use an external DNS server for lookups. For this environment, it certainly was not needed either.

I unchecked the box and restarted the Transport service to speed up the process, but the issue remained.

I then ran Get-TransportService | FL *DNS* to make sure we didn’t have any external DNS settings configured.


Ah ha! The External DNS server setting was set. I ran a few tests with nslookup, and the DNS server did not respond to any queries. So that was probably the reason mail was not flowing.

To remove it, you have to run Set-TransportService -ExternalDNSAdapterEnabled $true -ExternalDNSServers $null.
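Put together, a minimal sketch of the check and the fix (“EX01” is a hypothetical server name):

# Check whether an external DNS server is configured on the transport service
Get-TransportService EX01 | Format-List ExternalDNSAdapterEnabled, ExternalDNSServers

# Clear the stale entry and fall back to the network adapter's DNS settings
Set-TransportService EX01 -ExternalDNSAdapterEnabled $true -ExternalDNSServers $null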

After restarting the transport service, all the mail in the Draft folder was gone. Mail flow was restored!

All Exchange 2013 Servers become unusable with permissions errors


Overview

The title might sound a bit scary but this one was actually a pretty easy fix. It’s a lesson in not digging yourself into a deeper hole than you’re already in during troubleshooting. I wish I’d had this lesson 10 years ago 🙂

Scenario

The customer was unable to log in to OWA, EAC, or the Exchange Management Shell on any Exchange 2013 SP1 server in their environment. The errors varied quite a bit; when logging into OWA they would get:

“Something went wrong…

A mailbox could not be found for NT AUTHORITY\SYSTEM.”

When trying to open EMS you would receive a wall of red text, essentially complaining about receiving a 500 Internal Server Error from IIS.

In the Application logs I would see an MsExchange BackEndRehydration Event ID 3002 error stating that “NT AUTHORITY\SYSTEM does not have token serialization permission”.

Something definitely seemed to be wrong with Active Directory as this was occurring on all 3 of the customer’s Exchange 2013 servers, one of which was also a DC (more on that later).

Resolution

So one of the first questions I like to ask customers is “when was the last time this was working?” After a bit of investigation I found out that the customer had recently been trying, unsuccessfully, to create a DAG from their 3 Exchange 2013 SP1 servers. They could get two of the nodes to join but the third would not (the one that was also a DC). The customer thought it was a permissions issue, so they had been “making some changes in AD” to try to resolve it. I asked if those changes were documented; the silence was my answer….. 🙂

However, this current issue was affecting all Exchange 2013 servers & not just the one that’s also a DC so I was a bit perplexed as to what could’ve caused this.

So a bit of time on Bing searching for Token Serialization errors brought me to MS KB2898571. The KB stated that if the Exchange Server computer account was a member of a restricted group then Token Serialization Permissions would be set to Deny for it. These Restricted Groups are:

  • Domain Admins
  • Schema Admins
  • Enterprise Admins
  • Organization Management

The KB mentioned running gpresult /scope computer /r on the Exchange servers to see if they were showing as members of any of the restricted groups (see the article for further detail & screenshots of the commands). I ran this command on all 3 Exchange 2013 servers & it showed their Computer accounts were all members of the Domain Admins group. In Active Directory Users & Computers I looked at each Exchange Server Computer account (on the Member Of tab) & unfortunately there were no direct memberships in the restricted groups, so I had to search the membership chain of each common group that the servers were members of. The common groups that all Exchange Server Computer accounts were members of were:

  • Domain Computers
  • Exchange Install Domain Servers
  • Exchange Servers
  • Exchange Trusted Subsystem
  • Managed Availability Servers

Eventually I found that the Exchange Install Domain Servers group had been added as a member of the Domain Admins group during the customer’s troubleshooting efforts to get all their servers added as DAG members. I removed the Exchange Install Domain Servers group from the Domain Admins group & then rebooted all of the Exchange servers. After the reboots the issues went away & the customer was able to access OWA/EMS.
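For reference, here is a rough sketch of the same investigation and cleanup from PowerShell (assumes the ActiveDirectory module; the group names are the ones from this scenario):

# On each Exchange server, see which groups the computer account resolves to
gpresult /scope computer /r

# Recursive membership shows computer accounts that end up in a restricted group through nesting
Import-Module ActiveDirectory
Get-ADGroupMember "Domain Admins" -Recursive | Where-Object { $_.objectClass -eq "computer" }

# Remove the nested group that was added during troubleshooting, then reboot the Exchange servers
Remove-ADGroupMember "Domain Admins" -Members "Exchange Install Domain Servers" -Confirm:$false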

Now this is where I had to explain to the customer that it was not supported to have an Exchange Server that was also a Domain Controller as a member of a Failover Cluster/DAG. This was why they were having such a hard time adding their Exchange server/DC as a member of their DAG.

Conclusion

I have a saying that I came up with called “troubleblasting”, i.e. “John doesn’t troubleshoot, he troubleblasts!” It started out as just a cheesy joke amongst colleagues back in college but I’ve started to realize just how dangerous it can be. It’s that state you can sometimes get into when you’re desperate, past the point of documenting anything you’re doing out of frustration, & just throwing anything you can up against the wall to see what sticks & resolves your issue. Sometimes it can work out for you, but sometimes it can leave you in a state where you’re worse off than when you started. Let this be a lesson to take a breath, re-state what you’re trying to accomplish, & ask whether what you’re doing is really the right thing given the situation. In this case, an environment was brought to its knees because a bit of pre-reading on supportability was not done beforehand & a permission change adversely affected all Exchange 2013 servers.

If you can make it to Exchange Connections in Las Vegas this September, I’ll be presenting a session on “Advanced troubleshooting procedures & tools for Exchange 2013”. Hopefully I can share some tips/tools from the field that have proven useful & can keep you from resorting to the “Troubleblasting Cannon of Desperation” 🙂

Exchange 2013 SP1 Breaks Hub Transport service


I had an issue last night where the on-call phone woke me up at 2 AM. I suspect we might see this often as Exchange admins start applying Exchange Server 2013 SP1.

After installing Exchange 2013 SP1, the MSExchange Transport service hangs at “Starting”, then eventually crashes with a couple of event IDs.

“Event ID 1046, MSExchange TransportService

Worker process with process ID 17836 requested the service to terminate with an unhandled exception. “

“Event ID 4999, MSExchange Common

Watson report about to be sent for process id: 2984, with parameters: E12IIS, c-RTL-AMD64, 15.00.0847.032, MSExchangeTransport, M.Exchange.Net, M.E.P.WorkerProcessManager.HandleWorkerExited, M.E.ProcessManager.WorkerProcessRequestedAbnormalTerminationException, 5e2b, 15.00.0847.030. “

The only way to get the Transport service to start is to disable all receive connectors and reboot the server. Does it sound familiar? My colleague Andrew Higginbotham wrote this article a few weeks ago. Although it was a different issue, custom receive connectors on a multi-role server are the key.

In my case, this is also a multi-role server (CAS and Mailbox on one box). The Hub Transport service listens on TCP port 2525, and Frontend Transport listens on TCP port 25. There were two custom receive connectors created with the Hub Transport role, both listening on TCP port 25. I’m not sure why they hadn’t had external mail flow issues before now, but it sure knows how to get your attention by breaking the Transport service.

If we disabled both custom receive connectors, the Transport service started fine. So we went ahead and changed the transport role from Hub Transport to Frontend Transport on both connectors with the Set-ReceiveConnector PowerShell cmdlet, then re-enabled them to test. The Hub Transport service stayed up without issue. Of course, we also rebooted the server to make sure the issue was fixed.
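Roughly, the change looked like this (a sketch; “EX01\Custom Relay” is a hypothetical connector identity):

# List connectors on the server with their roles and port bindings
Get-ReceiveConnector -Server EX01 | Format-Table Identity, TransportRole, Bindings -AutoSize

# Move a custom port-25 connector from the Hub Transport role to Frontend Transport
Set-ReceiveConnector "EX01\Custom Relay" -TransportRole FrontendTransport
Restart-Service MSExchangeTransport, MSExchangeFrontEndTransport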

 

 

Edit: Microsoft has released the following KB addressing the issue:

http://support.microsoft.com/kb/2958036

Incorrectly Adding New Receive Connector Breaks Exchange 2013 Transport


I feel the concepts surrounding this issue have been mentioned already via other sources (1, 2), but I’ve seen at least 5 recent cases where our customers were being adversely impacted by it, so it’s worth describing in detail.

Summary:

After creating new Receive Connectors on multi-role Exchange 2013 servers, customers may encounter mail flow/transport issues within a few hours or days, with symptoms such as:

  • Sporadic inability to connect to the server over port 25
  • Mail stuck in the Transport Queue both on the 2013 servers in question but also on other SMTP servers trying to send to/through it
  • NDRs being generated due to delayed or failed messages

This happens because the Receive Connector was incorrectly created (which is very easy to do), resulting in two services both trying to listen on port 25 (the Microsoft Exchange FrontEnd Transport Service & the Microsoft Exchange Transport Service). The resolution to this issue is to ensure that you specify the proper “TransportRole” value when creating the Receive Connector either via EAC or Shell. You can also edit the Receive Connector after the fact using Set-ReceiveConnector.

Detailed Description:

Historically, Exchange Servers listen on & send via port 25 for SMTP traffic as it’s the industry standard. However, you can listen/send on any port you choose as long as the parties on each end of the transmission agree upon it.

Exchange 2013 brought a new Transport Architecture & without going into a deep dive, the Client Access Server (CAS) role runs the Microsoft Exchange FrontEnd Transport Service which listens/sends on port 25 for SMTP traffic. The Mailbox Server role has the Microsoft Exchange Transport Service which is similar to the Transport Service in previous versions of Exchange & also listens on port 25. There are two other Transport Services (MSExchange Mailbox Delivery & Mailbox Submission) but they aren’t relevant to this discussion.

So what happens when both of these services reside on the same server (like when deploying Multi-Role, which is my recommendation)? In this scenario, the Microsoft Exchange FrontEnd Transport Service listens on port 25, since it is meant to handle inbound/outbound connections with public SMTP servers (which expect to use port 25). Meanwhile, the Microsoft Exchange Transport Service listens on port 2525. Because this service is used for intra-org communications, all other Exchange 2013 servers in the Organization know to send using 2525 (however, 2007/2010 servers still use port 25 to send to multi-role 2013 servers, which is why Exchange Server Authentication is enabled by default on your default FrontEndTransport Receive Connectors on a Multi-Role box; in case you were wondering).

So when you create a new Receive Connector on a Multi-Role Server, how do you specify which service will handle it? You do so by using the -TransportRole switch via the Shell or by selecting either “Hub Transport” or “FrontEnd Transport” under “Role” when creating the Receive Connector in the EAC.
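For example, here is a shell sketch of creating a relay connector that is handled by the FrontEnd Transport service from the start (the name, server, binding, and remote IP are hypothetical, and anonymous relay typically needs additional permissions beyond this):

# Create the connector with the FrontendTransport role so it is owned by the FrontEnd Transport service
New-ReceiveConnector -Name "Test-Relay" -Server EX01 -TransportRole FrontendTransport -Usage Custom -Bindings 0.0.0.0:25 -RemoteIPRanges 10.0.0.50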

The problem is there’s nothing keeping you from creating a Receive Connector of Role “Hub Transport” (which it defaults to) that listens on port 25 on a Multi-Role box. What you then have is two different services trying to listen on port 25. This actually works temporarily, due to some .NET magic that I’m not savvy enough to understand, but regardless, eventually it will cause issues. Let’s go through a demo.

Demo:

On a 2013 Multi-Role box with default settings, Netstat shows MSExchangeFrontEndTransport.exe listening on port 25 & EdgeTransport.exe listening on port 2525. These processes correspond to the Microsoft Exchange FrontEnd Transport & Microsoft Exchange Transport Services respectively.
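If you would rather query this than parse the Netstat output by eye, here is a small sketch (Get-NetTCPConnection assumes Server 2012 or later; on 2008 R2 stick with netstat -anob):

# Show which process owns each SMTP listener
Get-NetTCPConnection -LocalPort 25,2525 -State Listen |
    Select-Object LocalAddress, LocalPort, @{n="Process"; e={ (Get-Process -Id $_.OwningProcess).ProcessName }}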


Now let’s create a custom Receive Connector, as if we needed it to allow a network device to anonymously relay through Exchange (the most common scenario where I’ve seen this issue arise). In the new Receive Connector wizard, notice the option to specify which Role should handle this Receive Connector, and that Hub Transport is selected by default, as is port 25.


After adding this Receive Connector, the output of Netstat differs: we now have two different processes listening on the same port (25).


So there’s a simple fix to this. Just use Shell (there’s no GUI option to edit the setting after it’s been created) to modify the existing Receive Connector to be handled by the MSExchange FrontEndTransport Service instead of the MSExchange Transport Service. Use the following command:

Set-ReceiveConnector Test-Relay -TransportRole FrontEndTransport


I recommend you restart both Transport Services afterwards.

 

 

Update: In recent releases of Exchange 2013 (unsure which CU this fix was implemented in), the EAC will no longer let you misconfigure a receive connector in this way, so hopefully we should see less of this issue.

 

DatabaseCopyAutoActivationPolicy Setting Breaks Client Access in Exchange 2013 CU2


This issue comes fresh from a Microsoft Crit-Sit case I was just on for one of our customers.

Issue:

All client access was broken (specifically OWA) on a standalone multi-role Exchange 2013 CU2 server. Users would receive “The website cannot display the page” after authenticating to OWA. This started after the customer installed CU2.

Also, if you look in the HTTPProxy logs (C:\Program Files\Microsoft\Exchange Server\V15\Logging\HttpProxy\Owa) you would see the following error:

“ServerLocatorError,POST,,,,, The database with ID 5105a9bc-cfcd-4842-baaf-451561550e08 couldn’t be found. —> Microsoft.Exchange.Data.ApplicationLogic.Cafe.MailboxServerLocatorException: The database with ID 5105a9bc-cfcd-4842-baaf-451561550e08 couldn’t be found”


 

Resolution:

After a night’s worth of troubleshooting we escalated to Tier 3 in Microsoft Support & our resolution came from a setting I would not at all have expected. Before installing CU2, the customer had read a blog describing some maintenance steps to perform on his Exchange Server beforehand. One of them was running “Set-MailboxServer -Identity EXServerName -DatabaseCopyAutoActivationPolicy Blocked”. This customer did not have a DAG, so this command was not needed, but nonetheless it should have absolutely no ill effect on the ability of CAS to proxy requests to the mailbox server components. All this command should do is tell the DAG that no mailbox database copies can be automatically activated on this server; it would take an admin action to override this & activate the database. But again, no DAG, so it should not matter.

However in this case it was causing CAS to break as it could not find the mailbox database. I was able to replicate this issue in my own lab by setting my DatabaseCopyAutoActivationPolicy to Blocked on my two Exchange 2013 CU2 Mailbox Servers (also not in a DAG so the setting “should” not matter). After making the change & restarting some services I was greeted with the very same errors when trying to login to OWA. I also received the very same “ServerLocatorError” “The database with ID <GUID> couldn’t be found”.

So the resolution in this case is to just run “Set-MailboxServer -Identity EXServerName -DatabaseCopyAutoActivationPolicy Unrestricted”.
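To check where a server stands and put it back, a quick sketch (using the same “EXServerName” placeholder as above):

Get-MailboxServer EXServerName | Format-List Name, DatabaseCopyAutoActivationPolicy
Set-MailboxServer EXServerName -DatabaseCopyAutoActivationPolicy Unrestricted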

I was told Microsoft Support would escalate this internally but I am currently unsure if this affects only CU2 or all Exchange 2013 builds as my lab is only 2013 CU2. I’m also unsure if this only affects multi-role servers or only servers not in a DAG but I hope to test & report the findings.

Update: I’ve been told by others that this setting has this same impact on CU1 systems.

Update#2: We tried asking MS Support to classify this as a bug so it would be fixed (and so our support bill would be compensated, as is the case with all bugs). However, they would not agree to classify it as a bug. The answer from Support was “the fact that it was not easy to find is simply due to the complexity/functionality of our product”. We were told that if we wanted to push harder to classify it as a bug then we would first have to write up a business impact statement & then it could be tested/researched internally. However, if the Product Team did not deem it a bug then we would be charged for the hours spent testing. I’m pretty disappointed in this response.

Update#3: I’ve tested & this still affects CU3 systems.

Update#4: I’ve tested & this still affects SP1 systems.

Once again, Unchecking IPv6 on a NIC Breaks Exchange 2013


Background:

It seems like this sentiment has been preached widely, yet I still see customers do it. In fact, I’m writing this today because earlier this week I had a customer whose Information Store service, as well as the Exchange Transport services, would not start on Exchange 2013. Then earlier today a coworker actually did this in a lab, which caused the same issue.

Summary:

Let’s start off with this: the Exchange Server Product Team performs zero testing or validation on systems with IPv6 disabled. So that right there should be a good indicator that you’re trailblazing on your own in the land of Exchange (bring a flashlight, it’s dark & scary).

So I’m going to cover two very different things here:

  • Unchecking IPv6 on the NIC adapter (BAD)
  • Properly Disabling IPv6 in the registry (Ok but not recommended by MS)

Unchecking Method (BAD):

Let’s first talk about unchecking IPv6 on your NIC adapters. The problem with this is that while the OS still thinks it can & should be using IPv6, the NIC is unable to do so, which leads to communication issues. An easy way to test that your OS is still trying to use IPv6 is to ping localhost after you have unchecked IPv6 on your NIC & rebooted; you’ll see that you still get an IPv6 response. I actually did a write-up about this topic on the Sysadmin community on Reddit a while back which you can find here. As a side note, check out the Exchange community a colleague & I moderate on reddit here.
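As described above, the quick test looks like this (run it after unchecking IPv6 on the NIC and rebooting):

ping localhost
# A reply from ::1 means the OS is still trying to use IPv6 even though the NIC no longer can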

While doing this has always caused sporadic issues with Exchange, Exchange 2013 seems to be even more sensitive in this regard. Since RTM, I’ve seen half a dozen Exchange 2013 issues that were resolved by re-checking IPv6 on the NIC adapter & rebooting. Here’s what I’ve seen so far:

  • Having IPv6 unchecked when performing an Exchange 2013 install will result in a failed/incomplete installation, forcing you to perform a messy cleanup operation before you can continue.
  • Microsoft Exchange Active Directory Topology Service may not start if the Exchange 2013 server is also a Domain Controller and IPv6 has been unchecked. The solution is to re-check it & reboot the server.
  • Microsoft Exchange Transport Service as well as the Microsoft Exchange Frontend Transport, Microsoft Exchange Transport Submission, & Microsoft Exchange Transport Delivery services may not start if IPv6 has been unchecked on the NIC adapter of an Exchange 2013 Server.
  • Microsoft Exchange Information Store Service may not start if IPv6 has been unchecked on an Exchange 2013 Server.
  • NEW – See MVP Michael Van Horenbeeck’s post on how this can break the Hybrid Configuration Wizard

Disabling IPv6 in the Registry:

I started this post saying that MS does no testing or validation for systems with IPv6 disabled in ANY WAY. However, some customers may actually have reasons for disabling IPv6. I’m actually interested in hearing them, but I also know some customers are very adamant about it. There actually was an issue in the past where Outlook Anywhere wouldn’t work in certain scenarios with IPv6 enabled, but this should not be a problem with a fully updated Exchange Server (reference).

I’ll also say that I personally have never had any issues with properly disabling IPv6 in the registry using this method. You basically add a DisabledComponents DWORD value to the registry with a value of eight F’s (ffffffff) & then reboot the server. After that point IPv6 should be fully disabled. I’ve also spoken with a couple of Microsoft Support Engineers who said they have personally never seen any issues with disabling it this way, with Windows or Exchange. However, in my opinion you should have a good reason for doing so (and saying you don’t like IPv6 is NOT a good reason).
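Here is a sketch of that registry change (run from an elevated prompt, then reboot; the 0xffffffff value is the one described above, so double-check current Microsoft guidance before applying it):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters" /v DisabledComponents /t REG_DWORD /d 0xffffffff /f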

Lastly, I’d like to add that if you’re utilizing iSCSI on your Exchange server, there should be no issues with unchecking IPv6 on your iSCSI NICs if you choose to do so. This article is specifically in relation to NICs connected to your production/public/MAPI networks. As usual, follow your SAN vendor’s best practices when configuring iSCSI NICs.

Also, here’s a shameless plug for the ExchangeServer subreddit (http://www.reddit.com/r/exchangeserver) which I help moderate (username=ashdrewness). There’s always people such as myself answering questions on there.