NIC DNS Registration and Exchange Servers


Symptom

I recently worked with a customer who had introduced an Exchange 2013 Server into an existing Exchange 2007 environment. The issue was that the 2013 Server was unable to send email anywhere, neither externally nor to other Exchange Servers. If you executed the below command to view the status of the transport queues, you received the below output:

Get-Queue <Queue Identity> | FL

NIC

Specifically, the error message you would receive is “4.4.0 DNS query failed. The error was: DNS query failed with error ErrorRetry”

This is a fairly common error indicating there is an issue contacting the DNS Server or Servers that Exchange is configured to use.

Resolution

However, in this case the cause was not obvious unless you had seen this issue before or knew a little about the health checks Exchange uses to ensure it's healthy.

I remembered seeing a similar issue on a Reddit thread a while back, which led me to search and find the Microsoft KB article titled "'DNS query failed' error when an email message is stuck in the Draft folder in an Exchange Server 2013 environment".

This was the resolution in my scenario as well. To resolve the issue, I simply had to re-check the "Register this connection's addresses in DNS" option on the IPv4 Properties > Advanced > DNS tab of the primary NIC used for Active Directory communications. While you can uncheck this box on secondary NICs (such as those used for iSCSI, Replication, Backup, etc.), it should always remain checked on the MAPI/Primary NIC. I've also seen cases where having this unchecked on a 2013/2016 DAG node results in Managed Availability-triggered database failovers.
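
If you prefer to verify or fix this from PowerShell rather than the NIC properties dialog, the DnsClient cmdlets (Windows Server 2012 and later) can do it. This is a minimal sketch; the interface alias "Ethernet" is an assumption, so substitute the name of your AD-facing adapter:

# Check whether DNS registration is enabled on the AD-facing NIC ("Ethernet" is an example alias)
Get-DnsClient -InterfaceAlias "Ethernet" | Select-Object InterfaceAlias,RegisterThisConnectionsAddress

# Re-enable registration, then force the records to be registered rather than waiting for the next refresh
Set-DnsClient -InterfaceAlias "Ethernet" -RegisterThisConnectionsAddress $true
Register-DnsClient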

Emails from scanner to Exchange 2013 being sent as separate attachment


Scenario

After switching from hosted email to Exchange 2013 on-premises, a customer noticed that when using scan-to-email functionality, the .PDF files it created were not showing up as expected. Specifically, instead of receiving an email with the .PDF of the scanned document attached, users were receiving the entire original message as an attachment (which in turn contained the .PDF).

When the scanner was configured to send to an external recipient (Gmail in this case), the issue did not occur & the message was formatted as expected. The message was still being relayed through Exchange; it was just the recipient that made the difference. See the below screenshots for examples of each:

What the customer was seeing (incorrect format)

A

What the customer expected to see (correct format)

B

This may not seem like a big issue, but it resulted in users on certain mobile devices not being able to view the attachments properly.

Troubleshooting Steps

There were a couple of references on the MS forums to similar issues with older builds of 2013, but this server was up to date. My next path was to see if there were any Transport Agents installed that could've been causing these messages to be modified. I used many of the steps in my previous blog post "Common Support Issues with Transport Agents", including disabling two 3rd-party agents & restarting the Transport Service; the issue remained.

My next step was to disable the customer's two Transport Rules (Get-TransportRule | Disable-TransportRule); one managed attachment size while the other appended a disclaimer to all emails. This worked! By process of elimination (re-enabling one rule at a time), I was able to determine it was the disclaimer rule causing the messages to be modified.
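
If you want to isolate the offending rule without disabling everything at once, something like the following works (the rule name shown is hypothetical):

# List the rules along with their current state and priority
Get-TransportRule | Format-Table Name,State,Priority

# Disable a single suspect rule, send a test scan, then re-enable it
Disable-TransportRule "Append Disclaimer"
Enable-TransportRule "Append Disclaimer"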

Resolution

Looking through the settings of the rule, the first thing that caught my eye was the Fallback Option of "Wrap". Per this article from fellow MVP Pat Richard, Wrap will cause Exchange to attach the original message & then generate a new message with our disclaimer in it (which sounds exactly like our issue).

C
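
For reference, the disclaimer behavior can also be inspected and changed from the Shell. This is just a sketch; the rule name is an example:

# View the disclaimer rule's fallback action (Wrap, Ignore, or Reject)
Get-TransportRule "Append Disclaimer" | Format-List Name,ApplyHtmlDisclaimerLocation,ApplyHtmlDisclaimerFallbackAction

# Change the fallback action so Exchange skips the disclaimer rather than wrapping the original message
Set-TransportRule "Append Disclaimer" -ApplyHtmlDisclaimerFallbackAction Ignore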

However, making this change did not fix the issue, much to my bewilderment. There seemed to be something about the format of the email that Exchange did not like, probably caused by the formatting/encoding the scanner was using.

Ultimately, the customer was fine with simply adding an exception to the Transport Rule so that it would not apply to messages coming from the scanner's sender email address.

D
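
In the Shell, the same exception can be added with Set-TransportRule; the rule name and scanner address below are examples:

# Exclude messages from the scanner's sending address from the disclaimer rule
Set-TransportRule "Append Disclaimer" -ExceptIfFrom "scanner@contoso.com"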

 

Bad NIC Settings Cause Internal Messages to Queue with 451 4.4.0 DNS query failed (nonexistent domain)


Overview:

I’ve come across this with customers a few times now & it can be a real head scratcher. However, the resolution is actually pretty simple.

 

Scenario:

Customer has multiple Exchange servers in the environment, or has just installed a 2nd Exchange server. Customer is able to send directly out & receive in from the internet just fine but is unable to send email to/through another internal Exchange server.

This issue may also manifest itself as intermittent delays in sending between internal Exchange servers.

In either scenario, messages will be seen queuing & if you run "Get-Queue -Identity <QueueID> | Format-List" you will see a "LastError" of "451 4.4.0 DNS query failed. The error was: SMTPSEND.DNS.NonExistentDomain; nonexistent domain".
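
A quick way to find the affected queues without knowing their identities up front (a rough sketch, not specific to this scenario):

# Show any queue with messages in it, along with its last error
Get-Queue | Where-Object {$_.MessageCount -gt 0} | Format-List Identity,DeliveryType,Status,MessageCount,LastError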

 

Resolution:

This issue can occur because the properties of the Exchange Server's NIC have an external DNS server listed in them. Removing the external DNS server(s), leaving only internal DNS servers (Microsoft DNS/Active Directory Domain Controllers in most customer environments), and then restarting the Microsoft Exchange Transport Service should resolve the issue.
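
If you want to make the change from PowerShell, the DnsClient cmdlets can do it. The interface alias and DNS server addresses below are assumptions; substitute your adapter name and your internal DNS servers:

# Review the NIC's current DNS servers ("Ethernet" is an example alias)
Get-DnsClientServerAddress -InterfaceAlias "Ethernet" -AddressFamily IPv4

# Replace them with internal DNS servers only, then restart transport so it picks up the change
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 10.0.0.10,10.0.0.11
Restart-Service MSExchangeTransport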

 

Summary:

The Default Configuration of an Exchange Server is to use the local Network Adapter’s DNS settings for Transport Service lookups.

(FYI: You can alter this in Exchange 2007/2010 via EMS using the Set-TransportServer command or in EMC > Server Configuration > Hub Transport > Properties of Server, or in Exchange 2013 via EMS using the Set-TransportService command or via EAC > Servers > Edit Server > DNS Lookups. Using any of these methods, you can have Exchange use a specific DNS server.)
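
For example, a hedged sketch of pointing the 2013 Transport service at specific internal DNS servers rather than whatever the adapter has configured (the server name and addresses are examples):

# Have the Transport service use the listed internal DNS servers instead of the adapter's settings
Set-TransportService EX01 -InternalDNSAdapterEnabled $false -InternalDNSServers 10.0.0.10,10.0.0.11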

Because the default behavior is to use the local network adapter's DNS settings, Exchange was finding itself using external DNS servers for name resolution. This seemed to work fine when it had to resolve external domains/recipients, but a public DNS server will likely have no idea what your internal Exchange servers (e.g. Ex10.contoso.local) resolve to. The error we see is due to the DNS server responding, but simply not having the A record for the internal host that we require. If the DNS server you had configured didn't exist or wasn't reachable, you would actually see slightly different behavior (like messages sitting in "Ready" status in their respective queues).
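
You can see the difference for yourself with Resolve-DnsName; the host name and server addresses below are examples:

# A public resolver answers, but fails because it has no record for the internal-only name
Resolve-DnsName Ex10.contoso.local -Server 8.8.8.8

# An internal DNS server hosting the AD zone returns the A record
Resolve-DnsName Ex10.contoso.local -Server 10.0.0.10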

 

An Exchange server, or any domain-joined server for that matter, should not have its NICs' DNS settings pointed at an external/ISP's DNS server (even as a secondary). Instead, they should be set to internal DNS servers which hold all the records necessary to discover internal Exchange servers.

 

References

http://support.microsoft.com/kb/825036

http://technet.microsoft.com/en-us/library/bb124896(v=EXCHG.80).aspx

“The DNS server address that is configured on the IP properties should be the DNS server that is used to register Active Directory records.”

http://technet.microsoft.com/en-us/library/aa997166(v=exchg.80).aspx

http://exchangeserverpro.com/exchange-2013-manually-configure-dns-lookups/

http://thoughtsofanidlemind.com/2013/03/25/exchange-2013-dns-stuck-messages/

 

Incorrectly Adding New Receive Connector Breaks Exchange 2013 Transport


I feel the concepts surrounding this issue have been mentioned already via other sources (1 2), but I've seen at least 5 recent cases where our customers were adversely impacted by it, so it's worth describing in detail.

Summary:

After creating new Receive Connectors on Multi-Role Exchange 2013 Servers, customers may encounter mail flow/transport issues within a few hours or days, with symptoms such as:

  • Sporadic inability to connect to the server over port 25
  • Mail stuck in the Transport Queue, not only on the 2013 servers in question but also on other SMTP servers trying to send to/through them
  • NDRs being generated due to delayed or failed messages

This happens because the Receive Connector was incorrectly created (which is very easy to do), resulting in two services both trying to listen on port 25 (the Microsoft Exchange FrontEnd Transport Service & the Microsoft Exchange Transport Service). The resolution is to ensure that you specify the proper "TransportRole" value when creating the Receive Connector, either via the EAC or the Shell; you can also edit the Receive Connector after the fact using Set-ReceiveConnector, as shown below.
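
For example, a sketch of creating an anonymous-relay style connector with the correct role on a Multi-Role server (the connector name, server name, and IP range are examples):

# Create the connector against the FrontEnd Transport service so it lands on the correct listener
New-ReceiveConnector -Name "Test-Relay" -Server EX01 -TransportRole FrontendTransport -Custom -Bindings 0.0.0.0:25 -RemoteIPRanges 10.0.0.50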

Detailed Description:

Historically, Exchange Servers listen on & send via port 25 for SMTP traffic as it’s the industry standard. However, you can listen/send on any port you choose as long as the parties on each end of the transmission agree upon it.

Exchange 2013 brought a new Transport Architecture. Without going into a deep dive, the Client Access Server (CAS) role runs the Microsoft Exchange FrontEnd Transport Service, which listens/sends on port 25 for SMTP traffic. The Mailbox Server role has the Microsoft Exchange Transport Service, which is similar to the Transport Service in previous versions of Exchange & also listens on port 25. There are two other transport services (Microsoft Exchange Mailbox Transport Delivery & Mailbox Transport Submission) but they aren't relevant to this discussion.

So what happens when both of these services reside on the same server (like when deploying Multi-Role, which is my recommendation)? In this scenario, the Microsoft Exchange FrontEnd Transport Service listens on port 25, since it is meant to handle inbound/outbound connections with public SMTP servers (which expect to use port 25). Meanwhile, the Microsoft Exchange Transport Service listens on port 2525. Because this service is used for intra-org communications, all other Exchange 2013 servers in the Organization know to send to it on 2525. (However, 07/10 servers still use port 25 to send to Multi-Role 2013 servers, which is why Exchange Server Authentication is enabled by default on the default FrontEnd Transport Receive Connectors on a Multi-Role box, in case you were wondering.)

So when you create a new Receive Connector on a Multi-Role Server, how do you specify which service will handle it? You do so by using the -TransportRole parameter via the Shell or by selecting either "Hub Transport" or "FrontEnd Transport" under "Role" when creating the Receive Connector in the EAC.

The problem is that there's nothing keeping you from creating a Receive Connector with a Role of "Hub Transport" (which is the default) that listens on port 25 on a Multi-Role box. What you then have is two different services trying to listen on port 25. This actually works temporarily, due to some .NET magic that I'm not savvy enough to understand, but eventually it will cause issues. Let's go through a demo.

Demo:

Here’s the output of Netstat on a 2013 Multi-Role box with default settings. You’ll see MSExchangeFrontEndTransport.exe is listening on port 25 & EdgeTransport.exe is listening on 2525. These processes correspond to the Microsoft Exchange FrontEnd Transport & Microsoft Exchange Transport Services respectively.

1new
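
If you'd rather not read raw Netstat output, a rough PowerShell equivalent (assuming Windows Server 2012 or later) is:

# Map anything listening on ports 25 and 2525 back to its owning process
Get-NetTCPConnection -LocalPort 25,2525 -State Listen | Select-Object LocalAddress,LocalPort,@{n='Process';e={(Get-Process -Id $_.OwningProcess).Name}}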

Now let's create a custom Receive Connector, as if we needed it to allow a network device to anonymously relay through Exchange (the most common scenario where I've seen this issue arise). In the first screenshot, notice the option to specify which Role should handle this Receive Connector. Also notice how Hub Transport is selected by default, as is port 25.

3

4

5

After adding this Receive Connector, see how the output of Netstat differs. We now have two different processes listening on the same port (25).

6

So there's a simple fix to this. Just use the Shell (there's no GUI option to edit the setting after the connector has been created) to modify the existing Receive Connector so it's handled by the Microsoft Exchange FrontEnd Transport Service instead of the Microsoft Exchange Transport Service. Use the following command:

Set-ReceiveConnector Test-Relay -TransportRole FrontEndTransport

7

I recommend you restart both Transport Services afterwards.
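
Assuming the default Windows service names, that would be:

# Restart both services so each re-binds to its intended port
Restart-Service MSExchangeFrontEndTransport, MSExchangeTransport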

 

 

Update: In recent releases of Exchange 2013 (I'm unsure which CU this fix was implemented in), the EAC will no longer let you misconfigure a Receive Connector in this way, so hopefully we should see this issue less often.