Wednesday, September 26, 2012

Removing the requirement to specify domain name with Single Signon for Remote Desktop Services

Windows Server 2008 R2 Remote Desktop Services single sign-on gives users the ability to log in to RD Web Access and launch applications without having to provide their credentials twice.  While single sign-on is great, it does not just work out of the box; there are a few things you need to configure first.  These steps are documented in the following blog post:

http://blogs.msdn.com/b/rds/archive/2009/08/11/introducing-web-single-sign-on-for-remoteapp-and-desktop-connections.aspx

What is not documented, however, is that for single sign-on to work by default, users must log in with:

DOMAIN\Username

For example, the user bugs.bunny in the DOMAIN domain would enter DOMAIN\bugs.bunny.

If a user logs in with just the username, such as bugs.bunny as shown in the screenshot below, then when the user enters RD Web Access and attempts to launch a remote application they will receive the error below.

 
Error experienced:

Your computer can't connect to the remote computer because an error occurred on the remote computer that you want to connect to.  Contact your network administrator for assistance.


Note: This error message is generic and is presented for a wide range of problems relating to RDS.

For this Active Directory environment we want users to log in by simply entering their username; we do not want them to have to specify the domain name.  To do this, perform the following procedure:

 

1.     Log in to the server hosting the Remote Desktop Web Access role with local or domain administrative permissions.

2.     Navigate to the following location:

 %windir%\Web\RDWeb\Pages\<the language of your location, for example en-US>\

3.     Back up the login.aspx file to another location.

4.     Right-click the login.aspx file and select Edit. The file will open in your default HTML editor.

5.     Change the original code section:

<input id="DomainUserName" name="DomainUserName" type="text" class="textInputField" runat="server" size="25" autocomplete="off" />

to be:

<input id="DomainUserName" name="DomainUserName" type="text" class="textInputField" runat="server" size="25" autocomplete="off" value="domainname\" />
 
6.     Save the modification.


Now when users access the RD Web Access portal, the username field will already be populated with domainname\ and they only need to append their username.
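If you need to make this change on several RD Web Access servers, the edit can also be scripted. The following is a rough PowerShell sketch, assuming the default install path, an en-US pages folder and the placeholder domain name domainname; adjust these for your environment and test against a copy of login.aspx first.

# Assumptions: default RD Web Access install path, en-US locale folder, placeholder domain name
$login = Join-Path $env:windir 'Web\RDWeb\Pages\en-US\login.aspx'

# Keep a backup of the original file
Copy-Item $login "$login.bak"

# Add value="domainname\" to the DomainUserName input field only
(Get-Content $login) | ForEach-Object {
    if ($_ -match 'id="DomainUserName"') {
        $_ -replace 'autocomplete="off"', 'autocomplete="off" value="domainname\"'
    } else {
        $_
    }
} | Set-Content $login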

 

Sunday, September 23, 2012

Exchange 2010 Randomly Losing Access to Active Directory

I had an issue at a customer site where a virtualised multi-role Exchange 2010 server was randomly losing access to Active Directory.  There were two Active Directory domain controllers holding the Global Catalog role in the same Active Directory site as the Exchange 2010 server, with a high-speed 1 Gbps LAN between the servers.

When the issue occurred, Exchange 2010 would begin logging the generic errors you receive whenever no Active Directory domain controller is available.  Some of these errors include:

Log Name:      Application
Source:        MSExchange ADAccess
Date:          13/08/2012 8:58:37 AM
Event ID:      2114
Task Category: Topology
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      Exchange2010.domain.local
Description:
Process STORE.EXE (PID=3788). Topology discovery failed, error 0x80040952 (LDAP_LOCAL_ERROR (Client-side internal error or bad LDAP message)). Look up the Lightweight Directory Access Protocol (LDAP) error code specified in the event description. To do this, use Microsoft Knowledge Base article 218185, "Microsoft LDAP Error Codes." Use the information in that article to learn more about the cause and resolution to this error. Use the Ping or PathPing command-line tools to test network connectivity to local domain controllers.




Log Name:      Application
Source:        MSExchange ADAccess
Date:          13/08/2012 9:01:56 AM
Event ID:      2103
Task Category: Topology
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      Exchange2010.domain.local
Description:
Process MSEXCHANGEADTOPOLOGYSERVICE.EXE (PID=1468). All Global Catalog Servers in forest DC=internal,DC=domain,DC=com are not responding:
DC1.domain.local
DC2.domain.local



Log Name:      Application
Source:        MSExchange ADAccess
Date:          13/08/2012 9:04:56 AM
Event ID:      2604
Task Category: General
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      Exchange2010.domain.local
Description:
Process MSEXCHANGEADTOPOLOGY (PID=1468). When updating security for a remote procedure call (RPC) access for the Microsoft Exchange Active Directory Topology service, Exchange could not retrieve the security descriptor for Exchange server object Exchange2010 - Error code=80040934.
 The Microsoft Exchange Active Directory Topology service will continue starting with limited permissions.



Log Name:      Application
Source:        MSExchange ADAccess
Date:          13/08/2012 9:07:56 AM
Event ID:      2501
Task Category: General
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      Exchange2010.domain.local
Description:
Process MSEXCHANGEADTOPOLOGY (PID=1468). The site monitor API was unable to verify the site name for this Exchange computer - Call=HrSearch Error code=80040934. Make sure that Exchange server is correctly registered on the DNS server.


 
When this issue was occurring I verified that the Exchange 2010 server was successfully talking to a domain controller in the same Active Directory site by issuing the following command from a command prompt:
 
NLTEST /DSGETDC:domain.local
 
The problem was with the Exchange 2010 application itself randomly losing access to Active Directory.
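It can also be useful to check which domain controllers Exchange itself thinks it is talking to. A quick check from the Exchange Management Shell, using the server name Exchange2010 from the event logs above:

# Show the domain controllers and global catalogs the Exchange server is currently using
Get-ExchangeServer -Identity Exchange2010 -Status |
    Format-List Name, CurrentDomainControllers, CurrentGlobalCatalogs, CurrentConfigDomainController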
 
After further diagnosis I made the following changes to the Windows TCP network stack on the Exchange 2010 server:
 
netsh int tcp set global chimney=disabled
netsh int tcp set global rss=disabled
netsh int tcp set global taskoffload=disabled
netsh int tcp set global autotuninglevel=disabled
 
 
This resolved the problem.
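You can verify that the settings have taken effect by listing the current global parameters:

# TCP global parameters (chimney, RSS, auto-tuning)
netsh int tcp show global

# IP global parameters (the task offload state is reported here)
netsh int ip show global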

Only run these commands on your Exchange 2010 server if you are sure that there is an Active Directory domain controller in the same Active Directory site as your Exchange 2010 server and the Exchange 2010 server is able to communicate with it.  Ensure you rule out all other possible causes first, such as network, storage, CPU and memory bottlenecks.

Hope this post has been helpful.

Thursday, September 20, 2012

Reset Backup Exec 2012 to Factory Defaults

Symantec has published an article on how to restore Backup Exec to factory defaults using BEUtility.exe; however, this article only works for Backup Exec 11D/12.0/12.5/2010/2010 R2/2010 R3.  The article can be found here:

http://www.symantec.com/business/support/index?page=content&id=TECH66780

When trying to perform this procedure on Backup Exec 2012 it fails with the following error message:

Error: Unable to drop database


This is a bug in Backup Exec 2012; however, there is a workaround. To restore the database to factory defaults, perform the following procedure (a scripted version of these steps follows the list):

1. Stop all Backup Exec services.
2. Go to C:\Program Files\Symantec\Backup Exec\Data
3. Rename the current database file bedb_dat.mdf to something else.
4. Rename the current database log file bedb_log.ldf to something else.
5. Now rename the bedb_dat.bak file to bedb_dat.mdf.
6. Now rename the bedb_log.bak file to bedb_log.ldf.
7. Now restart all Backup Exec services.
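As mentioned above, the same steps can be scripted. Below is a rough PowerShell sketch that assumes the default installation path shown above and that all of the Backup Exec services have display names starting with "Backup Exec"; verify both on your own media server before running it. Copying (rather than renaming) the .bak files keeps the factory-default copies available for next time.

# Assumes the default Backup Exec 2012 data path and "Backup Exec*" service display names
$data = 'C:\Program Files\Symantec\Backup Exec\Data'

# Stop all Backup Exec services
Get-Service -DisplayName 'Backup Exec*' | Stop-Service -Force

# Keep the current database and log files under different names
Rename-Item (Join-Path $data 'bedb_dat.mdf') 'bedb_dat.mdf.old'
Rename-Item (Join-Path $data 'bedb_log.ldf') 'bedb_log.ldf.old'

# Put the factory-default copies in their place
Copy-Item (Join-Path $data 'bedb_dat.bak') (Join-Path $data 'bedb_dat.mdf')
Copy-Item (Join-Path $data 'bedb_log.bak') (Join-Path $data 'bedb_log.ldf')

# Restart the Backup Exec services
Get-Service -DisplayName 'Backup Exec*' | Start-Service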


Monday, September 17, 2012

View Mailbox Sizes for Exchange 2003 and Exchange 2010 through PowerShell

If you need to view mailbox sizes for users in your Exchange organisation, you can do this from the Exchange Management Shell (EMS): natively for Exchange 2007/2010 mailboxes, and via WMI for mailboxes still on Exchange 2003.

For your Exchange 2007/2010 users use the following command from EMS:

get-mailboxstatistics | fl displayname,totalitemsize
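If you want the output sorted largest-first with an item count, a slightly expanded version is below. This is only a sketch - the database name is a hypothetical example, and depending on your environment you may need to scope Get-MailboxStatistics with -Server, -Database or -Identity:

# List mailboxes in a database largest-first (the database name is a hypothetical example)
Get-MailboxStatistics -Database "Mailbox Database 01" |
    Sort-Object TotalItemSize -Descending |
    Select-Object DisplayName, ItemCount, TotalItemSize |
    Format-Table -AutoSize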

For your Exchange 2003 users use the following command from EMS:

Get-Wmiobject -namespace root\MicrosoftExchangeV2 -class exchange_mailbox -computer Ex2003ServerName | sort -desc size | select storageGroupName,StoreName,MailboxDisplayName,Size,TotalItems

Wednesday, September 12, 2012

Microsoft Axes the Forefront Product Suite

Today Microsoft announced that the Forefront product suite is being discontinued.  Gartner started the rumours quite some time ago, claiming that Microsoft was no longer going to continue Threat Management Gateway; however, who would have thought this would extend to the entire Forefront product suite.

Please read the following article by Forefront TMG MVP, Richard Hicks:

http://tmgblog.richardhicks.com/2012/09/12/forefront-tmg-2010-end-of-life-statement/

For the official announcement from Microsoft please see:

http://blogs.technet.com/b/server-cloud/archive/2012/09/12/important-changes-to-forefront-product-roadmaps.aspx

Outlook can finally deal with Passwords Expiring

Outlook has never had the ability to deal with passwords expiring, until now!  The Microsoft Outlook team has released updates for Outlook 2010 and 2007 that provide Office 365 users with password expiration notifications. The advance password expiry notification will be displayed in a pop-up message (near the system clock) within a certain time period before their password actually expires. That time period is configurable by the tenant admin (see links below for more info). For users whose passwords have already expired, Outlook will flash an error message when users try to connect to their mailbox. In both scenarios, Outlook also provides a link (URL) to update passwords via the browser. When users click on those links, they are taken to the Microsoft Online Portal to change/update their passwords.

Very cool!

The knowledge base article for this update can be found under KB2745588.

Tuesday, September 11, 2012

Windows Server 2012 IIS8 Server Name Indication

A new version of Windows Server is about to be upon us, Windows Server 2012, and with it comes a new version of Internet Information Services (IIS), version 8.  IIS 8 includes a cool new feature called Server Name Indication (SNI).

In previous versions (5, 6, 7, 7.5, etc.) we have always had the ability to host multiple websites on the same IP address/port using HTTP/1.1 virtual hosting, i.e. "host headers", where the web server looks at the host name entered into the browser and directs the user to the appropriate site.  Of course, if a user accesses a website by IP address, the host header will not work.

IIS has also supported host headers for HTTPS sites for quite some time; however, this has always been harder to configure, with manual editing of the IIS metabase required in previous versions of IIS (see http://clintboessen.blogspot.com.au/2009/03/how-to-setup-ssl-host-headers-iis6.html).  Although SSL host headers were supported, there was one problem administrators faced: there was no way to bind a different digital certificate to each HTTPS website.

Now with IIS 8 in Windows Server 2012, support has been added for an extension to the SSL and TLS protocols that indicates which hostname the client is attempting to connect to at the start of the handshake.  This allows the IIS 8 server to present a different certificate depending on the requested hostname, and hence allows multiple secure (HTTPS) websites to be served from the same IP address and port without requiring all those sites to use the same certificate.  Multiple digital certificates assigned to the same IP/port - very cool.
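As a rough idea of what this looks like in practice, an HTTPS binding with SNI enabled can be created from PowerShell on Windows Server 2012 using the WebAdministration module and the new SslFlags parameter. The site and host names below are hypothetical examples:

# Requires the WebAdministration module that ships with IIS 8 on Windows Server 2012
Import-Module WebAdministration

# SslFlags 1 marks the binding as SNI-based, so the certificate is selected by host name
New-WebBinding -Name "Contoso Web" -Protocol https -Port 443 -HostHeader "www.contoso.com" -SslFlags 1

# A certificate still has to be assigned to the binding, for example through the
# site bindings dialog in IIS Manager on the server.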

I'm sure we will see many changes to applications which leverage IIS adopting this new technology.

Monday, September 10, 2012

Internal Names and Public Certificates

I have just found out today that internal domain names are no longer supported on public certificates.  Please view the following article by DigiCert.

http://www.digicert.com/internal-names.htm

For Exchange this is going to increase the need for split DNS within organisations so that customers can use the same address for both the internal and external URLs.  However, there are scenarios that I can see being a problem moving forward.

When setting up a Remote Desktop Gateway server (for RDP over HTTPS) you need two public certificates, or one certificate with multiple subject alternative names. One certificate, such as "rdpgateway.example.com", terminates the SSL endpoint of the RD Gateway server and is bound within Internet Information Services.  The second certificate, such as "terminalserver01.domain.local", requires the internal name of the Remote Desktop Session Hosts or Terminal Servers so that the RDP traffic can be digitally signed; this server certificate needs to be installed on the terminal server(s) themselves, with the name matching their internal FQDN.  Most companies do not install digital certificates to sign RDP traffic; instead they leave the default self-signed certificate on the servers (which does not show up in the local MMC certificates store).  This is why you always see the following warning when initiating a remote desktop connection to a server:


Now, we could use an internal certificate authority to issue the certificates for our RD Session Hosts; however, this would require all computers that access the RD farm to be on the Active Directory domain so that they trust the internal certificate authority.  What about users who connect from machines that are not members of the Active Directory domain?  One of my clients develops an application and sells it by presenting it to customers as a RemoteApp, meaning computers all over the world are launching this application.  Without a public certificate containing internal names, those customers would receive warnings about the RDP traffic being untrusted.

I spoke to a representative from DigiCert about this today and ran this example past him.  The advice he gave me was to rename the Active Directory forest to "local.example.com" so that the domain ends with a .com suffix.  I do not see this as practical, especially for large Active Directory domains consisting of thousands of users.

I wonder what other headaches these changes to the certificate standard will present for IT professionals around the world.

Please feel free to leave your comments on the matter.

Tuesday, September 4, 2012

The Limit for Outlook OST Files

How big can your Outlook OST file grow for cached Exchange mode?  Well the answer to this is BIG.  Outlook 2003/2007 out of the box has a 20GB limit on OST files, while Outlook 2010 has a 50GB limit on OST files.

This is documented by Microsoft on the following KB article:

http://support.microsoft.com/kb/832925

While these limits are in place by default, they can be raised by modifying the MaxLargeFileSize DWORD registry value located under the following locations:

Outlook 2010

The policy location for the registry entries is located in the following path in Registry Editor:
HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\14.0\Outlook\PST

The user preference location for the registry entries is located in the following path in Registry Editor:
HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Outlook\PST

Outlook 2007

The policy location for the registry entries is located in the following path in Registry Editor:
HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\12.0\Outlook\PST

The user preference location for the registry entries is located in the following path in Registry Editor:
HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Outlook\PST

Outlook 2003

The policy location for the registry entries is located in the following path in Registry Editor:
HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\11.0\Outlook\PST

The user preference location for the registry entries is located in the following path in Registry Editor:
HKEY_CURRENT_USER\Software\Microsoft\Office\11.0\Outlook\PST
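As an illustration, the sketch below raises the limit for Outlook 2010 by setting MaxLargeFileSize (and the matching WarnLargeFileSize warning threshold) under the user preference key shown above. Both values are in megabytes; 80 GB and 76 GB are hypothetical example values only, and Outlook should be closed when you make the change:

# Hypothetical example: raise the Outlook 2010 Unicode OST/PST limit to 80 GB
$key = 'HKCU:\Software\Microsoft\Office\14.0\Outlook\PST'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }

# MaxLargeFileSize = hard limit, WarnLargeFileSize = warning threshold (both in MB)
Set-ItemProperty -Path $key -Name MaxLargeFileSize  -Type DWord -Value 81920
Set-ItemProperty -Path $key -Name WarnLargeFileSize -Type DWord -Value 77824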

So what is all this talk about PST and OST files being limited to 2 GB in size?  ANSI (the format previously used for OST/PST files) is limited to 2 GB - and it does not handle hitting this limit very well.

The format now used is Unicode, and its actual upper limit is unknown.  It is believed to be in the terabytes, perhaps around 4 TB, but the limit has never been tested (nor could it practically be, for performance reasons).

Move Messages to Another Working Queue

In the event a Hub Transport server is completely down, you may need to move all messages in one of its queues to another Hub Transport server in your organisation to ensure the messages are delivered.  How can you do this?

First you need to export all messages in the affected queue.  You can do this with the following PowerShell commands:

$array = @(Get-Message -Queue "QueueName" -ResultSize unlimited)

$array | ForEach-Object {$i++;Export-Message $_.Identity | AssembleMessage -Path ("c:\MailsExport\"+ $i +".eml")}
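Note that Export-Message only works on messages that are in a suspended state, and the destination folder should already exist, so a little preparation is usually needed first (the queue and folder names are the same placeholders used above):

# Create the export folder and suspend the messages so they can be exported
New-Item -Path "c:\MailsExport" -ItemType Directory -Force | Out-Null
Get-Message -Queue "QueueName" -ResultSize Unlimited | Suspend-Message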

To import the messages on the new Hub Transport server, simply place the .eml files into that server's transport Pickup directory.  The server should start processing the messages almost immediately.
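The location of the Pickup directory can be confirmed before copying the files across. A minimal sketch, assuming the exported files from the earlier commands and a hypothetical target server named HUB02:

# Confirm the Pickup directory path on the target Hub Transport server (hypothetical server name)
Get-TransportServer -Identity HUB02 | Format-List Name, PickupDirectoryPath

# Copy the exported .eml files to that path (the Exchange 2010 default is shown; adjust to the value returned above)
Copy-Item "c:\MailsExport\*.eml" "\\HUB02\c$\Program Files\Microsoft\Exchange Server\V14\TransportRoles\Pickup"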