Author Archives: Joe Stocker

Offline Root CAs require periodic maintenance

In most environments where an offline Root CA is used, it must come back online once every 7 months to provide the subordinate CAs with an updated CRL. If this does not happen, the subordinate CA will stop issuing certificates: the CA service on the subordinate will no longer start, and the error message will be “The revocation function was unable to check revocation because the revocation server was offline.”

I recommend performing the following steps every 6 months (to allow for a 30-day cushion):

1. Power up the Offline Root CA

2. On the Offline Root, run this command:
certutil -crl

3. The command above re-issues the CRL. Now copy the new CRL file from the c:\windows\system32\certsrv\certenroll directory to the Subordinate Issuing CA

4. The next step is to install the CRL into the Subordinate CA with this command:

certutil -addstore CA <name of file>
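The arithmetic behind the 6-month cadence can be sketched in a few lines. This is an illustrative sketch only; it assumes the root CA publishes its CRL with roughly a 7-month validity period, per the schedule above.

```python
from datetime import date, timedelta

# Assumed ~7-month CRL validity with a 30-day cushion, per the schedule above.
CRL_VALIDITY = timedelta(days=213)  # roughly 7 months
CUSHION = timedelta(days=30)

def next_maintenance(last_published: date) -> date:
    """Latest date to power on the offline root and run certutil -crl again."""
    return last_published + CRL_VALIDITY - CUSHION

print(next_maintenance(date(2014, 1, 1)))  # 2014-07-03, about six months later
```

If the CRL is published on January 1st, the maintenance deadline lands in early July, which is where the "every 6 months" recommendation comes from.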

CA best practices and maintenance procedures are located here:
http://technet.microsoft.com/en-us/library/cc782041(v=ws.10).aspx

How to prevent Ransomware from infecting your Enterprise Applications

Everyone has heard of spyware and malware. Ransomware is becoming an all too familiar term, but many IT organizations assume it is a threat isolated to consumers and not enterprises. In my opinion, most IT organizations are uneducated about the attack vectors that ransomware can use to infect an IT infrastructure.

Case in point, most companies that I interact with do not prevent their IT System Administrators from using Internet Explorer (or other web browsers) from the console of their servers. Only a handful of companies that I have encountered over the years actually restrict outbound TCP connections on the firewall to thwart IT Sys Admins from web browsing on server consoles.

Why is this significant, and how does this behavior relate to the topic of Ransomware? This is the attack vector that most IT Organizations are unaware of. Most of the IT Systems Administrators that I have encountered have justified their behavior of using a web browser on a server by stating that they are smart enough to only browse “Safe” websites to download hotfixes, patches, or search for error messages on IT forums. It is that false assumption that can allow Ransomware to infect an Enterprise. I will explain below how an Enterprise Application such as Microsoft Exchange Server could be taken down by such behavior.

The alarm needs to be sounded because leading security researchers have proven that the most successful attack vectors are exploited by hackers placing advertisements on legitimate web sites. You could be browsing a completely legitimate and “trusted” web site, but because of an advertisement that contains malicious code, your web browser becomes the attack vector that downloads a payload into your infrastructure! Today, June 10th 2014, Microsoft released hotfixes for 59 vulnerabilities in Internet Explorer. This shows that attackers are going after the web browser to target the enterprise! Hackers are smart enough not to hit an enterprise head on by attacking the firewall. Instead they target the weak points in the infrastructure, namely the end user who is browsing legitimate web sites. Some of these vulnerabilities are “zero day,” meaning that attackers discovered the vulnerability before the good guys and no patch is available to fix the problem. These lurking vulnerabilities can lie dormant on a web server for weeks or months before being discovered.

Now, imagine if one of your Domain Administrators browsed a legitimate web site which contained an advertisement placed by a hacker. It is safe to assume that any server that Domain Admin had access to could now be “owned” by ransomware, because most of the recent advanced persistent threats (APTs) spread by multiple attack vectors once they infect just a single host. Once ransomware lands on a host, the only way to unlock the data is to pay the ransom! When searching for products to remove the ransomware, use caution, because many of these so-called cures are actually viruses that masquerade as ransomware removal tools!

I think most readers would agree that we are now talking about a very real scenario, because we are talking about legitimate websites that have been compromised with advertisements. IT sysadmins who use privileged accounts and browse the web to search for solutions to error messages (a common sysadmin task) are the most at risk. They are also exposed when downloading patches or drivers onto a server directly from the internet, because it is more convenient than copying them over the network from a workstation.

I highly recommend reading this Cyber Heist newsletter (not from your server console, and not while logged in with your Domain Administrator account!). In this newsletter, the author describes the latest advances in ransomware and I promise it will open your eyes to just how bad things have gotten! I don’t blame you if you were too paranoid to click on the link after reading this blog. =)

 

The threat to Enterprise Applications: Case Study: Microsoft Exchange

The Microsoft Exchange “Preferred Architecture” was published by Microsoft on April 21st 2014 and recommends against traditional backups. I think you know where this is going if you read the Cyber Heist Newsletter referenced above.

“With all of these technologies in play, traditional backups are unnecessary; as a result, the PA leverages Exchange Native Data Protection.”

Gulp.

The limitation of Exchange Native Data Protection (mailbox replication) is that all copies of the mailbox data are accessible from the Layer 3 IP network (a requirement for replication to work). The doomsday scenario is that a worm or skilled hacker could destroy or “ransom” all copies of the data, leaving an organization with 100% data loss. Not only is Office 365 susceptible to this threat, but so are all customers who follow Microsoft’s preferred architecture.

Therefore, Exchange administrators should carefully consider the risk of a worm or hacker before completely eliminating traditional backups. All other layers in your defense-in-depth security apparatus had better be airtight! For example, you would have less risk if you deploy a whitelisting solution such as Bit9, Lumension, or Microsoft AppLocker. However, it’s nearly impossible to eliminate all risk: according to the McAfee Phishing Quiz, 65 percent of respondents can’t properly identify email scams, so the human responsible for deciding what to allow into the whitelist could be tricked into trusting ransomware.

 

Prevention

  1. To reduce the risk of ransomware spreading to servers, prevent IT administrators from being able to browse web pages while logged onto a server. If servers are located in a separate IP subnet, create an ACL to block outbound 80 and 443 requests from the server subnet. The caveat is that you could break applications that rely on external connections to the internet, so consider enabling the ACL in logging mode first, building a whitelist of allowed sites, and then blocking everything else. The downside is the added administrative burden on the firewall administrator to maintain the ACL. However, the alternative of permitting an IT administrator to browse websites while logged onto servers is to accept the risk of infecting the entire server farm with a worm, virus, or ransomware.
  2. Create an IT policy for administrators to sign stating that they will not browse the internet using privileged accounts such as Domain Admin credentials on any workstation. Consider deploying a proxy server that uses RADIUS or Windows authentication, and only allow a global group that does not contain these admin accounts.
  3. Research commercially available whitelisting solutions (e.g., Bit9, Lumension, or Microsoft AppLocker).
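The ACL logic in step 1 can be sketched as a simple default-deny rule. This is an illustrative sketch only: the subnet, ports, and whitelist entries below are invented for the example.

```python
import ipaddress

# Hypothetical server VLAN and a hypothetical allowed patch-download host.
SERVER_SUBNET = ipaddress.ip_network("10.0.50.0/24")
WEB_PORTS = {80, 443}
WHITELIST = {"13.107.4.50"}

def allow_outbound(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    src = ipaddress.ip_address(src_ip)
    if src in SERVER_SUBNET and dst_port in WEB_PORTS:
        return dst_ip in WHITELIST  # default-deny web traffic from servers
    return True  # traffic from other subnets is unaffected by this rule

print(allow_outbound("10.0.50.10", "198.51.100.7", 443))  # False: blocked
print(allow_outbound("10.0.50.10", "13.107.4.50", 443))   # True: whitelisted
```

Running the rule in logging mode first (as suggested above) is what lets you discover which destinations belong in the whitelist before flipping to enforcement.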

This approach would not prevent all worms, ransomware, and hackers from getting onto your servers, because modern advanced persistent threats (APTs) will spread and distribute themselves across multiple attack vectors. For example, just one infected laptop that has IP connectivity to the back-end servers could spread by taking advantage of a vulnerability in an unpatched third-party application. Even unpatched security products from the top security vendors have ironically been used to infiltrate a server. Therefore, Kevin Mitnick-style security awareness training is also recommended.

 

Disclaimer: This blog post is for educational use only. Neither I nor my employer is responsible for any actions you take or do not take as a result of reading this blog post.

An analysis of what is available in Azure RMS Usage Logging today

As a follow-up to my last blog post, “Configuring the Azure Rights Management Connector with a Windows FCI File Server,” this post looks at what RMS usage logging offers today. RMS can log every request that it makes for your organization, including requests from users, actions performed by RMS administrators in your organization, and actions performed by Microsoft operators to support your RMS deployment.

There are three limitations with user activity logging. The first is that the log files do not include the name of the document being accessed. For example, RMS will log that an unauthorized user attempted to access a document with a content-id of {GUID}, but unless you have access to the document with that {GUID}, you cannot correlate the content-id to the document name. This presents a catch-22: how do you know which document to extract the content-id from to begin with? (For a complete list of the log file contents, see Logging and Analyzing Azure Rights Management Usage on TechNet.)

I have to give the RMS team at Microsoft credit, because they are extremely responsive and interested in feedback. You can tell they really love this product and its success means a great deal to them. They may soon release a PowerShell script that allows you to extract the content-id from each document; you could then insert that into a SQL database containing a mapping of content-ids to document names. Keeping this SQL database updated would require a custom application to be written.

I am assuming that the content-id script will be posted to the RMS blog, Connect.Microsoft.com, or the RMS Yammer group when it is made available, since that is where the last announcement of Azure RMS PowerShell scripts was made:
http://blogs.technet.com/b/rms/archive/2014/04/11/microsoft-protection-powershell-cmdlets-ctp2.aspx

It would be much easier if the RMS user activity log contained a direct reference to the full document path (not just the file name), because a file name is unique only within the directory it resides in. For example:

F:\Share\Bank1\Purchase Order.docx

F:\Share\Bank2\Purchase Order.docx

As you can see, having only ‘Purchase Order.docx’ in the log is not sufficient during a forensic analysis. Technically you could extract the content-id from all documents named Purchase Order.docx and compare those to the log, but again, that is not efficient.

So my hope is that when Microsoft adds detail to the log file, they consider adding the full path and not just the file name. It would be even better if the path included the server name too, because otherwise you might have two servers in your organization like this:

Server 1 > F:\Share\Bank1\Purchase Order.docx

The contents of Server 1 are replicated via DFS to an off-site DR server named Server 2:

Server 2 > F:\Share\Bank1\Purchase Order.docx

So in this scenario, logging the path without the server name would not tell you which server was being attacked.
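The server-plus-path mapping described above could live in the small SQL database suggested earlier. Here is a minimal sqlite3 sketch; the GUIDs, server names, and paths are invented for illustration, and in practice the table would be populated by the forthcoming content-id extraction script.

```python
import sqlite3

# In-memory table mapping a content-id (as seen in the RMS log) to the
# server and full path of the protected document. All rows are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE rms_docs (
                    content_id TEXT PRIMARY KEY,
                    server     TEXT NOT NULL,
                    path       TEXT NOT NULL)""")
conn.executemany("INSERT INTO rms_docs VALUES (?, ?, ?)", [
    ("11111111-1111-1111-1111-111111111111", "Server1",
     r"F:\Share\Bank1\Purchase Order.docx"),
    ("22222222-2222-2222-2222-222222222222", "Server1",
     r"F:\Share\Bank2\Purchase Order.docx"),
])

def doc_for(content_id):
    """Resolve a content-id from the RMS log to (server, full path)."""
    return conn.execute(
        "SELECT server, path FROM rms_docs WHERE content_id = ?",
        (content_id,)).fetchone()

print(doc_for("11111111-1111-1111-1111-111111111111"))
```

With server and full path both stored, the two identically named Purchase Order.docx files above resolve unambiguously.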

A second limitation of configuring a Windows FCI file server with RMS is that it will only protect Microsoft Office file types. Although Azure RMS does have the ability, via the RMS Sharing App, to create “pfiles,” this functionality is not built into the Windows FCI file server API, and there is no command-line version of the RMS Sharing App. So if you need to automate the protection of all files on a file share (including having RMS protect both Microsoft and non-Microsoft file types), you could use the recently announced Microsoft protection PowerShell scripts (currently in Community Preview on Connect.microsoft.com) to create pfiles for non-Microsoft file types. You could also write your own .NET app using the Azure RMS SDK 2.1 with the File API. Writing a script that traverses a file structure and runs as a scheduled task would take a decent amount of development effort. Hopefully the script could apply the same Azure RMS template that the FCI file server is using, for consistency.

A third limitation has to do with automating the log file parsing. For example, if your organizational security policy requires that you be notified when an unauthorized access attempt occurs, you would need to write a program to access the logs directly in Azure storage. There are currently two vendors writing software to provide this level of logging; you can contact me at Joe dot Stocker at CatapultSystems.com and I can introduce you. Otherwise, the only out-of-the-box option today is to use a PowerShell cmdlet to download the log files and then manually open each one to inspect it for unauthorized access.

In summary, the user activity logging available right now is sufficient for organizations that need to satisfy an audit requirement that unauthorized access attempts are logged somewhere. Outside of that narrow requirement, in practical terms, you would need to hire a company like Catapult Systems to write custom code to alert you when unauthorized access takes place.

I would recommend asking the software developer to define the notification boundaries. For example, how do you define unauthorized access? Is it every time someone attempts to open a document they do not have rights to access? Do you really want to be notified of every failed attempt? Wouldn’t that fill up your inbox until you start ignoring those emails? Or would you prefer to be notified only when an unsuccessful access attempt is followed by a successful one (which could indicate that a brute-force attack succeeded)? Or perhaps you only care if more than 10 access attempts occur, rather than each individual one. As you can see, you will need to build some intelligence into whatever notification script you write. My hope is that the commercial market will produce solutions that apply best-of-breed approaches to log forensics and notifications.
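The two heuristics above (failure-then-success, and more than 10 attempts) can be sketched in a few lines. Real Azure RMS logs are in W3C extended format with many more fields, so treat these simplified (user, content_id, result) tuples as stand-ins for illustration.

```python
from collections import Counter

def alerts(records, threshold=10):
    """Yield alert tuples for the notification rules discussed above."""
    found = []
    denied = set()      # (user, content_id) pairs that were denied earlier
    counts = Counter()  # denied-attempt count per (user, content_id)
    for user, content_id, result in records:
        key = (user, content_id)
        if result == "Denied":
            counts[key] += 1
            denied.add(key)
            if counts[key] > threshold:
                found.append(("excessive-attempts", user, content_id))
        elif result == "Success" and key in denied:
            # A denial followed by a success may mean brute force paid off.
            found.append(("denied-then-success", user, content_id))
    return found

sample = [("alice", "doc-1", "Denied"),
          ("alice", "doc-1", "Success"),   # denied, then succeeded -> alert
          ("bob",   "doc-2", "Success")]   # normal access -> no alert
print(alerts(sample))  # [('denied-then-success', 'alice', 'doc-1')]
```

A production version would add time windows (e.g., only alert when the success follows the denials within minutes) to cut down noise further.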

It would be awesome if Microsoft added a report to Azure AD Premium for RMS log analysis. Similar reports already exist, so theoretically it would not be too difficult for Microsoft to extend those rules to the RMS logs.

For example, here are the security reports included in the Azure AD base (free) offering, followed by a comparison of what is available in AD Premium. The base offering has reports for:

  • Sign ins from unknown sources
  • Sign ins after multiple failures
  • Sign ins from multiple geographies

The Premium offering adds reports for:

  • Sign ins from IP address with suspicious activity
  • Irregular sign in activity
  • Users with anomalous sign in activity
  • Which users are most actively using an application
  • What devices a user has signed in from

Premium also offers email notification of anomalous behavior to Azure AD administrators. What we (customers and partners) want is similar notification for RMS activity logging when documents are accessed, using the same kinds of rules above. That should satisfy most audit requirements.

 

References

Logging and Analyzing Azure Rights Management Usage on Technet

http://technet.microsoft.com/en-us/library/dn529121.aspx

Configuring the Azure Rights Management Connector with a Windows FCI File Server

On March 4th, 2014, Microsoft announced support for integrating an on-premises File Classification Infrastructure (FCI) file server with the Azure Rights Management service using the Azure RMS Connector.

http://blogs.technet.com/b/rms/archive/2014/03/04/windows-server-fci-file-classification-now-supports-azure-rms.aspx

“FCI refers to the File Classification Infrastructure, a capability in Windows Server-based File Servers using the File Server Resource Manager feature which enables the server to scan local files and assess their content to determine if they contain sensitive data, and if they do classify them accordingly by tagging them with classification properties you define. Once files are classified, FCI can also automatically take action on these files, such as applying adequate RMS protection to the files to prevent them from leaking beyond their intended audience. All this happens in the blink of an eye without the users having to take action” 

Note: Files can also be classified manually by modifying the properties of a selected file or folder. This is done on the server-side, or within a Windows 8 client system after a group policy has been applied (http://technet.microsoft.com/en-us/library/dn268284.aspx)

Prerequisites

  • FCI requires Windows Server 2012 or 2012 R2.
  • Azure RMS has been activated within your Office 365 tenant.
  • Directory synchronization with your Office 365 tenant has been configured.
  • Users who need to work with RMS-protected documents have been granted an RMS license within the Office 365 portal.
  • Note: In my testing, the RMS Connector cannot be installed on the same server hosting the file share.


This walkthrough is for a stand-alone connector installation. For production deployments, a Hardware Load Balancer (HLB) and a minimum of two servers is recommended for high availability.

Download the RMS Connector here http://go.microsoft.com/fwlink/?LinkId=314106

Configuration Steps

Launch setup with Administrative Rights.


Enter your Office 365 Tenant Administrator Account information

 


Note: On the next screen, if you are deploying two servers for high availability, do not select Launch connector administrator console to authorize servers at this time. You will select this option after you have installed your second (or final) RMS connector. Instead, run the wizard again on at least one other computer. You must install a minimum of two connectors for HA.


To validate the installation, connect to http://<connectoraddress>/_wmcs/certification/servercertification.asmx, replacing <connectoraddress> with the server address or name that has the RMS connector installed. A successful connection displays a ServerCertificationWebService page.


Next, authorize the servers that can use the connector. As a best practice, create a group that contains these accounts and specify the group instead of individual server names.


Next, select the server role (ex: Exchange, SharePoint or an FCI Server)


Next, select an account used to authorize the selected role.


Note: It is important that you select computer accounts here, not user accounts. Best practice is to use a group rather than individual servers.


When finished adding servers, click Close.

 

The next step will be to configure an SSL Certificate on the RMS Connector. To enable the RMS connector to use TLS, on each server that runs the RMS connector, install a server authentication certificate that contains the name that you will use for the connector. For example, if your RMS connector name that you defined in DNS is rmsconnector.contoso.com, deploy a server authentication certificate that contains rmsconnector.contoso.com in the certificate subject as the common name. Or, specify rmsconnector.contoso.com in the certificate alternative name as the DNS value. The certificate does not have to include the name of the server. Then in IIS, bind this certificate to the Default Web Site.
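The name requirement above (connector name in the subject common name, or as a DNS value in the subject alternative name) can be sketched as a simple check. This is a simplified illustration: real TLS name matching is stricter (wildcards, SAN preferred over CN), and the hostnames below are the examples from the text plus an invented one.

```python
def cert_matches(connector_name: str, common_name: str, san_dns_names) -> bool:
    """True if the connector name appears as the cert's CN or a SAN DNS entry."""
    wanted = connector_name.lower()
    return wanted == common_name.lower() or \
           wanted in (name.lower() for name in san_dns_names)

# CN match: certificate issued with the connector name as its subject.
print(cert_matches("rmsconnector.contoso.com", "rmsconnector.contoso.com", []))
# SAN match: different subject, but the right DNS value in the SAN.
print(cert_matches("rmsconnector.contoso.com", "server01.contoso.local",
                   ["rmsconnector.contoso.com"]))
```

Either form satisfies the requirement; note that, as the text says, the certificate does not have to include the server's own name.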

Configuring a Windows Server 2012 or 2012 R2 file server for File Classification Infrastructure to use the connector

Download the RMS Server Configuration Tool script (“GenConnectorConfig.ps1”) from here http://go.microsoft.com/fwlink/?LinkId=314106

Run this on the file servers that you authorized in the previous step. If you have not already done so, set the PowerShell execution policy to allow the script to run.

Get-Help GenConnectorConfig.ps1 -Detailed

GenConnectorConfig.ps1 -SetFCI2012 -ConnectorUri http://rmsconnector.contoso.com

After running the tool, restart the File Server Resource Manager services, which refreshes the RMS templates on the server.


You can now create classification rules and file management tasks to protect documents with RMS policies. See File Server Resource Manager Overview for more information.

For example, after the classification rules have been configured, you can right-click a file in that folder, and then click Properties. Click the Classification tab, select the resource property you want to tag the folder with, click the value, and click OK.


You can then create file management tasks to apply RMS protection to documents when the conditions have been met, example: when Department is Research and Development.


When this condition is met, on the Action Tab, you can select RMS Encryption and apply the template you would like to use.


Now when saving a document into that folder, it will automatically inherit the proper RMS Template.


By default, Azure RMS comes with two built-in templates, but you can configure your own through the Azure management portal http://manage.windowsazure.com

After creating a new template, restart the File Server Resource Manager services, which refreshes the RMS templates on the server.


 

 

Next Steps: Configure RMS user activity logging

RMS can log every request that it makes for your organization, which includes requests from users, actions performed by RMS administrators in your organization, and actions performed by Microsoft operators to support your RMS deployment.

If you are only interested in logging the administration tasks performed in RMS, you can obtain this with the Get-AadrmAdminLog PowerShell cmdlet. Otherwise, if you are interested in how users are using RMS, you can use the RMS logs to support the following business scenarios:

  • Analyze for business insights.
    RMS writes logs in W3C extended log format into an Azure storage account that you provide. You can then direct these logs into a repository of your choice (such as a database, an online analytical processing (OLAP) system, or a map-reduce system) to analyze the information and produce reports. As an example, you could identify who is accessing your RMS-protected data. You can determine what RMS-protected data people are accessing, and from what devices and from where. You can find out whether people can successfully read protected content. You can also identify which people have read an important document that was protected.
  • Monitor for abuse.
    RMS logging information is available to you in near-real time, so that you can continuously monitor your company’s use of RMS. 99.9% of logs are available within 15 minutes of an RMS-initiated action.
    For example, you might want to be alerted if there is a sudden increase of people reading RMS-protected data outside standard working hours, which could indicate that a malicious user is collecting information to sell to competitors. Or, if the same user apparently accesses data from two different IP addresses within a short time frame, which could indicate that a user account has been compromised.
  • Perform forensic analysis.
    If you have an information leak, you are likely to be asked who recently accessed specific documents and what information a suspected person accessed recently. You can answer these types of questions when you use RMS and logging, because people who use protected content must always get an RMS license to open documents and pictures that are protected by RMS, even if these files are moved by email or copied to USB drives or other storage devices. This means that you can use RMS logs as a definitive source of information for forensic analysis when you protect your data by using RMS.

http://technet.microsoft.com/en-us/library/dn529121.aspx

 

References:

Deploying the Azure Rights Management Connector

http://technet.microsoft.com/en-us/library/dn375964.aspx#ConfiguringServers

The Storage Team Blog
http://blogs.technet.com/b/filecab/archive/tags/file+server+resource+manager+_2800_fsrm_2900_/default.aspx

File Server Resource Manager Overview

Hyper-V Replication between two workgroup servers

Enabling Hyper-V Replica between two workgroup servers requires issuing self-signed certificates with makecert.exe and setting a registry key to bypass the revocation check.

Makecert is required because the certificate’s Enhanced Key Usage must support both client and server authentication, and the default IIS certificate CSR wizard does not include the client EKU.

Machine #1

1. Generate a root cert:
makecert -pe -n CN=PrimaryTestRootCA -ss root -sr LocalMachine -sky signature -r PrimaryTestRootCA.cer

2. Generate a self-signed cert from the root cert:
makecert.exe -pe -n CN=HV2 -ss my -sr LocalMachine -sky exchange -eku 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 -in PrimaryTestRootCA -is root -ir LocalMachine -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 HV2.cer

3. Disable the revocation checking since that won’t work on self-signed certs:

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Replication" /v DisableCertRevocationCheck /d 1 /t REG_DWORD /f

Machine #2

1. Generate a root cert:
makecert -pe -n CN=RecoveryTestRootCA -ss root -sr LocalMachine -sky signature -r RecoveryTestRootCA.cer

2. Generate a self-signed cert from the root cert:
makecert.exe -pe -n CN=HV1 -ss my -sr LocalMachine -sky exchange -eku 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 -in RecoveryTestRootCA -is root -ir LocalMachine -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 HV1.cer
(Note: even though it outputs a .cer file, it automatically inserts into the LocalMachine certificate store, so there is no additional import step)

3. Copy the PrimaryTestRootCA.cer from Machine #1 to Machine #2, then run this command: certutil -addstore -f Root "PrimaryTestRootCA.cer"

4. Copy the RecoveryTestRootCA.cer from Machine #2 to Machine #1, then run: certutil -addstore -f Root "RecoveryTestRootCA.cer"

5. Disable the revocation checking since that won’t work on self-signed certs:

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Replication" /v DisableCertRevocationCheck /d 1 /t REG_DWORD /f

6. Now you can select the self-signed certificate in replication on both servers.


Important: if you have Windows Firewall enabled, create an allow rule for TCP 443 on both servers:

netsh advfirewall firewall add rule name="Https Replica in" dir=in protocol=TCP localport=443 action=allow

 

Credits to these two blogs for helping me figure this out:

http://jsmcomputers.biz/wp/?p=360 (the only problem with this blog is that the curly quotes “” in his command-line syntax do not work; replace them with straight quotes, otherwise you get the error “CryptCertStrToNameW failed => 0x80092023 (-2146885597)”)

http://blogs.technet.com/b/virtualization/archive/2013/04/13/hyper-v-replica-certificate-based-authentication-makecert.aspx

Office 365 Message Encryption

Today Microsoft announced the general availability of Office 365 Message Encryption: http://blogs.office.com/2014/02/19/office-365-message-encryption-now-rolling-out/

This is the replacement for Exchange Hosted Encryption (EHE).  Customers who are currently using EHE will be upgraded to Office 365 Message Encryption beginning in the first quarter of 2014.

The service promises “Send encrypted email messages to anyone, regardless of the recipient’s email address”

While this is by far the biggest selling point and improvement over RMS, it comes with a slight drawback: the recipient must have an Office 365 organizational ID or a Windows Live ID associated with the email address that you send to. Although not everyone will have a Windows Live ID, you can still send the encrypted email and hope that the recipient will click the link to enroll in a new Live ID account and then decrypt the email you sent them. The enrollment process is quick and straightforward (if you can sign up for an email account, you can figure out the Live ID enrollment process).

In contrast with RMS, the end user doesn’t have a button giving direct control over which messages are encrypted. It is up to the Office 365 administrator to enforce a rule that applies the encryption. With some creativity, an administrator can let users indirectly control whether a message is encrypted by filtering on a keyword. IMHO, this limitation is also a strength, because it imposes no client requirements: all mail clients can immediately benefit from encryption, whether it is enforced by an admin or triggered by an admin-permitted keyword.

“Easily Set up Encryption”

Yes, the setup is very fast and easy for an Office 365 Administrator. Simply create a new rule in the mail flow section of the Exchange Admin Center inside the Office 365 Portal:


Select ‘Apply rights protection to messages’


There are dozens of ways you could apply encryption based on the variety of rules available. +1 for the flexibility here.


Here is an example of applying encryption to a message where the subject line contains “Encrypted” (perhaps this could be the keyword you tell your users to use when they want to encrypt an email).


The next step is where you provide the rule action, ex: Apply Office 365 Message Encryption.


Next you can configure whether to enforce the rule immediately or test it out first with a Policy Tip notification (for OWA and the Outlook clients that support Policy Tips).


The rules are particularly useful for automatically encrypting emails if they contain sensitive information such as a Social Security Number, Credit Card Number or Drivers License Number.
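As an illustrative stand-in for those sensitive-information conditions, here is a hand-rolled regex that flags U.S. Social Security numbers in a message body. Note this is only a sketch: the real Office 365 rules use Microsoft's built-in sensitive information types, not a pattern you write yourself.

```python
import re

# Matches the common ddd-dd-dddd SSN layout; illustrative only.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def needs_encryption(body: str) -> bool:
    """Return True if the message body appears to contain an SSN."""
    return bool(SSN_PATTERN.search(body))

print(needs_encryption("Per your request, my SSN is 123-45-6789."))  # True
print(needs_encryption("Order #123456789 has shipped."))             # False
```

The built-in conditions are far more robust than this (checksum logic, multiple formats, proximity keywords), which is a good reason to prefer them over custom patterns.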


That’s it for the setup! Nice and easy, right? It’s impressive that such an awesome encryption technology is so easy to set up. You could have full-blown message encryption in minutes!

Adding custom branding

You can add custom branding by connecting to Exchange Online PowerShell and using the Get-OMEConfiguration and Set-OMEConfiguration cmdlets.


 

So what about the end-user experience?

 

An email bearing an encrypted message arrives in the recipient’s inbox with an attached HTML file that lets the recipient sign in to view the encrypted message.


When opening the HTML attachment, the user has the option to view the encrypted message. You can see evidence of the custom branding which can help give credibility to the message.


Recipients follow instructions in the message to authenticate by using a Microsoft account or an organizational account. If recipients don’t have either account, they’re directed to create a Microsoft account that will let them sign in to view the encrypted message.

After authenticating, recipients can view the decrypted message and reply to it with the same style of encrypted attachment, so the whole back-and-forth remains encrypted.

For additional screen shots of the end-user experience, click here:
http://technet.microsoft.com/en-us/library/dn569287.aspx

 

How much does it cost?

For $2 per user per month you can get a complete solution for internal and external information protection: traditional Rights Management capabilities like Do Not Forward for internal users, plus the new ability to encrypt outbound messages to any recipient. If you already have an Office 365 tenant, you can try it for free by adding a trial license here:

http://office.microsoft.com/en-us/exchange/redir/XT104181503.aspx

The great news is that if you already have an E3 or E4 license, then you get Message Encryption for free!

(Note: The trial enables you to try Information Rights Management capabilities as well as the capabilities of Office 365 Message Encryption)

So do you still need Rights Management if you use Message Encryption? Perhaps… because Windows Azure Rights Management in Office 365 prevents sensitive information from being printed, forwarded, or copied by unauthorized people inside the organization. That is why Message Encryption is labeled a companion service to RMS, not a replacement.

If you are using Exchange 2013 on-premises, you can also benefit from Message Encryption by configuring hybrid mail flow.

Thank you Microsoft! This is great stuff!!

 

The official site for Office 365 Message Encryption is here:

http://office.microsoft.com/en-us/exchange/o365-message-encryption-FX104179182.aspx

IT Pro’s can get more technical information about Message Encryption here:

http://technet.microsoft.com/en-us/library/dn569286.aspx

Recommendations for reducing the impact to mobile devices during a hybrid migration to Office 365

One of the features available during a Hybrid / “rich-coexistence” migration to Office 365 is the friendly URL redirect that users can receive if they try using the old OWA URL that was pointing to the old Exchange server.

However, many people may not be aware that using that feature requires establishing a legacy namespace with Exchange 2007, as Exchange 2013 cannot proxy all workloads for Exchange 2007.

image

Reference: http://michaelvh.wordpress.com/2012/10/09/exchange-2013-interoperability-with-legacy-exchange-versions/ 

For example, owa.contoso.com would be redirected to the Hybrid server, and then it could proxy the request to a back-end legacy Exchange server.

However, if it is more important to an organization that the impact to mobile phones is reduced, then you can leave the pre-existing namespace (ex: webmail.contoso.com) intact and instead introduce a new external namespace for the Hybrid server (ex: mail.contoso.com) so that it exists side by side with the previous environment.

The benefit is that users only have to change their mobile phone setting once.

The downside with this approach is that after a user’s mailbox migration completes, if the user attempts to log on to the old Exchange OWA URL, they will get an error message that their mailbox cannot be found.

This can be mitigated with a communication plan that provides the new OWA URL so that users don’t try to go to the old one. To me, that’s a better tradeoff than making the user change their mobile device twice: once to legacy.contoso.com and a second time when their mailbox migrates to Office 365.

For example, among the things you can communicate to an end-user prior to their mailbox being migrated:

• You will get a pop-up to restart Outlook

• You will be prompted to enter credentials; use your email address as your username.

• The new URL for Outlook Web Access is http://outlook.com/contoso.com

• Follow these instructions for re-attaching your mobile device to Office 365
http://office.microsoft.com/en-us/office365-suite-help/set-up-and-use-office-365-on-your-phone-or-tablet-HA102818686.aspx

Introduction to Windows Azure Active Directory “Premium”

Windows Azure Active Directory (WAAD) “Premium” is a paid offering that unlocks additional features of WAAD. It is currently in preview and can be unlocked in the Azure Preview Portal.

[Update: WAAD reached General Availability on April 8, 2013, whereas WAAD Premium entered Preview in December 2013 and reached GA sometime later. Please post a comment if you have the GA release date of Premium.]

WAAD Premium adds these features:

  • User self-service password reset – Give your end-users the ability to reset their password using the same sign-in experience they have for Office 365.
    For more information, see Enable self-service password reset for users.
  • Group-based application access – Use groups to assign user access in bulk to SaaS applications. These groups can either be created solely in the cloud or you can leverage existing groups that have been synced in from your on-premises Active Directory.
    For more information, see Group management.
  • Company branding – Add your company logo and color schemes to your organization’s Sign In and Access Panel pages (including localized versions of the logo for different languages).
    For more information, see Add company branding to your Sign In and Access Panel pages.

  • Additional security reports – View detailed security reports showing anomalies and inconsistent access patterns.

Once you unlock this feature in the Preview Portal, sign into your Azure tenant and browse to the directory that you want to enable for Premium.

image

image

This gives you the ability to customize branding. The branding is shown when users access webmail via outlook.com/contoso.com or mail.contoso.com. For more information on branding, see Alex Simons’ post here: http://blogs.technet.com/b/ad/archive/2013/12/16/custom-branding-support-in-azure-ad-now-in-preview.aspx

SNAGHTML6cee805

Note: During the preview period, users will need to opt in by clicking on this link to view customized branding: https://login.microsoftonline.com/optin.srf

The Advanced Reports seem like they would be relevant for most security administrators to review periodically. I predict what feature request is coming next: alerting or scheduled emails of these reports =)

     

image

     

And it also unlocks the password reset feature. Right now this is an all-or-nothing toggle; however, the TechNet page for this feature says that the ability to enable it for specific users is coming soon.
image
image

     

To perform a self-service password reset

1. Go to a page that uses an organizational account. For example, go to portal.microsoftonline.com and click the Can’t access your account? link.
image

2. On the Reset your password page, enter the user ID and the captcha.
image

3. If the account is on-premises only (ADFS), then the following message will appear:
image

4. Otherwise, for cloud accounts, the user will receive a notification.

image

802.1x Wireless Authentication differences in Windows 7 and Windows 8

While rolling out WPA2-Enterprise, all Windows 8 clients could connect fine, but Windows 7 clients could not. Client-side errors in Event Viewer logged Event 8002 (Reason Code 16), “authentication failed due to a user credentials mismatch,” and the Windows NPS server logged Event 6273, “Authentication failed due to a user credentials mismatch.”

Both errors are bogus because the username and password are correct.

Client computers can be configured to validate server certificates by using the Validate server certificate option on the client computer or in Group Policy. If this box is unchecked, Windows 8 clients honor that and will not inspect the NPS server’s certificate. Windows 7 clients, however, are either more strict or have a bug: they will not authenticate if the subject name field is blank in the NPS server’s certificate, even when this check box is unchecked.

The fix was to roll out the RAS and IAS Server template on the Certification Authority per this TechNet article: http://technet.microsoft.com/en-us/library/cc754198.aspx

This is because other certificate templates might get deployed that use Server Authentication in the EKU, which makes it seem like the cert should work fine for NPS; the problem is that they may lack a value in the subject name field of the certificate. That is what generates the bogus errors about a username and password mismatch. It would have been nice if the errors had said “hey, the SSL cert on your server is missing a subject name. Go fix that!”
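A quick way to check for a blank subject name is to inspect the local machine certificate store on the NPS server. This is a hedged sketch from an elevated PowerShell prompt; the store path is the standard Personal store, and the property list shown is just one reasonable choice:

```powershell
# List certificates in the local computer's Personal store, showing the
# Subject and Enhanced Key Usage so a blank subject name stands out.
Get-ChildItem Cert:\LocalMachine\My |
    Format-List Subject, EnhancedKeyUsageList, NotAfter

# The classic certutil equivalent:
certutil -store My
```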

A few helpful netsh commands to troubleshoot wireless:

netsh wlan show profiles

netsh wlan show profile <profile name>

netsh wlan set tracing mode=yes   (try to reproduce the issue, then issue the same statement with mode=no). This will create a .CAB file with tons of good information, especially the report.html file inside the .CAB file.

How to Manage Azure with On-Premise Active Directory

When you sign up for a Windows Azure account, by default it creates an instance of Active Directory that resides only in Windows Azure, called Windows Azure Active Directory (WAAD). This is the exact same infrastructure that underlies Office 365. This blog post describes how to change Azure to leverage your existing Office 365 WAAD instance. You can then take advantage of your existing DirSync and ADFS servers to sign into the Azure Management Portal rather than using a Microsoft Account (formerly Windows Live ID).

This is ideal for large enterprise customers who desire to have all authentication performed from Active Directory, so that if administrators leave the organization, there is one place to disable the account rather than multiple places.

For a quick 10-minute video overview of how this works, I recommend watching “What is Windows Azure Active Directory”.

The first step is to sign into the Windows Azure Management Portal:

https://manage.windowsazure.com

Then click on Active Directory in the left navigation menu, and then click Add.

SNAGHTML10bff2d

You then choose ‘Use existing directory’

image

Then check the box ‘I am ready to be signed out now’

image

You will then be directed to a login page to sign in with your Office 365 organizational ID (which should authenticate you with ADFS if you have that enabled).

If you are managing your Windows Azure subscription with a Microsoft Account (formerly Windows Live ID) rather than an organizational ID, you will be prompted to confirm that you are okay granting your Microsoft Account Organizational Admin rights over your Office 365 directory.

The next step is to click on the Settings icon in the left navigation pane of the Azure Management Portal.

image

Then click on the subscription whose directory you want to change to the new Office 365 WAAD directory.

image

You can then change the directory.

image

Note: The behavior of this screen is a little different than what you may expect. For example, in the drop-down box I was expecting to see a list of all my directories so that I could select the one I wanted. Instead, it assumes you don’t want to select your existing directory, so that option won’t be listed.

Adding an Administrator

Adding an administrator is the same as before, but now you can select an Organizational ID.

SNAGHTML1a45539

That’s it – you can now sign in using ADFS to manage Azure.