Monthly Archives: March 2015

Converting distribution groups to the new Office 365 “Groups”

In a previous blog post, I wrote about the value of the new Office 365 “Groups.” These are a next generation type of group that replaces the function of a traditional distribution group, and includes the benefits of a security group, along with many other rich collaboration experiences. For example, they offer a shared calendar, shared files via OneDrive, shared OneNote, and a group chat experience in OWA. You can use these groups for Azure AD SSO, and the new March preview of AAD-Connect will dirsync these groups to on-premises.

See: Office 365 “Groups” are next generation distribution lists
and Upgrading Dirsync to Azure Active Directory Connect Public Preview – March 2015 update

I was inspired to write this post after reading my colleague’s post on how to update the primary SMTP address:

Basically, when a new “Office 365 Group” is created, it gets stamped with an address, for example: [email protected]

In Keif’s blog post above, he demonstrates how to use Exchange Online remote PowerShell to update the address to match the vanity domain name, e.g. [email protected]. This improves aesthetics and mail routing.

  • Obtain a list of existing Office 365 Group mailboxes

      • Use the following one-liner to update the primary SMTP address (substitute your own group name and vanity domain):

          Set-GroupMailbox -Identity "GroupName" -PrimarySmtpAddress "groupname@yourdomain.com"

        Keif also posted a powershell script to read from a CSV file and convert the groups to the new SMTP format. Awesome!

        So this solves one part of the conversion, which is to get the groups to use the shorter SMTP domain format.

        What about the overall process itself? Let’s say you have 100 distribution groups today and you want to convert them all to Office 365 Groups. How would you go about doing this?

        Approach #1 – Create a new O365 Group and then add the existing DL as a ‘member’

        Approach #2 – Create a new O365 Group and then delete the old DL. Inform users to start using the new Group.

        Approach #3 – Create a new O365 Group, delete the old DL and then update the new O365 Group to use the old DL’s SMTP name, or add it as a secondary proxy alias

        There are tradeoffs with each approach, but in general you want to select the approach that prevents NDRs from occurring, and you want to make sure to automatically subscribe the members of the old DL to the new O365 Group so that they don’t have to take any manual action in order to start receiving new emails from the group. In a future blog post, I will walk through the end-to-end process.
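As a rough sketch of approach #3, the conversion might look like the following. This is a hypothetical outline, not a tested script: the group and domain names are placeholders, and it assumes the preview-era New-GroupMailbox cmdlet exists alongside Set-GroupMailbox in your tenant (verify with Get-Command first).

```powershell
# Hypothetical sketch of approach #3 - cmdlet availability and parameters
# assume the Exchange Online preview; verify in your own tenant first.
$dl      = Get-DistributionGroup -Identity "Sales DL"
$members = Get-DistributionGroupMember -Identity $dl.Identity

# Create the new Office 365 Group and subscribe the old DL's members
New-GroupMailbox -Name "Sales" -Alias "sales" `
    -Members ($members | ForEach-Object { $_.PrimarySmtpAddress })

# Remove the old DL, then reuse its SMTP address on the new Group
$oldAddress = $dl.PrimarySmtpAddress
Remove-DistributionGroup -Identity $dl.Identity -Confirm:$false
Set-GroupMailbox -Identity "Sales" -PrimarySmtpAddress $oldAddress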

        Update 5/20/2015: If you take approach #3, I now recommend leaving the primary SMTP address as the onmicrosoft.com address, and adding the old DL’s address as a secondary proxy address. The reason for this is that the new Office 2016 client will not display these Groups if the primary address is not an onmicrosoft.com domain name.

        VM level backups now available in Azure Backup

        As far as Azure IaaS goes, this is the biggest improvement to the platform since the ExpressRoute offering.

        The announcement is here, and I highly recommend reading it:

        The highlights:

        • With Azure Backup, you can now get application consistent backup of Windows VMs without having to shut down the VM.
        • “In order to backup IaaS VMs, the customer needs to deploy absolutely nothing”*
          Note: This is accurate insofar as you have the Azure VM Agent installed (see Prerequisites below)
        • Azure Backup truly achieves “set-and-forget” for VM backups.
        • Azure Backup does additional processing to determine the incremental changes between the last recovery point and the current VM state. By transferring and storing only the incremental changes, Azure Backup is highly storage efficient.

        Azure VM Agent Prerequisite

        • A very important prerequisite is that the Azure VM Agent must be installed. This is performed when the VM is first created, but if you uncheck the box to install the agent, then you will not be able to back it up with the new VM level backup feature.
        • If you do not have the Azure VM Agent installed, you will get an error message during the registration job step:
          "Failed to install the Azure Recovery Services extension on the selected item. VM Agent is a pre-requisite for Azure Recovery Services Extension. Please install the Azure VM agent and restart the registration operation."
        • You can manually download and install the VM Agent if it is not installed on the VM, see this article for more information:

        • The VM agent itself can be downloaded directly from (here) and is very small and takes seconds to install.
        • After manually installing the agent, it is necessary to set the ProvisionGuestAgent value to true using PowerShell or a REST call. If you do not set this value after manually installing the VM Agent, the addition of the VM Agent is not detected properly.
        • See my blog post for manually installing the VM agent for step by step instructions:

        Seeing it in Action!

        Assuming you have already set up your recovery vault, and your VMs have the VM Agent installed, there are three easy steps to start backing up VMs in Azure IaaS.


        After clicking on ‘Discover Virtual Machines’ you then click on the Discover button at the bottom of the screen.


        After discovery completes, you then click on the Register button.


        This brings you to a screen to select the VM’s that you want to protect.


        Clicking on the checkmark will spawn a register job that can be viewed on the Jobs tab. In my case, this job took 4 minutes to run.


        Now that we have a VM registered, the next step to perform is step number 3, “Protect Registered Azure Virtual Machines.”



        Select the item you want to protect


        You can then select an existing policy or create a new policy


        When the backup has taken place, you can view it under protected items.


        You can force a new backup or you can click on Restore to bring the VM back to life as a new VM standing next to the old one (it does not overwrite the existing VM).


        Restoring a backup

        A backup is only “good” if you can verify it by performing a restore. Until then, you should not trust your backups. I have learned this the hard way in the trenches =)

        It is interesting that when you restore a VM, it does not overwrite the existing VM, but it instead deploys it alongside the current VM.


        You can check the Jobs tab to see how long the restore will take. In my case, the restore took 23 minutes.


        When the restore job completes, you can view the job notes:


        To view the restored VM, I had to sign out and back into the Azure Management portal, but then I saw the restored VM amongst my other VM’s:


        This raises a practical operational question: you need to be sure to shut off the old VM before the new one starts up. In a batch environment, for example, you wouldn’t want two VMs running the same batch job (you get the idea; you need to know what your VMs are doing and coordinate properly).

        Therefore, it would be good to have an option during the restore process for Azure to automatically shut down the original VM on your behalf to tighten up the handoff, as you want to avoid having two machines with the same computer name and SID running on the network.

        For example, in my restored VM, I can see it still has the original computer name (as I would expect) and so even though the name of the VM in Azure shows as ‘MyRestoredVM’ the actual computer name maintains the original name. (This is okay behavior, but just remember we need to shut off the original VM now too). I posted this feedback on the Azure Feedback portal, please click (here) to vote if you agree and would like the Azure Product team to include this feature in a future release.


        Manually install the Azure VM Agent


        When you first deploy a Virtual Machine to Azure using the Gallery, you have the option of installing the VM Agent. You should always leave this box selected because it adds tremendous value.


        VM extensions can help you:

        • Modify security and identity features, such as resetting account values and using antimalware
        • Start, stop, or configure monitoring and diagnostics
        • Reset or install connectivity features, such as RDP and SSH
        • Diagnose, monitor, and manage your VMs
        • Back up your VMs with the new Azure VM level Backup feature (see my blog post here for more information on that feature, announced March 26, 2015).

        The agent allows you to add extensions to the VM, for more information, see this article:

        You can manually download and install the VM Agent if it is not installed on the VM, see this article for more information:

        The VM agent itself can be downloaded directly from (here) and is very small and takes seconds to install.

        After manually installing the agent, it is necessary to set the ProvisionGuestAgent value to true using PowerShell or a REST call. If you do not set this value after manually installing the VM Agent, the addition of the VM Agent is not detected properly. The following code example shows how to do this using PowerShell, where the $svc and $name arguments have already been determined:

        $vm = Get-AzureVM -ServiceName $svc -Name $name
        $vm.VM.ProvisionGuestAgent = $TRUE
        Update-AzureVM -Name $name -VM $vm.VM -ServiceName $svc

        Before you can run the PowerShell commands above, you first need to install the Azure PowerShell module and then run the Add-AzureAccount command. Then, to make sure you can see your VMs, you can run Get-AzureVM.
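To see which of your VMs report a guest agent, one option is a quick status listing. This assumes the service-management Get-AzureVM output exposes a GuestAgentStatus property, which it did in this era of the module:

```powershell
# Sign in to the subscription (Azure service-management module)
Add-AzureAccount

# List each VM with its reported guest agent status
Get-AzureVM | Select-Object ServiceName, Name, GuestAgentStatus
```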

        In this screenshot you can see I have two VMs, one with the agent installed and one without it.


        So after manually installing the VM Agent, I can then run these commands to update Azure so that it knows the agent has been installed inside the guest VM:

        $vm = Get-AzureVM -ServiceName TCTCloudService -Name MyBackupTest
        $vm.VM.ProvisionGuestAgent = $TRUE
        Update-AzureVM -Name MyBackupTest -VM $vm.VM -ServiceName TCTCloudService


        Now when I check for guest agent status, I see the same status for both VMs:


        Upgrading Dirsync to Azure Active Directory Connect Public Preview – March 2015 update

        In this blog post I am going to review the upgrade process of Dirsync to the new AAD-Connect. The March 2015 preview now makes it possible to perform an in-place upgrade from Dirsync to AAD-Connect. This entire process took 30 minutes for me in my lab environment, but your performance and time may vary because I am running a small environment on SSD hard disks =).

        Important: You must read the “Azure AD Connect Public Preview 2 Readme” file – there are too many requirements and prerequisites in that Readme file to summarize in this blog post, so please do not skip that reading.

        I also recommend reading the “New Sync features in Azure AD Connect Public Preview 2.docx”

        You can download the AAD-Connect March Preview, Readme file, and the New Sync Features document from the Connect site here:



        After the prerequisites are installed, AAD-Connect detects Dirsync and will now upgrade it in-place:


        Next, I am prompted to enter my Azure AD Administrator Credentials


        Lastly, I am ready to click Install. I recommend unchecking the box to start synchronizing after install. You can always start it manually later when you are ready.


        This part of the wizard took about 10 minutes to uninstall Dirsync and install AAD-Connect.


        Next, I clicked on the icon on the desktop “Azure AD Connect”


        I then signed in.



        I then type in an Enterprise Admin account


        If I want to connect to an additional forest, I can do that here:


        Next, I select that I want to enable password writeback. You will notice that user, group and device writeback are greyed out and not selectable. This is because we have not yet run the AD preparation steps necessary to enable those features. See the bottom of this blog post for details on enabling those features.


        And one last confirmation by clicking Install


        And one last confirmation that the installation was successful


        Enabling User, Group and Device Writeback

        In the readme file, it describes the powershell commands to run that will enable this enhanced functionality.

        For group writeback, your on-prem Exchange server must be running Exchange 2013 CU8. Also, the default Sync Rules will not add the address book attribute. Find the value from your Exchange server and add this as a custom attribute flow.

        The Initialize-ADSyncDomainJoinedComputerSync Function will initialize your Active Directory forest to sync Windows 10 domain joined computers to Azure AD. This function will need to be run on each forest to allow Windows 10 computers to authenticate against ADRS.


        The Initialize-ADSyncDeviceWriteBack Function will initialize your Active Directory forest for write-back of device objects from Azure AD to your Active Directory. It will also set up the necessary permissions on the AD Connector account. This only needs to be run on one forest even if AzureADConnect is being installed on multiple forests.

        Note: I received errors when running this command.

        The Initialize-ADSyncGroupWriteBack Function will initialize your Active Directory forest for write-back of group objects from Azure AD to your Active Directory. It will grant permissions to an AD Connector account for modifying objects in a pre-existing group WriteBack container. Please use this same container for Group WriteBack when you run the wizard. This function only needs to be run in one forest.

        I created a new organizational unit for these objects called “CloudUsersAndGroups”


        Initialize-ADSyncGroupWriteBack -GroupWriteBackContainerDN "OU=CloudUsersAndGroups,DC=thecloudtechnologist,DC=com"


        The Initialize-ADSyncUserWriteBack Function will initialize your Active Directory forest for write-back of user objects from Azure AD to your Active Directory. The users will be created with a random password so you have to reset the password in ADDS for the user to actually be able to login.

        Initialize-ADSyncUserWriteBack -UserWriteBackContainerDN "OU=CloudUsersAndGroups,DC=thecloudtechnologist,DC=com"


        Note: The Azure AD Premium feature password writeback does not work for users configured for user writeback. In other words, if you have a cloud identity, and that user is synced to the on-premises AD, then the password writeback feature will not update the newly created on-prem AD account version of the cloud identity user. I assume it would still reset the cloud identity portion.

        After running these commands, I went back to the wizard but the options were still greyed out. This may be because my AD Schema is not running Exchange 2013 CU8, so I will update my schema and then update this blog post after that is done.


        Next, read how to configure Azure AD Password Write-back on MSDN (I recommend reading all seven (7) articles under ‘Password Management’).


        In the Azure AD Tenant, I enabled the toggle “USERS ENABLED FOR PASSWORD RESET”


        And when I scroll down, I now see that Password write back service status is ‘Configured’


        What does the user experience look like for self service password reset?

        Typically, the user will click on the “Can’t access your account” link below the Office 365 sign-in page at


        Otherwise, they can also bookmark directly to the self-service password reset page:


        They will be prompted to authenticate with text message, email or phone call. You can configure which of these options you want the user to enter. The user can also register for self service password reset and populate this contact information in advance, or an administrator can pre-populate it (again, please read the MSDN articles above for more details).


        The user can then select the new password which must conform to the on-premises password policy.


        Controlling Access to Application Proxy (Optional)

        This is a follow-on post from my post on Azure Application Proxy. Assuming that you have published your first application via the Azure Application proxy, you may now want to secure it with Multifactor authentication.

        You can enable access rules to limit access to the applications you publish with Application Proxy: you can restrict access to specific groups, require multi-factor authentication, or require MFA only when the user is outside a specific network location (the external IP address of your NAT firewall).


        The first time MFA is enabled for an application published by Azure Application proxy, the user will be required to enroll in MFA.


        After enrolling, the user will receive a text message or a phone call at the phone number registered in AD. They then log on to the application with two forms of authentication (password plus phone call or text message).

        The next time they browse to the application, after authenticating with their username and password, the application will automatically send them a text message and they can then sign in after entering the SMS code sent to their smart phone.



        The intranet application hosted internally at https://intranet will then load up fine.


        Azure Application Proxy services


        Azure AD Application Proxy (AAD-AP) is a type of reverse proxy solution that enables access to web-based applications that exist on a corporate LAN, secured behind a corporate firewall.

        The benefits of using AAD-AP rather than using a traditional firewall to expose an application to external access are (1) the convenience of listing the application in the user’s Office 365 menu choices (see first screen shot below) or the Azure Access panel, and (2) the enhanced security of preauthentication using Azure Active Directory, with the option of enhancing security further with Azure Multifactor Authentication. These latter two security enhancements were previously provided by solutions such as Microsoft ISA or TMG.

        The convenience that users benefit from is having one place to access all internal web-based corporate applications, as well as over 2,400 3rd party SaaS applications. In the screen shot below you will see that amongst the Office 365 applications list, I have also configured single sign on for Facebook, Google Docs, ADP, Salesforce, and more. It is very convenient to logon once to Azure Active Directory or Office 365, then launch other applications without having to logon to those applications individually.  This amounts to huge time savings and it is really nice not having to remember 10 separate usernames and passwords! This now puts Azure AD on par with other hosted identity providers such as Okta, Onelogin or PingFederate.

        This blog post is a review of AAD-AP, a component of Azure AD Premium and Azure AD Basic.

        AAD-AP exposed application ‘My Intranet’ in Office 365

        If you don’t have Office 365 you can also use the Microsoft Azure access panel to achieve SSO (as shown below).


        The 3rd way of accessing internal applications through AAD-AP is a direct hyperlink. This is provided after configuring the application in the Azure management portal.
        For example: 
        The convention is: (Application Name)-(Azure Tenant Name), similar in concept to the O365 tenant name.
        Custom domain names are coming soon, so you will be able to have the AAD-AP name in front of your own domain name, ex: 



        Application Proxy Prerequisites

        • You must have a Microsoft Azure administrator account. If you don’t have one, you can get a 30 day trial here.
        • You must have an Azure AD Premium license. For more information, see Getting Started with Azure AD Premium. You can also get a 30 day trial to evaluate this as well.
        • A server running Windows Server 2012 R2 or Windows 8.1 or higher on which you can install the Application Proxy Connector. The server must be able to send HTTPS requests to the Application Proxy services in the cloud, and it must have an HTTPS connection to the applications that you intend to publish.
        • The server running the connector must be able to make outbound connections to the Application Proxy domain and subdomain on the following TCP/IP port ranges:
          443, 20200-20210, 9352, 10100-10120, 8080, 9090 and 9091. For a description of what these ports are used for, see:
            • There are no inbound ports required, because the Azure Application Proxy service (ApplicationProxyConnectorService.exe) initiates a reverse tunnel from the VM out to Azure.
        • If the web application requires Windows Integrated Authentication, then the machine where the connector is installed must be joined to the domain.
        • For Windows Integrated Authentication, the UPNs in Azure Active Directory must be identical to the UPNs in your on-premises Active Directory in order for preauthentication to work. Make sure your Azure Active Directory is synchronized with your on-premises Active Directory.
        • For accessing applications remotely, the application that you are proxying cannot send the user any 302 redirects, because those will most likely contain internal server names that are not reachable externally. See the scenario below that happened when I first tried to publish the Lync control panel. The fix for the Lync scenario was to publish the actual path to the CSCP virtual directory, thanks to the new Path Publishing feature that was announced on March 11th, 2015 on the Azure Application Proxy blog here.
        • In my testing, AAD-AP worked with Windows machines running Internet Explorer or Chrome, iOS devices and Mac OSX (Safari and Chrome). The official support statement says that the Access Panel Extension is available for Internet Explorer 8 and later, Chrome, and Firefox browsers.
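One way to spot-check the outbound-port prerequisite above from the connector server is Test-NetConnection, which ships with Windows Server 2012 R2 / Windows 8.1 and later. The endpoint name below is a placeholder; substitute the Application Proxy endpoint for your tenant:

```powershell
# Placeholder endpoint - use the Application Proxy host name for your tenant
$proxyHost = "proxy.example.com"

# Probe a sample of the required outbound ports
foreach ($port in 443, 8080, 9090, 9091, 9352) {
    Test-NetConnection -ComputerName $proxyHost -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```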

        Microsoft AAD Application Proxy Connector

        The connector is a small 4 MB file that can be downloaded from the Azure Management Portal when configuring a new application that relies on Application Proxy.

        You should install the connector software on a VM that has HTTP/S access to the application that you want to publish, and outbound access to Azure.

        Installing the connector is quick and painless. The only information you are prompted to provide is the Azure tenant administrator username and password.





        It installs two services. The main service runs under Network Service because it must be able to interact with other servers to perform the reverse proxy, and optionally to perform Kerberos delegation for web-based applications that require Windows authentication.


        Configuring the Connector for Windows Integrated Authentication

        Let’s say you want to publish the Lync Control Panel through the Azure Application Proxy.

        Since the Lync Control Panel requires Windows Integrated Authentication, we need to configure the Active Directory Computer object for delegation. For step by step details, see this Microsoft article, otherwise I will cover the high level steps here.

        Since I did not have a service principal name (SPN) for my Lync simple URL for administration, I had to create it first. Before creating an SPN, it is good practice to check whether it already exists. You can query for an existing SPN with the ‘setspn’ command, run from a regular command shell on any server (domain admin privileges are required).
        setspn -Q http/
        Note: The SPN format is a little odd because there is a single forward slash, whereas you would normally expect to see a colon followed by two slashes.

        To create the SPN, I ran this command:
        setspn -S http/ tctfe01
        (Note: The Lync ‘standard edition’ front-end server name was TCTFE01. For Enterprise editions, you will use a service account rather than an individual computer name). You can use the -Q or -L parameters to verify that it registered correctly.
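Putting the setspn steps together with fully hypothetical names (admin.contoso.com standing in for the Lync simple URL, and tctfe01 for the Standard Edition front-end computer account):

```
REM Check whether the SPN already exists
setspn -Q http/admin.contoso.com

REM Create it against the front-end computer account
setspn -S http/admin.contoso.com tctfe01

REM List the SPNs registered on that account to verify
setspn -L tctfe01
```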

        After the SPN has been created, the next step is to configure the Active Directory Computer object for delegation. Find the computer object for the machine where you installed the Connector software in Active Directory Users and Computers. In my case, my connector was installed on a server named “HV1” so I found that object and on the delegation tab, I added the computer name of my Lync standard edition front end server “TCTFE01.” For Enterprise Edition deployments, you would configure delegation on the relevant service account instead.


        This allowed me to find the SPN record I had created


        Now you can add the SPN into the Azure Management portal:


        You should now see the Lync Control panel in the Office 365 list of applications.

        What I found out next helped me to understand more about the features and limitations of Azure Application Proxy.

        When I published the simple URL for the Lync control panel (‘’) I found that it was working internally but failed to work externally. After some Fiddler analysis I discovered that it was working internally because Lync web services sent an HTTP 302 redirect to the internal DNS host name of the Lync standard edition front-end server, along with the virtual directory of the control panel, ex: (‘’). Since the internal host name is not published in external DNS, nor is there a firewall rule permitting this, it failed externally.
        So the Azure Application Proxy was faithfully sending the 302 redirect back to the external user; nothing was broken, per se. This illustrates one requirement to be aware of: check to make sure that the application you want to proxy does not issue 302 redirects to internal host names.

        HTTP/1.1 302 Redirect
        Content-Length: 168
        Content-Type: text/html; charset=UTF-8
        Server: Microsoft-IIS/8.0 Microsoft-HTTPAPI/2.0
        X-Powered-By: ASP.NET

        <head><title>Document Moved</title></head>
        <body><h1>Object Moved</h1>This document may be found <a HREF="">here</a></body>

        The workaround that I found was to use the new path publishing feature (thanks, Microsoft Application Proxy team!). This enabled me to publish the full path where the 302 redirect was sending the user to (‘’). This can only be set up when the application is first configured, so if you forgot to add /cscp in the beginning, you will need to delete the app and start over.


        In my case, I didn’t have /cscp/ from the beginning, so I had to delete my app and start over like this:


        There was already an existing SPN record, so I did not have to create another SPN for, but I did have to configure computer delegation on the HV1 computer account for http/ instead of http/

        So even though the internal dns name of my Lync server is not published externally, this shows the power and benefit of Azure Application Proxy because I can now access the Lync administrative control panel from anywhere.




        1. The connector software is not highly available, so make sure to install it on a virtual machine that benefits from high availability at the hypervisor layer (ex: vmotion or live migration).

        2. Since this is such a new service, I am not sure what the scalability and performance of the connector service would be in a production environment. So you would need to perform your own performance testing. Consider using Azure’s Visual Studio Cloud Load Testing

        3. In the next blog post in this series, I describe the new conditional access feature using Azure’s multi-factor authentication. See Controlling Access to Application Proxy (Optional)

        Shrinking SQL Log files in an Availability Group Cluster or Database Mirror

        A very common problem that I see time and time again is the Log file growth of Microsoft SQL Server .LDF files.

        This problem can cause service outages when a hard disk is filled up completely by these massive LDF files.

        The problem happens when a SQL Server Database is configured for Full Recovery mode (often the Default). In Full Recovery mode, the SQL Log files (.LDF files) must be backed up themselves, in addition to backing up the SQL Database. Many people get confused and think they only have to backup the SQL Database file.

        Solving the problem

        Ideally, you should start backing up the SQL Log files. They are there for a reason, and full recovery mode is awesome because it allows you to restore a database to a point in time; specifically, to any point between the last full database backup and the most recent transaction log backup. So if you perform a SQL full backup at 8pm nightly, and a SQL transaction log backup the next day at 12:00 noon, then you can restore to any point in time up until 12:00 noon.
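Continuing the 8pm/noon example, the point-in-time restore sequence looks roughly like this. The database name, file paths, and timestamp are hypothetical:

```sql
-- Restore last night's full backup, leaving the database
-- in a state that can accept further log restores
RESTORE DATABASE MyDatabase
    FROM DISK = N'D:\Backups\MyDatabase_Full.bak'
    WITH NORECOVERY;

-- Apply the noon log backup, stopping at a chosen point in time
-- and bringing the database online
RESTORE LOG MyDatabase
    FROM DISK = N'D:\Backups\MyDatabase_Log.trn'
    WITH STOPAT = N'2015-03-26 11:45:00', RECOVERY;
```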

        If your recovery point objective (RPO) allows you to lose up to a day’s worth of data, and you are okay with restoring only to the previous night’s full backup, then by all means, change your database recovery mode to Simple and avoid the hassle of backing up the SQL Transaction logs altogether!

        You might say, wait, this is all well and good, but I have a problem right now that I am trying to solve. My .LDF files have filled up my hard disk, and I need to clear them out now! First, before you proceed, it is important to understand why the logs are growing, otherwise you may find yourself having to continuously repeat this procedure. Log growth is normal when there are lots of write transactions into the database. The solution is to backup the transaction logs more frequently so that they are logically truncated, and that can prevent the physical file from growing too large.

        First, find out if your database is in an Availability Group or a Database Mirror, because your options are limited in that case. If your database is not in an AG or DM, then just switch the recovery model to simple, shrink the Log file using SQL Management Studio, then if needed, switch the recovery model back to full. This method is the quickest, but you lose the ability to restore to a point in time from the last full backup, so perform this at your own risk. In fact, all advice on this blog is for educational purposes, and I provide no warranty, and I assume no responsibility if you follow any of my advice. =) If you have available disk space, you can always backup the SQL transaction log first before switching the recovery model from full to simple.

        Okay, so assuming you need to shrink a log file that is in an AG or DM, then the only method I have found that works is to perform the following (again, use at your own risk):

        1. Identify the culprit log files by running this query in SQL Management Studio:

        In my case, this showed two databases with log files > 65GB.
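One query that returns this kind of information (log file name and size per database) is sketched below; sys.master_files reports size in 8 KB pages, so the arithmetic converts it to gigabytes:

```sql
-- Log file sizes per database, largest first
SELECT DB_NAME(database_id)      AS database_name,
       name                      AS logical_file_name,
       size * 8.0 / 1024 / 1024  AS size_gb
FROM sys.master_files
WHERE type_desc = 'LOG'
ORDER BY size DESC;
```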


        2. Next, backup the Log file to free up space within the file (logically/virtually). Ideally, if you had enough disk space, you would backup the log file to an actual file somewhere. Otherwise, if you are okay with an RPO of 24 hours (to your last full backup) then you can backup to a null device (great blog article here describing this method, please heed the disclaimers).

        BACKUP LOG myDatabaseName TO DISK='NUL:'

        Note: Technically you should be able to run this command against the primary replica or the secondary replica, and the log file will be truncated in both places according to this blog article.

        3. Next, verify whether the log file is in a state that will allow shrinking. If the Status is '2', then you will need to proceed to step 4; if the Status is '0' (zero), then you can skip to step 5.

        USE myDatabaseName;
        DBCC LOGINFO;


        4. This step will reset the log file so that you can physically shrink it in step 5. Again, this step assumes that you are okay with a 24-hour RPO, as you will only be able to restore to your last full backup.  I've worked with enough DBAs to know that if I don't add these disclaimers at each step, they will certainly spam the comments with 'don't ever do this step'. =)

        DBCC SHRINKFILE (myDatabaseName_Log, EMPTYFILE);

        Next, re-run step 3 (DBCC LOGINFO) and verify that the Status is now 0 instead of 2. If it is, then proceed to step 5; otherwise re-run steps 2 and 4.
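        Steps 2 through 4 can be run together as a single batch; repeat it until the Status column comes back 0 (a sketch, using the same placeholder names as above):

```sql
-- One iteration of the backup/empty cycle. Repeat until Status = 0.
BACKUP LOG myDatabaseName TO DISK = 'NUL:';       -- step 2: logically truncate
DBCC SHRINKFILE (myDatabaseName_Log, EMPTYFILE);  -- step 4: empty the file
DBCC LOGINFO ('myDatabaseName');                  -- step 3: check the Status column
```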

        5. Now that the transaction log has been backed up and emptied, you can physically shrink the size of the log file on disk with this command:

        DBCC SHRINKFILE (myDatabaseName_Log, 500);   -- This would physically shrink the log file to 500 megabytes.

        Important: you can only shrink files against the primary replica. The good news is that once you shrink the primary, the physical size of the secondary replicas will shrink too, so you only need to do this in one place.

        Hint: DBCC OPENTRAN shows whether there are open transactions that could block the shrink operation.
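        For example, run it in the context of the affected database:

```sql
USE myDatabaseName;
DBCC OPENTRAN;  -- reports the oldest active transaction, if any
```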

        Hint #2: If the log files still will not shrink, check to make sure that the secondary replica database is not marked as Suspect. In that case, you will need to manually remove the suspect database from the secondary first before the shrink operation will work.

        Note: Before settling on a target size like 500 MB, you may want to consider how much of the log file is actually in use; otherwise the shrink operation will not work. Also consider allowing the log file to be about 25% of the size of the physical database file (.MDF), because otherwise, when the log has to grow again, active transactions are blocked during the growth, and that will cause latency within applications (imagine users complaining).

        You can determine how much of the log file is in use by running this query:

        USE myDatabaseName;

        SELECT name, size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB
        FROM sys.database_files;

        So to determine the size to shrink the log file to, subtract the AvailableSpaceInMB value from the total log size reported by the command DBCC SQLPERF(LOGSPACE). Then add some cushion so that future physical log growth does not block transactions from occurring.
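        As an illustration with made-up numbers: if DBCC SQLPERF(LOGSPACE) reports a 65,536 MB log and the query above returns 64,000 MB available, only about 1,536 MB is actually in use, so a target of roughly 2,000 MB leaves some headroom:

```sql
-- Target chosen from the worked example above; substitute your own numbers.
DBCC SHRINKFILE (myDatabaseName_Log, 2000);
```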

        Azure AD Federated SaaS Apps

        Thanks to Aaron Smalser on the Microsoft Azure product team, I was pointed in the direction of this page that allows me to see the list of Azure AD pre-integrated apps that support SAML or WS-Federation. Thanks Aaron!

        As of this writing on 3/11/15, there are now 105 applications that support SAML or WS-Federation with Azure AD. That is double what was available on September 2nd 2014 (per Alex Simon’s blog article here). 

        So, out of the 2,460 total applications available in the gallery, 2,355 support "Password Single Sign-On", which is what I would describe as password vaulting, and 105 support federated sign-in. The latter is the more optimal experience, in my opinion.

        So in the 191 days between 9/2/2014 and 3/11/2015, Microsoft has added 55 new applications that support SAML or WS-Federation. That is about 1 new application every 3.5 days. That is impressive!

        1    15Five
        2    Abintegro
        3    Adaptive Suite
        4    Adobe EchoSign
        5    AnswerHub
        6    AppDynamics
        7    ArcGIS
        8    Ariett Purchase & Expense
        9    Ariett Touch
        10    AvePoint Meetings
        11    BambooHR
        12    Bime
        13    BlueJeans
        14    Bonusly
        15    Boomi
        16    Box
        17    Canvas
        18    Central Desktop
        19    Cisco Webex
        20    Citrix GoToMeeting
        21    Citrix ShareFile
        22    Clarizen
        23    ClickTime
        24    CloudBees
        25    Colibri
        26    Concur
        27    Cornerstone OnDemand, Inc.
        28    Coupa
        29    Docusign
        30    Dream Broker Studio
        31    Dropbox for Business
        32    e-Builder
        33    Egnyte
        34    Envoy
        35    EventBuilder
        36    FreshDesk
        37    Freshservice
        38    Gigya
        39    Google Apps
        40    Greenhouse
        41    Huddle
        42    IdeaScale
        43    Infolinx
        44    Innotas
        45    InsideView
        46    Intacct
        47    ITRP
        48    Jitbit Helpdesk
        49    Jive
        50    Kintone
        51    Kontiki
        52    Kudos
        53    LogicMonitor
        55    Mimecast Admin Console
        56    Mimecast Personal Portal
        57    Mindflash
        58    Mozy Enterprise
        59    MyDay
        60    NetDocuments
        61    New Relic
        62    OfficeSpace Software
        63    PagerDuty
        64    Panopto
        65    Panorama9
        66    Picturepark
        67    Projectplace
        68    Rally Software
        69    Replicon
        70    RunMyProcess
        71    Salesforce
        72    Salesforce Sandbox
        73    Samanage
        74    Sciforma
        75    ScreenSteps
        76    ServiceNow
        77    ShiftPlanning
        78    SmarterU
        79    Smartsheet
        80    SpringCM
        81    SuccessFactors
        82    SumoLogic
        83    SumTotalCentral
        84    Syncplicity
        85    TalentLMS
        86    Team Org Chart
        87    TeamSeer
        88    ThirdLight
        89    Thoughtworks Mingle
        90    ThousandEyes
        91    Timestamp
        92    UserVoice
        93    Wikispaces
        95    Workday
        97    xMatters OnDemand
        98    Zendesk
        99    Zoho Mail
        100    Zoom
        101    Zscaler
        102    Zscaler Beta
        103    Zscaler One
        104    Zscaler Two
        105    Zscaler ZSCloud