Shortly after Microsoft took a victory lap publicizing its defense against a mega-scale DDoS attack, the company's Azure service was hit with an arguably equally damaging incident.
Microsoft's Azure customers fell victim to an eight-hour outage. The global Azure outage affected virtual machine services as well as other dependent products. Starting around 5 a.m. UTC, or as early as midnight on the east coast of the US, customers who attempted to "perform service management operations, meaning start, create, update or delete," Windows Virtual Machines were unable to do so, according to Microsoft's Azure status history page.
In addition, Azure DevOps and other services dependent on Windows VMs were affected by the outage, though Microsoft has not clarified to what extent.
OnMSFT.com runs its WordPress site on an Azure VM, but one powered by Linux (Ubuntu), so we were not affected.
While investigations into the outage are still early, Microsoft has partially attributed the service blackout to a situation in which "a required artifact version data could not be queried." Similar to the Facebook outage last week, Azure's service knockout stemmed from an internal issue: as engineers used Azure Resource Manager (ARM) to migrate to a new platform, the required VMGuestAgent string was missing from the repository.
Fortunately, since the Azure outage was an internal hiccup, Microsoft was able to isolate the problem, and according to the Azure status history page, the company will work to establish the full root cause in order to head off future occurrences. While that is not an ironclad promise of guaranteed uptime on Azure, it's about as close as customers get to reassurance from Microsoft against another outage caused by the same issues.