21 August 2024
The vulnerabilities allowed researchers to reach the service's internal Instance Metadata Service (IMDS) and, from there, obtain access tokens permitting the management of cross-tenant resources. If exploited, a malicious actor could have gained management capabilities over hundreds of resources belonging to Azure customers. Tenable Research reported the issues to Microsoft immediately upon realizing the sensitive nature of the data that could be accessed.
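For context, Azure's IMDS is the well-known link-local endpoint that workloads query to obtain managed-identity access tokens. The sketch below is illustrative only: it builds a token request using the endpoint, query parameters, and required header from Azure's public IMDS documentation, and is not the researchers' actual exploit or the Health Bot Service's internal plumbing.

```python
# Illustrative sketch of a standard Azure IMDS managed-identity token request.
# The endpoint, api-version, and "Metadata: true" header come from Azure's
# public documentation; access to tokens like these is what made the reported
# issues dangerous, since the token scopes management-plane operations.
from urllib.parse import urlencode

IMDS_BASE = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_imds_token_request(resource: str = "https://management.azure.com/"):
    """Return (url, headers) for an IMDS managed-identity token request."""
    query = urlencode({"api-version": "2018-02-01", "resource": resource})
    # IMDS rejects requests lacking this header, as a basic SSRF safeguard.
    headers = {"Metadata": "true"}
    return f"{IMDS_BASE}?{query}", headers

url, headers = build_imds_token_request()
```

A token returned from this endpoint for the `https://management.azure.com/` resource can be presented to the Azure Resource Manager API, which is why leakage of such tokens translates directly into resource-management capability.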
The Azure Health Bot Service is a cloud platform that allows healthcare professionals to deploy AI-powered virtual health assistants. Essentially, the service lets healthcare providers create and deploy patient-facing chatbots to handle administrative workflows within their environments. To do this, the chatbots have some degree of access to sensitive patient information, though the data available varies with each bot's configuration.
“Based on the level of access granted, it’s likely that lateral movement to other resources in customer environments would have been possible,” said Jimi Sebree, Senior Staff Research Engineer, Tenable. “The vulnerabilities involved a flaw in the underlying architecture of the chatbot service, rather than the AI models themselves. This highlights the continued importance of traditional web application and cloud security mechanisms in this new age of AI-powered chatbots.”
Microsoft has confirmed that mitigations for these issues have been applied to all affected services and regions. No customer action is required.