A review of HITRUST’s two new service offerings for evaluating and/or certifying your AI.

AI is currently one of the most talked-about subjects in the business world. By now, we’ve all heard about it and are exploring how to integrate it into our organizations. While AI brings numerous new efficiencies and innovative products, it’s crucial for us as security professionals to consider both the risks and opportunities it presents. This is where HITRUST’s new AI assessments come into play.

Understanding AI

Before we dive headfirst into HITRUST's AI assessment options, it's important to understand some of HITRUST's terminology around AI.

Generative AI

The first type is generative AI, which "uses deep learning trained on large datasets to create content, such as written text, code, images, music, simulations and videos, in response to user prompts." With generative AI, the model is first trained on vast amounts of data, which it uses to learn patterns and relationships within that data. The model is then fine-tuned for specific applications, such as generating text or images. Finally, end users prompt the AI to create new content based on the learned patterns and their inputs.

Open-source AI

Open-source AI refers to artificial intelligence whose source code is made freely available for anyone to use, modify, and distribute. Its components, whether a fully functional system or discrete elements of one, are made available under license for end users to modify into their preferred form.

Rules-based AI

Lastly, rules-based AI is a basic AI model that uses a set of prewritten, predetermined rules to make decisions and solve problems. With rules-based AI, developers create a list of rules and facts to train the system. "An inference engine then measures the information given against these rules. Here, human knowledge is encoded as rules in the form of if-then statements." The system strictly follows its given rule set and performs only its assigned, programmed functions.

With the world of AI growing every day, there are bound to be many ways to categorize and define the AI systems and applications in use at your organization. These, however, are the main categories of AI that HITRUST has defined for your HITRUST assessment. If you have any questions about how to categorize AI systems you're considering introducing to your environment, or ones already in use at your organization, feel free to reach out!

Should I be scoping in my AI?

You may be asking yourself: how do we know whether we should scope in our AI?

  • The decision to scope in or out the AI in use at your organization will highly depend on the scoping method your organization has chosen for your HITRUST assessment.
  • The main concern is understanding how the AI system or application interacts with in-scope systems and data. For example, if the AI in use at your organization quickly references or performs functions on patient data that would be considered in-scope for your HITRUST engagement (using the follow-the-data scoping method), there is a high likelihood you would need to include that AI system in your HITRUST assessment.

As always, though, talk to your HITRUST assessor when deciding whether to include specific systems, including AI, in the scope of your HITRUST engagement.

HITRUST’s AI Risk Management Assessment

With an understanding of HITRUST's definitions around AI and how to determine whether your AI system should be in scope for your next HITRUST assessment, let's move on to the first of the two AI assessments available from HITRUST: the AI Risk Management Assessment.

  • The AI Risk Management Assessment was created for organizations considered AI users or deployers and comprises 51 risk management control requirements based on ISO/IEC 23894:2023 and the NIST AI RMF. It is not a stand-alone assessment, but rather an optional add-on to your existing e1, i1, or r2 assessment, much like an additional compliance factor (such as the HIPAA Security Rule factor).
  • This assessment targets AI-related risks such as roles and responsibilities of responsible personnel; policies, procedures, plans, and frameworks in place; shared responsibility; training data; data protections; data and AI biases; and more.
  • HITRUST understands that this assessment may very well be the first time many organizations are testing the performance of the controls they have in place related to AI. As such, the scoring of requirements added by the AI RM compliance factor will not affect the overall scoring of your HITRUST e1, i1, or r2 assessment. Instead, scoring for the AI requirements will be provided in a separate insights report to share or store as your organization sees fit.

The HITRUST AI Risk Management Assessment is a perfect add-on assessment for AI users and deployers looking to assess the AI they have in use, without putting their HITRUST compliance in jeopardy.

Cybersecurity Certification for Deployed AI Systems

If your organization is looking for a full certification of the AI in use in your environment, then the upcoming HITRUST Cybersecurity Certification for Deployed AI Systems might be right for you.

  • Unlike the AI Risk Management Assessment, the HITRUST Cybersecurity Certification for Deployed AI Systems is a tailored assessment with no more than 44 requirements added to your e1, i1, or r2 assessment (the number of requirements selected is based on your answers to three scoping questions).
  • The requirements for this assessment are still in draft form but will include a review of topics including threat management, governance and oversight, development, legal and compliance, supply chain, model robustness, access, encryption, logging and monitoring, inventorying, sanitizing methods, and resilience as they relate to your AI systems.
  • Adding the Cybersecurity Certification for Deployed AI Systems will result in a separate, dedicated certification report covering the AI requirements in your tailored object; its scoring, again, will not affect your regular e1, i1, or r2 assessment scores.

If your organization is seeking a comprehensive assessment ending in certification for the AI systems in your environment, the upcoming HITRUST Cybersecurity Certification for Deployed AI Systems could be the perfect fit.

HITRUST is in the business of continuously improving its CSF, so of course it is at the forefront of AI assessments. Whether you decide the AI Risk Management Assessment or the upcoming HITRUST Cybersecurity Certification for Deployed AI Systems is right for your organization, we here at LBMC are always available to talk you through your options or provide a value-added assessment of the controls you have in place.

The LBMC Cybersecurity team is here to support you in achieving confidence and compliance through HITRUST’s new AI-specific assessments. Our HITRUST-certified experts offer reliable guidance on how AI impacts your systems, ensuring that risks are managed, compliance requirements are met, and your AI-driven innovations are secure. From scoping your AI’s role in data interactions to certifying deployed systems, our team is ready to assist with every step of your HITRUST journey.

Reach out to LBMC to explore how we can help secure your AI initiatives and protect your organization’s future.

Content provided by LBMC Senior Cybersecurity Consultant Whitney Baker.