Generative AI Glossary for Healthcare Risk Managers


Compiled by SafeQual Health Software Team 3/18/2024

Scope: Glossary of terms for understanding and reasoning about governance, selection, architecture, and implementation of robust AI systems.



Accuracy

Probability of receiving a desired output.

Actionable intelligence

Information that can be used in decision making.

Adversarial example

Prompt designed to cause a model to produce an undesired completion, usually generated by injecting a small, seemingly insignificant perturbation into a clean example.
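As a minimal sketch of the idea, using an invented keyword classifier rather than a real model: a one-character perturbation that a human reader barely notices flips the output.

```python
def keyword_classifier(text):
    """Toy classifier (invented for illustration): flags a report as 'Fall'
    if the exact word 'fall' appears in the text."""
    return "Fall" if "fall" in text.lower().split() else "Other"

clean = "patient had a fall in the hallway"
adversarial = "patient had a fa1l in the hallway"  # one-character perturbation
```

Here `keyword_classifier(clean)` returns "Fall" while the perturbed text is misclassified as "Other"; adversarial examples against real models exploit analogous blind spots.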

Adversarial training

Using adversarial as well as clean examples in model training to improve adversarial robustness.


Agency

Capacity of a model to exhibit a degree of initiative or independence.


Alignment

Steering a model toward, or the quality of being consistent with, a group's intended goals, preferences, standards, and ethical principles; for example, safety culture or high reliability organization (HRO) principles.


Backward alignment

Ensures practical alignment of trained models by evaluating their performance in realistic environments and implementing regulatory guardrails. (see also: Forward Alignment)



Categorization

Assigning a category to a text.


Chaining

Mechanism of structuring information flow between AI components to accomplish complex tasks.


Classification

Process or technique of assigning a finite set of categories to a text that meaningfully describe its contents.


Completion

Output of a generative model in response to a prompt. (see also: Prompt)

Controlled vocabulary

A curated set of words or phrases applicable to a specific industry, application, or task.
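As a sketch, a controlled vocabulary can constrain classification output to a fixed set of categories; the category names below are invented examples, and a production system would use a trained classifier rather than substring matching.

```python
# Invented example terms; a real controlled vocabulary is curated for the domain.
CONTROLLED_VOCABULARY = ["Fall", "Medication error", "Equipment failure"]

def classify(text):
    """Assign the first controlled-vocabulary category whose name appears in
    the text, defaulting to 'Other'. A stand-in for a model-based classifier."""
    lowered = text.lower()
    for category in CONTROLLED_VOCABULARY:
        if category.lower() in lowered:
            return category
    return "Other"
```

Restricting outputs to a controlled vocabulary keeps downstream reporting and analytics consistent, even when the underlying classifier changes.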

Conversational model

A model optimized for generating completions in the context of past interactions.


Co-occurrence

The presence of distinct semantic elements in the same context.


Domain model

A model optimized for a specific industry or task.



Embedding

A high-dimensional numerical vector representation of text semantics that allows the use of numerical algorithms in text analysis. (see also: Semantics, Semantic search, Similarity)
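A toy sketch of how embeddings enable numerical comparison. The three-dimensional vectors below are invented stand-ins for model output; real embeddings have hundreds or thousands of dimensions and come from a trained model.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented toy vectors standing in for model-produced embeddings.
fall_incident = [0.9, 0.1, 0.2]
patient_fall  = [0.8, 0.2, 0.3]
billing_code  = [0.1, 0.9, 0.7]
```

With these vectors, the two fall-related texts score as more similar to each other than either does to the billing text, which is the property semantic search and similarity measures build on.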


Extraction

Extracting meaningful or relevant information from text.

Extractive summarization

Identifies and groups together significant portions of a text to form a concise summary.
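A minimal frequency-based sketch of extractive summarization; real systems use much stronger sentence-scoring methods, but the principle of selecting, rather than rewriting, source sentences is the same.

```python
import re
from collections import Counter

def extractive_summary(text, k=1):
    """Score each sentence by summed word frequency and keep the top-k
    sentences, in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    score = lambda i: sum(freq[w] for w in re.findall(r'[a-z]+', sentences[i].lower()))
    top = sorted(range(len(sentences)), key=lambda i: -score(i))[:k]
    return ' '.join(sentences[i] for i in sorted(top))

report = ("The fall happened at night. The fall caused injury. "
          "Unrelated paperwork was filed later.")
summary = extractive_summary(report, k=1)
```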



Few-shot

A prompting or learning technique using a small number of in-context examples, in contrast to none or many. (see also: Zero-shot, Fine-tuning)
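For illustration, a few-shot prompt can be assembled by prepending labeled examples to the query; the incident texts and category labels below are invented.

```python
def few_shot_prompt(examples, query):
    """Build a few-shot prompt: each (input, label) pair becomes an
    in-context example, and the query is left for the model to complete."""
    lines = [f"Report: {text}\nCategory: {label}" for text, label in examples]
    lines.append(f"Report: {query}\nCategory:")
    return "\n\n".join(lines)

examples = [
    ("Patient slipped on a wet floor near room 4.", "Fall"),
    ("Wrong dosage administered: 10 mg instead of 1 mg.", "Medication error"),
]
prompt = few_shot_prompt(examples, "Visitor tripped over a loose cable.")
```

The model's completion of the final "Category:" line is guided by the two in-context examples rather than by any fine-tuning.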


Fine-tuning

Modifying an existing foundational, pre-trained model by training it on additional industry- or context-specific data to improve its performance. (see also: Zero-shot, Few-shot, Training set)

Foundational model

A baseline model trained on an unspecialized body of knowledge. (see also: Fine-tuning, Domain model)

Forward alignment

Aims to produce trained models that follow alignment requirements. (see also: Backward alignment)


Generative AI

Techniques and algorithms that learn from existing artifacts to produce new ones.

Generative summarization

Distilling core content from long text for easier comprehension. 


Governance

Creation and enforcement of rules that ensure safe development and deployment of AI.


Grounding

Mapping completion output to available factual sources.



Hallucination

Fabricated content in completions that appears plausible and convincing but is, in fact, wrong or inaccurate.


Hallucinated citation

A special case of hallucination in which fabricated citations or references to sources are presented as fact. (e.g., ‘New York lawyers sanctioned for using fake ChatGPT cases in legal brief’)



Large language model (LLM)

A model produced from vast amounts of textual data to achieve general-purpose generation and classification capabilities. (see also: Generative AI)



Prompt

Text used as input to a generative model. (see also: Completion)

Prompt engineering

Designing and optimizing input to produce effective and accurate completions, usually through extensive experimentation.


Retrieval-augmented generation (RAG)

Incorporating trusted knowledge sources outside of the model's unspecialized body of knowledge to improve the quality of and confidence in output, usually surfaced in the form of citations and attributions.
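A minimal sketch of the RAG pattern: retrieve relevant trusted documents, then assemble them into the prompt. Real systems retrieve by embedding similarity; simple word overlap stands in here, and the documents and query are invented.

```python
def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query (a stand-in for
    embedding-based semantic search) and return the top-k."""
    q = set(query.lower().split())
    return sorted(documents, key=lambda d: -len(q & set(d.lower().split())))[:k]

def rag_prompt(query, documents):
    """Prepend retrieved, trusted sources to the prompt so the completion
    can ground its answer in, and cite, those sources."""
    sources = retrieve(query, documents)
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (f"Answer using only the sources below and cite them.\n"
            f"{context}\nQuestion: {query}")

documents = [
    "Policy on fall prevention and bed alarms.",
    "Cafeteria menu for March.",
    "Incident reporting workflow overview.",
]
prompt = rag_prompt("fall prevention policy", documents)
```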

RICE (Robustness, Interpretability, Controllability, and Ethicality)

Principles that characterize the objectives of alignment. 


ROAI

Return on investment in AI. A pun on ROI. 😊

Semantic Search

Searching using query intent and contextual meaning, in contrast to lexical properties. (see also: Embedding, Similarity)


Semantics

Study of relations between linguistic forms, concepts, and their mental representations. (see also: Embedding, Similarity)


Sentiment

Overall disposition expressed in a text.

Sentiment analysis

Identifying sentiment in a text.
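A toy lexicon-based sketch of sentiment analysis; the word lists are invented, and production systems use trained models rather than fixed lexicons.

```python
# Invented word lists for illustration only.
POSITIVE = {"improved", "resolved", "safe", "recovered"}
NEGATIVE = {"injury", "error", "delay", "harm"}

def sentiment(text):
    """Count positive vs. negative lexicon words and report the net disposition."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```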


Similarity

A degree of semantic proximity between texts, usually expressed numerically. (see also: Semantic search, Embedding)


Summarization

Creating a short, accurate, and fluent digest of a longer text that preserves its important information and overall meaning.



Temperature

A setting that controls the degree of unpredictability of completions, often determining the balance between accuracy and potential usefulness.
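Under the hood, temperature typically rescales a model's logits before sampling; a minimal sketch of that rescaling, with invented logit values:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities. Lower temperature sharpens the
    distribution toward the top choice; higher temperature flattens it
    toward uniform, making completions less predictable."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With logits `[2.0, 1.0, 0.0]`, a low temperature concentrates almost all probability on the first option, while a high temperature spreads it nearly evenly.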

Test set

A collection of sample prompts, representative of the target challenges, used to evaluate the robustness and usefulness of a model.

Training set

A balanced collection of sample prompts and their desired completions used in fine-tuning models to recognize specific problem patterns. (see also: Fine-tuning, Domain model)
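Fine-tuning data is commonly supplied as JSON Lines of prompt/completion pairs; a hypothetical sketch (the records are invented and field names vary by platform):

```python
import json

# Hypothetical fine-tuning records pairing a sample prompt with its desired completion.
training_set = [
    {"prompt": "Classify: Patient found on floor beside bed.", "completion": "Fall"},
    {"prompt": "Classify: 10 mg given instead of 1 mg.", "completion": "Medication error"},
]

# Serialize as JSON Lines: one JSON object per line.
jsonl = "\n".join(json.dumps(record) for record in training_set)
```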



Zero-shot

A prompting ability or technique that does not rely on any examples. (see also: Few-shot, Fine-tuning)

Sources for this Document:

Created and maintained by SafeQual Health.

Further Reading

For readers with a strong interest in alignment, ethics, and robustness in AI.

Ji, J., Qiu, T., Chen, B., Zhang, B., Lou, H., Wang, K., … & Gao, W. (2023). AI alignment: A comprehensive survey. arXiv preprint arXiv:2310.19852.