Rapid, Trusted Answers 

Redefining Clinical Decision Support

Dyna AI surfaces the information clinicians need at the point of care, improving the patient and clinician experience.

DynaMedex, DynaMed, and Dynamic Health logos, with the 2021 and 2022 Best in KLAS badges


Our Principles for the Responsible Use of Generative AI


We prioritize maintaining our users’ confidence in our information as an authoritative, evidence-based, clinical expert-validated source. We will take a judicious approach to any implementation of AI-based tools, particularly considering the experimental nature of applying generative AI to clinical diagnosis and treatment.

Any potential use of generative AI will be subject to ongoing review for bias, quality, safety, ethics, regulatory considerations, and scientific rigor. With appropriate supervision and safeguards in place, we will responsibly explore the potentially significant benefits and limitations of these tools via collaborative efforts among clinicians, technologists, subject matter experts, editors, and other stakeholders.

1. Quality: Patient safety is our top priority. Our approach to quality ensures access to trusted, evidence-based content developed by our clinical experts following our rigorous editorial process. We limit the use of generative AI tools in user-facing applications to information found in our curated content.

2. Security and patient privacy: Data are protected using best practices in data security, in accordance with HIPAA standards. Our systems are designed and monitored according to established safety principles in AI.

3. Transparency: Uses of generative AI-driven technology in our products are clearly labeled to support informed decision-making by our stakeholders. Clinical information is presented with its evidence sources.

4. Governance: Clinical experts oversee development and validation of clinical applications of generative AI-based technologies and conduct continuous monitoring for quality and usability.

5. Equity: We are committed to promoting health equity by integrating measures that identify and mitigate both algorithmic and societal bias in generative AI-driven applications, from inception through deployment, with ongoing monitoring.