Artificial Intelligence

The use of generative artificial intelligence (AI) tools (e.g., ChatGPT, Bard, Claude) is strictly prohibited in all coursework, assessments, and clinical documentation unless explicit, written permission is granted by the course director or program faculty. This includes but is not limited to assignments, discussion posts, reflective writing, clinical decision-making exercises, and patient case documentation.

Any unauthorized use will be considered a violation of the academic integrity policy and may result in disciplinary action.

Risks of AI Use in Medical Education

1. Compromised Critical Thinking and Clinical Reasoning

PA education prioritizes the development of independent diagnostic and decision-making skills. Reliance on AI can short-circuit this process, leading to superficial learning and poor clinical judgment.

2. Inaccuracies and Hallucinations

AI-generated content may contain factual errors or fabricated references. In clinical medicine, such misinformation can mislead learners and potentially compromise patient safety.

3. Erosion of Professional Identity Formation

Reflective writing, ethical reasoning, and humanistic care are core to PA identity. AI-generated responses diminish the authenticity of self-reflection and devalue personal engagement in learning.

4. Undermining Assessment Validity

Use of AI in assessments designed to evaluate knowledge, synthesis, or professionalism skews outcomes and invalidates the learning process and performance metrics.

5. Data Privacy and Confidentiality Risks

Inputting clinical scenarios or patient data into AI systems may inadvertently breach HIPAA or FERPA guidelines, raising serious ethical and legal concerns.
