Module 11: Academic Integrity & Digital Citizenship in an AI World
CurrikiStudio
Module 11 of 15 | Duration: 8–10 minutes

Integrity in an AI World

Designing an Effective AI Policy for Grades 6–12. Moving beyond “AI = Cheating” to build a culture of transparency, accountability, and digital citizenship.

Learning Outcomes

Define responsible vs. irresponsible AI use based on honesty and transparency.

Integrate AI policy into existing digital citizenship frameworks.

Explain the instructional value of disclosure and reflection requirements.

Draft policy language for AI disclosure and metacognitive reflection.

“Academic integrity still matters in an AI world—but the path forward is not to treat every use of AI as cheating. The stronger approach is to define what honest, responsible use looks like.”

Integrity as Honesty, Not Absence

Traditional concerns like plagiarism still apply. What changes with AI is the emergence of a “middle ground” between independent work and substitution.

Responsible Use

  • Using AI to clarify a complex concept or check understanding.
  • Brainstorming questions before producing original work.
  • Revision support when disclosure is provided.

Irresponsible Use

  • Submitting AI-generated work as original student thinking.
  • Using AI in prohibited assignments or hidden ways.
  • Offloading core learning the task is meant to assess.

AI as a Digital Citizenship Issue

AI policy should not stand alone. It should serve as a bridge to the ethical technology skills students already need:

  • Evaluate Sources
  • Bias Awareness
  • Respect Privacy
  • Ethical Usage

The Power of Disclosure

When students are asked to name how AI was used, integrity becomes something they practice—not just something they are warned about.

Sample Reflection Prompts:
  • “Did you use AI? If so, how?”
  • “What did the AI do vs. what did you do?”
  • “What suggestions did you keep or reject?”

Why Disclosure Matters:

  • Promotes metacognition and critical thinking.
  • Normalizes help-seeking over secrecy.
  • Provides clear documentation for fair review.

Building a Safe Disclosure Culture

“If students believe any AI mention will trigger punishment, they will stop asking what responsible use looks like. They don’t become more ethical—they become more secretive.”

  • Expected Disclosure
  • Teachable Moments
  • Fair Evidence

Ethics & Integrity Scenarios

Scenario A: Honest Disclosure

Practicing Transparency

A student uses AI for brainstorming. On submission, they include a note explaining what they kept and what they wrote themselves. Disclosure strengthens the integrity of the work.

Scenario B: Hidden Rewrite

Undisclosed Substitution

A student writes a draft and has AI rewrite it into “polished prose.” By hiding the tool’s role, the work no longer reflects the student’s true ability.

Scenario C: The Safe Question

Culture of Silence

A student avoids asking if AI can clarify a reading because they fear being labeled a cheater. They use it secretly instead of engaging in an ethical discussion.

Scenario D: Reflection

Policy in Action

A middle school adds a “reflection box” to assignments. Students become more aware of over-reliance and more thoughtful about where their own thinking ends and AI assistance begins.

Capstone Milestone 11

Define Responsible AI Use

How should your school define responsible vs. irresponsible AI use? In 3–5 sentences, describe the core expectations for disclosure, honesty, and accountability.