GenAI in Higher Education — Faculty Companion (HKUST)

Helping faculty learn and apply the HKUST CEI Guidelines and Principles on Using Generative AI

Welcome, Faculty Member

Generative AI (GenAI) is reshaping higher education faster than governance can keep pace. This companion helps you navigate HKUST's guidelines — not just by understanding the rules, but by developing principled judgement about when and how AI should support student learning.

The companion is built around the Snap-to-Solve five-step process — Snapshot, Diagnose, Triage, Determine, Communicate — plus the CRAFT governance framework, AI literacy foundations, readiness reflection, and an assessment strategy library.

GenAI policies should flow from what counts as meaningful evidence of learning, not the other way around. Use this companion as a thinking partner, not a compliance checklist.

Navigate All Modules

Your Learning Progress

0 / 7 modules explored

🏛️ CRAFT Governance Framework

Five interdependent areas for responsible GenAI adoption in higher education

Interactive CRAFT Wheel

Click a segment to explore each dimension

Dimensions: Culture · Rules · Access · Familiarity · Trust

All Five CRAFT Dimensions

How CRAFT frames this Faculty Companion: Each module in this companion connects to CRAFT. Snap-to-Solve addresses Rules at the course level. Diagnostic Grids help develop Familiarity. Assessment Strategies strengthen Culture. AI Literacies build Familiarity and Trust. The Readiness Radar reflects institutional Access and Governance.

Credit: CRAFT framework by Danny Y.T. Liu and Simon Bates. APRU Whitepaper "Generative AI in Higher Education: Current Practices and Ways Forward" (2025). Landing page | PDF

⚡ Snap-to-Solve Wizard

A five-step process to review and redesign assessments for a world where students use generative AI

Step A — Snapshot: Examine Your Assessment

Before drafting GenAI policies, examine your assessment tasks and their pedagogical rationale. Determine the Intended Learning Outcomes (ILOs), then identify what skills or knowledge your assessment is designed to foster.

Values of Inquiry

Select the values that best describe what your assessment is designed to cultivate. Clicking a value shows its definition and example questions.

Core 8 (Ellerton, 2022):

+ Show additional values

GenAI Risk Notes

Based on your ILOs and selected values, which aspects could GenAI easily simulate? Which genuinely require human judgment?

Step B — Diagnose: Map Your Assessment

Grid 1: Cognitive Demand × GenAI Resilience

Map your assessment to understand how complex the thinking is and how easily GenAI can complete the task. The top-right quadrant (High Demand + High Resilience) is the most desirable.

Axes: vertical = Cognitive Demand (Bloom's), low to high; horizontal = GenAI Resilience, low to high.

  • High Demand / Low Resilience: ideal for practicing critical AI use; GenAI can assist, but the student must evaluate deeply
  • ★ High Demand / High Resilience: most desirable; fosters deep learning and critical thinking; resilient to GenAI
  • Low Demand / Low Resilience: least desirable; consider repurposing as low-stakes formative only
  • Low Demand / High Resilience: good for verifying actual learning; difficult for GenAI, easy for humans
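Read as a decision rule, Grid 1 amounts to a small classifier. The sketch below is a minimal illustration only: the numeric self-ratings and the 0.5 cut-off are assumptions for demonstration, not values from the CEI guidelines.

```python
def grid1_quadrant(cognitive_demand: float, genai_resilience: float) -> str:
    """Classify an assessment task into a Grid 1 quadrant.

    Inputs are self-ratings in [0, 1]; the 0.5 threshold is an
    illustrative assumption, not part of the guidelines.
    """
    high_demand = cognitive_demand >= 0.5
    high_resilience = genai_resilience >= 0.5
    if high_demand and high_resilience:
        return "most desirable"                     # deep learning, GenAI-resilient
    if high_demand:
        return "practice critical AI use"           # AI assists, student evaluates deeply
    if high_resilience:
        return "verify actual learning"             # hard for GenAI, easy for humans
    return "repurpose as low-stakes formative"      # least desirable quadrant
```

For example, a task you rate (0.9, 0.8) lands in the most desirable quadrant, while one rated (0.9, 0.2) signals an opportunity to practice critical AI use.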

Grid 2: AI Leverage Potential × Required Human Agency

Evaluate whether tasks are assigned to GenAI and students appropriately. The top-right quadrant represents the most productive human-AI collaboration.

Axes: vertical = AI Leverage Potential, low to high; horizontal = Required Human Agency, low to high.

  • High AI Leverage / Low Human Agency: GenAI does the heavy lifting; the student only checks output; limited learning
  • ★ High AI Leverage / High Human Agency: both AI power and student judgement are substantial; richest outcomes
  • Low AI Leverage / Low Human Agency: low-yield automatable activity; consider dropping or replacing
  • Low AI Leverage / High Human Agency: primarily unassisted human expertise; ideal when verifying foundational skills

Grid 3: Cognitive Offloading Risk × Collaboration Depth

The bottom-right quadrant is ideal: deep human-AI collaboration with low surrender risk. Use structured offloading scaffolds to move tasks toward this quadrant.

Axes: vertical = Cognitive Offloading Risk, low to high; horizontal = Collaboration Depth, low to high.

  • High Risk / Low Collaboration: cognitive surrender risk; AI replaces human effort; passive consumption
  • High Risk / High Collaboration: superficial reliance; copying solutions without paraphrasing; illusion of mastery
  • Low Risk / Low Collaboration: strategic delegation for routine tasks; acceptable when human skills are primary
  • ★ Low Risk / High Collaboration: ideal; deep human-AI synergy, iterative human-AI loops, metacognitive engagement

Your Diagnostic Profile

Step C — Triage: Categorise Your Assessment

Select the category that best describes your assessment's current situation. This translates your diagnosis into a concrete design decision.

  • 🤖 Automation Alert: the task can be completed almost entirely by AI with little authentic thinking
  • 🚔 Policing Trap: preventing AI misuse requires burdensome monitoring that adds more busy-work than learning
  • ⚠️ Outcome Mismatch: after applying AI rules, the task no longer produces evidence of the stated learning outcome
  • 💤 Engagement Gap: sound on paper, but students are likely to engage only superficially
  • 📎 Over-Scaffold: evidence requirements are so heavy they swamp feedback and motivation
  • ✅ Clear Path: no red flags; keep with minor tweaks and a clear AI-use statement

Values of Inquiry Check

Ask yourself: "Does this assessment, as currently designed, generate evidence of the Values of Inquiry I selected in Snapshot?" If uncertain, treat this as an Outcome Mismatch.

Step D — Determine: Set GenAI Rules with the AIAS

The AI Assessment Scale (AIAS) provides five non-hierarchical levels. Select the level appropriate for your assessment — no level is inherently better than another.

Credit: AI Assessment Scale by Mike Perkins, Jasper Roe, and Leon Furze. aiassessmentscale.com | DOI: 10.53761/rrm4y757 | DOI: 10.37074/jalt.2025.8.2.15
  1. No AI
  2. AI-Assisted Planning
  3. AI-Assisted Task Completion
  4. Full AI
  5. AI Exploration

AI Usage Map

Break down AI permissions by assignment stage. This prevents the AIAS level from remaining an abstract label and removes ambiguity for students.

Assignment Stage | Permission | Notes
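A minimal usage map might be kept as structured data alongside the syllabus. In the sketch below, every stage name, permission, and note is a hypothetical example for one possible essay assignment, not a prescribed value:

```python
# Hypothetical AI Usage Map for an essay assignment; all stage names,
# permissions, and notes are illustrative examples only.
USAGE_MAP = [
    {"stage": "Brainstorming", "permission": "allowed",
     "notes": "Log prompts in an appendix"},
    {"stage": "Outline",       "permission": "allowed with acknowledgement",
     "notes": "Cite AI-suggested structure"},
    {"stage": "Final draft",   "permission": "not allowed",
     "notes": "Own words; may be verified in an oral check-in"},
]

def is_permitted(stage: str) -> bool:
    """Look up whether any GenAI use is permitted at a given stage."""
    for row in USAGE_MAP:
        if row["stage"] == stage:
            return row["permission"] != "not allowed"
    raise KeyError(f"unknown stage: {stage}")
```

Spelling permissions out per stage is what keeps the AIAS level from remaining an abstract label: a student can see at a glance that brainstorming help is fine while the final draft is their own.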

Step E — Communicate: Tell Your Students

Draft your communication statements based on your inputs above. Edit as needed, then export.

Student Partnership Tip: Invite students to discuss the proposed AIAS level at the start of semester. Explain why certain boundaries exist and ask what they would add or change. Student co-ownership increases compliance and trust.

📖 Example Walkthrough: Redesigning a Literature Review Essay

This is not a workflow to “output” a policy. It is a pedagogical thinking sequence that helps faculty exercise judgement at each step. Click each step to explore Dr. Chen’s reasoning process.

Outcome: thinking made visible. Steps around the ring: 📷 Snapshot → 🔍 Diagnose → ⚖️ Triage → 🎯 Determine → 📣 Communicate.

📊 Diagnostic Grids

Three 2×2 visual tools for diagnosing how GenAI interacts with your assessments

Y-axis: Cognitive Demand (Bloom's Taxonomy — Remember to Create)
X-axis: GenAI Resilience (Low to High)

  • High Demand / Low Resilience: Analyse/Evaluate/Create tasks that GenAI can handle well
  • ★ High Demand / High Resilience: ideal zone; deep thinking AND GenAI-resilient
  • Low Demand / Low Resilience: lowest value; GenAI can fully automate
  • Low Demand / High Resilience: verifies baseline human knowledge

Note: grid meaning depends on how you define each axis. A task's cognitive demand is measured through Bloom's Taxonomy action verbs. GenAI resilience is assessed by testing the task against current reasoning models.

Click a quadrant to see detailed guidance

Y-axis: AI Leverage Potential (how much can AI genuinely add?)
X-axis: Required Human Agency (how much student thinking is required?)

  • High AI Leverage / Low Human Agency: AI does the heavy lifting; minimal student thinking required
  • ★ High AI Leverage / High Human Agency: best of both; AI augments human judgement
  • Low AI Leverage / Low Human Agency: low-yield automatable activity; consider dropping
  • Low AI Leverage / High Human Agency: unaided human expertise; verifies foundational competence

Click a quadrant to see detailed guidance

Tri-System Theory (Shaw & Nave, 2026): GenAI functions as System 3, a third cognitive system. System 1 is fast, intuitive thinking; System 2 is slow, deliberate reasoning; System 3 is external AI cognition. Cognitive surrender occurs when System 3 overrides both: in the study, participants followed AI recommendations roughly 80% of the time, and their accuracy tracked the AI's quality rather than their own reasoning.

Y-axis: Cognitive Offloading Risk (High = cognitive surrender risk)
X-axis: Collaboration Depth (Low = transactional, High = rich iterative human-AI loops)

  • High Risk / Low Collaboration: cognitive surrender; AI replaces thinking entirely
  • High Risk / High Collaboration: superficial reliance; copying without paraphrasing
  • Low Risk / Low Collaboration: structured offloading for routine tasks; acceptable with acknowledgement
  • ★ Low Risk / High Collaboration: ideal; human-AI loops, deep collaboration, metacognitive engagement

Axis warning: grid meaning depends on how you define "offloading risk" and "collaboration depth" in your disciplinary context. These are analytical lenses, not absolute categories.

Click a quadrant to see detailed guidance

⚠ High Risk × Low Collaboration

When tasks land in the top-left quadrant (High Risk, Low Collaboration), immediate redesign is needed. Signs: students submit single final products with no intermediate work; no requirement to explain or critique AI outputs; GenAI can complete the entire task end-to-end.

Redesign moves:

  • Redesign immediately; do not rely on changes to instructions or policy wording alone
  • Add friction and checkpoints (intermediate submissions, live check-ins)
  • Shift modality: add oral component, require personal data, use live performance
  • Require students to annotate and critique AI outputs before incorporating them
  • Build reflective journaling into each stage of GenAI use
  • Use peer-AI co-revision workshops

✅ Structured Cognitive Offloading — When Offloading Enhances Learning

Not all offloading is harmful. When deliberately structured — students delegate lower-order tasks while redirecting cognitive resources toward analysis, evaluation, and reflection — offloading can actually enhance higher-order thinking.

Evidence: Research shows that a semester-long "cognitive offload instruction" model can produce significantly greater gains in critical thinking than traditional instruction, with a large effect size, when paired with metacognitive prompts and peer-AI co-revision.

Design strategies for structured offloading:

  • Require students to annotate and critique AI outputs before incorporating them, keeping System 2 active
  • Use reflective journaling at each stage of GenAI use
  • Employ peer-AI co-revision workshops
  • Add metacognitive prompts: "What did you delegate to AI? Why? What did you learn?"

Key Paradoxes this grid helps address:

  • Efficiency vs. Engagement: Students can complete tasks faster with GenAI, but this often reduces engagement with content
  • Assistance vs. Strategy Breadth: AI's persuasive outputs steer users toward first suggestions, narrowing problem-solving approaches
  • Apparent vs. Actual Higher-Order Thinking: Students may seem to practice higher-order thinking but are actually offloading it to AI — Shaw & Nave (2026) found confidence increased even when AI was systematically wrong

🎯 Assessment Strategies

A unified library of strategies for designing GenAI-aware, learning-centred assessments

🧠 AI Literacies

From operational competencies (little-al) to empowerment-oriented sociotechnical practices (Big-AL)

Understanding AI Literacy: little-al vs Big-AL

Most AI literacy frameworks treat literacy as a portable set of individual competencies — prompting, output evaluation, bias awareness, and policy compliance. This is little-al. It is necessary, but not sufficient.

Big-AL (Big AI Literacies) goes further: it names the collective practices through which educational communities govern how AI reshapes judgement, authority, and values. Drawing on sociocultural literacy theory (Street, 1984; Gee, 2015) and critical technology scholarship (Franklin, 1990), Big-AL treats AI literacy as participation in sociotechnical Discourses, not just tool competence.

little-al (operational AI competencies)

  • Understanding what GenAI is and how it works
  • Using tools effectively: prompting, evaluating outputs
  • Recognising bias, hallucinations, limitations
  • Following institutional policies and ethical guidelines
  • Citing AI-generated material appropriately

Answers: "Can individuals use AI competently?"

Big-AL (sociotechnical empowerment)

  • Perceiving AI as a sociotechnical system reshaping authority
  • Monitoring cognitive routes: offloading vs. surrendering
  • Identifying non-delegable practices and values boundaries
  • Recognising gradual disempowerment dynamics
  • Participating in collective governance of AI adoption

Adds: "Can communities govern AI-mediated cognition so agency remains human-governed?"

Key insight: Even well-trained users can drift into routine reliance and cognitive surrender when AI is fast, fluent, and rewarding. Competency frameworks teach people how to obtain high-quality outputs, but do not address how prevalent tool use interacts with cognitive architecture, motivation, or institutional incentives.

The Five Big-AL Literacies

These five literacies are non-linear and interrelated — they should be read as "attention lenses" that co-develop and bleed into one another. Click each dimension to explore.

The five literacies: System-Perception Awareness · Cognitive-Route & Output Evaluation · Values-Anchored Practice · Human-Centred Agency · Governance & Context

All Five Big-AL Literacies

Cognitive Delegation Matrix

These are best treated as context-sensitive patterns, not a developmental ladder. The same person or course can show different modes across tasks, weeks, or conditions. Click each pattern to explore its signals and faculty response moves.

  • 🔄 Constructive Offloading: productive support
  • ⚠️ Routine Reliance: habitual default
  • 🚨 Cognitive Surrender: high concern; judgement outsourced
  • 🚫 Disuse / Algorithm Aversion: rejection after a salient error
  • ⛓️ Abuse / Enforced Automation: system-imposed displacement

🎯 Design Implications for Faculty

  • Do not treat output quality checks as sufficient for learning-quality assurance.
  • Make route-level evidence visible (decisions, rationale, revisions, calibration, what was delegated).
  • Identify context-specific non-delegable practices (e.g., consequential feedback, mentoring, sensitive advising, high-stakes judgement).
  • Design for agency: require choices, justification, and AI-independent demonstrations where needed.
  • Connect classroom observations to institutional governance conversations.

Big-AL adds an empowerment lens to existing frameworks rather than replacing operational AI literacy. The question is not “skills or critique?” but “what kind of practice and judgement do these skills serve?”

⚠️ Counterpoint / Tension

Efficiency trap: A workflow can look successful by speed, satisfaction, or polished outputs while still eroding judgement, agency, or epistemic fluency.

✅ Constructive Stance

Big-AL is not anti-AI. It legitimises constructive offloading when bounded by metacognitive monitoring, values alignment, and governance clarity.

Think like an alien anthropologist: If you only observed prompts and outputs, what essential parts of learning would remain invisible in your course? Those are candidates for process-visible assessment and agency-preserving design.

📡 AI Risk Management & Readiness

Ten-dimension reflective framework for holistic institutional AI readiness

Anti-compliance safeguard: This tool is designed for holistic, reflective exploration — not scoring, benchmarking, or pass/fail assessment. Uneven profiles are normal; institutions need not be at the same level across all dimensions. There is no total readiness score.
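The anti-compliance stance can be made concrete: if a team tracked its radar profile as data, the structure itself would hold per-dimension levels and deliberately compute no total. In this sketch the dimension names are hypothetical placeholders (the real radar has ten dimensions, explored in the panels below), and only the four maturity labels come from the tool:

```python
from dataclasses import dataclass

# Maturity scale used by the radar: Emerging → Developing → Established → Mature.
LEVELS = ("Emerging", "Developing", "Established", "Mature")

@dataclass
class Dimension:
    name: str   # dimension names in `profile` below are hypothetical placeholders
    level: str  # must be one of LEVELS

    def __post_init__(self) -> None:
        if self.level not in LEVELS:
            raise ValueError(f"level must be one of {LEVELS}, got {self.level!r}")

# An uneven profile is normal; no total or average score is ever computed.
profile = [
    Dimension("Policy & Governance", "Established"),
    Dimension("Faculty Development", "Emerging"),
    Dimension("Infrastructure & Access", "Developing"),
]
```

The design choice is the point: storing levels per dimension, with nothing to sum, keeps the tool reflective rather than a benchmark.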

AI Readiness Radar

Click a dimension axis to explore reflective prompts. Adjust sliders in the panels below to update the visualization.

Maturity scale: Emerging → Developing → Established → Mature

All Ten Dimensions — Reflective Exploration

Three-Pillar Framework

Click a pillar to reveal what it explains and what institutions often miss.

Five Faculty AI Literacies (Risk- and Empowerment-Oriented)

Interrelated capacities (not sequential skills) supporting reflective, empowered engagement in teaching and learning. Click a literacy to reveal how it connects to the others.

The five faculty literacies (interrelated capacities): System-Perception Awareness · Cognitive-Route & Output Evaluation · Values-Anchored Practice · Human-Centred Agency · Governance & Context

Adaptive Governance Cycle Map

A four-stage cycle: Assess → Engage → Develop → Sustain. Click a stage to explore its focus, output, and example actions.


Faculty Development & Governance Cycle (Assess → Engage → Develop → Sustain)

  1. Assess: diagnose practices, perceptions, and risks
  2. Engage: co-design and dialogue across roles
  3. Develop: targeted capability building and redesign
  4. Sustain: institutionalise adaptive governance

📚 Resources & Support

Glossary, HKUST links, training resources, references, and feedback

CEI — Center for Education Innovation

Official HKUST GenAI guidance, policy documents, tools, and workshops.

Visit CEI ↗

HKUST GenAI Policy (2023)

Policy on Generative Artificial Intelligence for Teaching and Learning at HKUST.

CEI Policy Page ↗

HKUST GenAI Platform

Institutional access to GenAI tools for staff and students. Centrally supported and secure.

ITSO ↗

Academic Integrity & Honor Code

HKUST Registry guidance on academic integrity and the Academic Honor Code.

Registry ↗

Note: Links open in a new tab. For the most current information, always check the linked pages directly.

Key Citations

Share Feedback

This companion cannot be perfected through a top-down approach alone. Experiences and feedback from all members of the HKUST community are welcome.

Note: This sends an email via your mail client. For direct contact: cei@ust.hk