GenAI in Higher Education — Faculty Companion (HKUST)
Helping faculty learn and apply the HKUST CEI Guidelines and Principles on Using Generative AI
Welcome, Faculty Member
Generative AI (GenAI) is reshaping higher education faster than institutional governance can adapt. This companion helps you navigate HKUST's guidelines — not just by understanding the rules, but by developing principled judgement about when and how AI should support student learning.
The companion is built around the Snap-to-Solve five-step process — Snapshot, Diagnose, Triage, Determine, Communicate — plus the CRAFT governance framework, AI literacy foundations, readiness reflection, and an assessment strategy library.
GenAI policies should flow from what counts as meaningful evidence of learning, not the other way around. Use this companion as a thinking partner, not a compliance checklist.
Navigate All Modules
Your Learning Progress
0 / 7 modules explored
🏛️ CRAFT Governance Framework
Five interdependent areas for responsible GenAI adoption in higher education
Framework by Danny Y.T. Liu and Simon Bates, from APRU Whitepaper (2025)
Interactive CRAFT Wheel
Click a segment to explore each dimension
All Five CRAFT Dimensions
Credit: CRAFT framework by Danny Y.T. Liu and Simon Bates. APRU Whitepaper "Generative AI in Higher Education: Current Practices and Ways Forward" (2025). Landing page | PDF
⚡ Snap-to-Solve Wizard
A five-step process to review and redesign assessments for a world where students use generative AI
Step A — Snapshot: Examine Your Assessment
Before drafting GenAI policies, examine your assessment tasks and their pedagogical rationale. Determine the Intended Learning Outcomes (ILOs), then identify what skills or knowledge your assessment is designed to foster.
Values of Inquiry
Select the values that best describe what your assessment is designed to cultivate. Clicking a value shows its definition and example questions.
Core 8 (Ellerton, 2022):
GenAI Risk Notes
Based on your ILOs and selected values, which aspects could GenAI easily simulate? Which genuinely require human judgement?
Step B — Diagnose: Map Your Assessment on Three Grids
Grid 1: Cognitive Demand × GenAI Resilience
Map your assessment to understand how complex the thinking is and how easily GenAI can complete the task. The top-right quadrant (High Demand + High Resilience) is the most desirable.
Grid 2: AI Leverage Potential × Required Human Agency
Evaluate whether tasks are assigned to GenAI and students appropriately. The top-right quadrant represents the most productive human-AI collaboration.
Grid 3: Cognitive Offloading Risk × Collaboration Depth
The bottom-right quadrant is ideal: deep human-AI collaboration with low surrender risk. Use structured offloading scaffolds to move tasks toward this quadrant.
Your Diagnostic Profile
Step C — Triage: Categorise Your Assessment
Select the category that best describes your assessment's current situation. This translates your diagnosis into a concrete design decision.
Values of Inquiry Check
Ask yourself: "Does this assessment, as currently designed, generate evidence of the Values of Inquiry I selected in Snapshot?" If uncertain, treat this as an Outcome Mismatch.
Step D — Determine: Set GenAI Rules with the AIAS
The AI Assessment Scale (AIAS) provides five non-hierarchical levels. Select the level appropriate for your assessment — no level is inherently better than another.
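To keep the decision portable across syllabi and LMS pages, the chosen level can be held as structured data. A minimal sketch in Python, assuming the commonly published AIAS level names (this page does not list them, so treat the labels below as placeholders to verify against the version of the scale you adopt):

```python
# Minimal sketch: an AIAS decision as structured data.
# ASSUMPTION: level names follow the commonly published AI Assessment
# Scale; verify against the version your department adopts.
from dataclasses import dataclass

AIAS_LEVELS = {
    1: "No AI",
    2: "AI Planning",
    3: "AI Collaboration",
    4: "Full AI",
    5: "AI Exploration",
}

@dataclass
class AIASDecision:
    level: int       # 1-5, chosen in this step
    rationale: str   # pedagogical reason, traced back to Steps A-B

    def syllabus_line(self) -> str:
        """Render one consistent sentence for syllabi or LMS pages."""
        return (f"GenAI permission: AIAS Level {self.level} "
                f"({AIAS_LEVELS[self.level]}). Rationale: {self.rationale}")

print(AIASDecision(2, "The ILO is independent argument construction; "
                      "AI may support planning only.").syllabus_line())
```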
AI Usage Map
Break down AI permissions by assignment stage. This prevents the AIAS level from remaining an abstract label and removes ambiguity for students.
| Assignment Stage | Permission | Notes |
|---|---|---|
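One way to hold the usage map as data rather than prose, so the same source can render the table above and feed the Step E statement. A minimal sketch; the stage names and the three-way permission vocabulary are illustrative assumptions, not HKUST-mandated categories:

```python
# Minimal sketch: a per-stage AI Usage Map as plain data.
# ASSUMPTION: stage names and the Permitted/Restricted/Not permitted
# vocabulary are illustrative; adapt both to your own assessment.
USAGE_MAP = [
    # (assignment stage, permission,      notes)
    ("Brainstorming",    "Permitted",     "any tool; keep a prompt log"),
    ("Outlining",        "Permitted",     "annotate AI-suggested structure"),
    ("Drafting",         "Restricted",    "AI feedback on your own draft only"),
    ("Final writing",    "Not permitted", "submitted prose must be your own"),
    ("Proofreading",     "Permitted",     "grammar and style assistance only"),
]

for stage, permission, notes in USAGE_MAP:
    print(f"{stage:<13} | {permission:<13} | {notes}")
```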
Step E — Communicate: Tell Your Students
Draft your communication statements based on your inputs above. Edit as needed, then export.
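A minimal sketch of stitching the wizard inputs into one student-facing statement; the draft_statement helper and its wording are illustrative assumptions, not the companion's export format:

```python
# Minimal sketch: draft a student-facing GenAI statement from the
# AIAS decision and usage map defined in Step D. Edit before
# publishing, as the wizard advises.
def draft_statement(level: int, level_name: str,
                    usage_map: list[tuple[str, str, str]]) -> str:
    lines = [f"GenAI use in this assessment is set at AIAS Level "
             f"{level} ({level_name}).",
             "Permissions by assignment stage:"]
    for stage, permission, notes in usage_map:
        lines.append(f"  - {stage}: {permission} ({notes})")
    lines.append("Unsure whether a use is permitted? Ask before you submit.")
    return "\n".join(lines)

print(draft_statement(2, "AI Planning",
                      [("Brainstorming", "Permitted", "keep a prompt log")]))
```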
📖 Example Walkthrough: Redesigning a Literature Review Essay
This is not a workflow to “output” a policy. It is a pedagogical thinking sequence that helps faculty exercise judgement at each step. Click each step to explore Dr. Chen’s reasoning process.
📊 Diagnostic Grids
Three 2×2 visual tools for diagnosing how GenAI interacts with your assessments
Y-axis: Cognitive Demand (Bloom's Taxonomy — Remember to Create)
X-axis: GenAI Resilience (Low to High)
Note: grid meaning depends on how you define each axis. Gauge a task's cognitive demand from its Bloom's Taxonomy action verbs; probe its GenAI resilience empirically by running the task through current reasoning models and judging how much they complete unaided.
Click a quadrant to see detailed guidance
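To make the quadrant logic explicit, here is a minimal sketch assuming simple 1-5 self-ratings on each axis; the midpoint cut-off at 3 is an arbitrary illustration, not part of the framework:

```python
# Minimal sketch: place a task in a Grid 1 quadrant from two 1-5
# self-ratings. ASSUMPTION: the >3 cut-off is illustrative only.
def grid1_quadrant(cognitive_demand: int, genai_resilience: int) -> str:
    demand = "High" if cognitive_demand > 3 else "Low"      # Bloom's verbs
    resilience = "High" if genai_resilience > 3 else "Low"  # model probe
    return f"{demand} Demand x {resilience} Resilience"

# Per the grid above, High Demand x High Resilience is most desirable.
print(grid1_quadrant(cognitive_demand=4, genai_resilience=2))
```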
Y-axis: AI Leverage Potential (how much can AI genuinely add?)
X-axis: Required Human Agency (how much student thinking is required?)
Click a quadrant to see detailed guidance
Y-axis: Cognitive Offloading Risk (High = cognitive surrender risk)
X-axis: Collaboration Depth (Low = transactional, High = rich iterative human-AI loops)
Axis warning: grid meaning depends on how you define "offloading risk" and "collaboration depth" in your disciplinary context. These are analytical lenses, not absolute categories.
Click a quadrant to see detailed guidance
⚠ High Risk × Shallow Collaboration
When tasks land in the top-left quadrant (High Risk, Low Collaboration), immediate redesign is needed. Signs: students submit single final products with no intermediate work; no requirement to explain or critique AI outputs; GenAI can complete the entire task end-to-end.
Redesign moves:
- Redesign immediately — do not rely on revised instructions or policy wording alone
- Add friction and checkpoints (intermediate submissions, live check-ins)
- Shift modality: add oral component, require personal data, use live performance
- Require students to annotate and critique AI outputs before incorporating them
- Build reflective journaling into each stage of GenAI use
- Use peer-AI co-revision workshops
✅ Structured Cognitive Offloading — When Offloading Enhances Learning
Not all offloading is harmful. When deliberately structured — students delegate lower-order tasks while redirecting cognitive resources toward analysis, evaluation, and reflection — offloading can actually enhance higher-order thinking.
Evidence: Research shows that a semester-long "cognitive offload instruction" model can produce significantly greater gains in critical thinking than traditional instruction, with a large effect size, when paired with metacognitive prompts and peer-AI co-revision.
Design strategies for structured offloading:
- Require students to annotate and critique AI outputs before incorporating them, keeping System 2 active
- Use reflective journaling at each stage of GenAI use
- Employ peer-AI co-revision workshops
- Add metacognitive prompts: "What did you delegate to AI? Why? What did you learn?"
Key Paradoxes this grid helps address:
- Efficiency vs. Engagement: Students can complete tasks faster with GenAI, but this often reduces engagement with content
- Assistance vs. Strategy Breadth: AI's persuasive outputs steer users toward first suggestions, narrowing problem-solving approaches
- Apparent vs. Actual Higher-Order Thinking: Students may seem to practice higher-order thinking but are actually offloading it to AI — Shaw & Nave (2026) found confidence increased even when AI was systematically wrong
🎯 Assessment Strategies
A unified library of strategies for designing GenAI-aware, learning-centred assessments
🧠 AI Literacies
From operational competencies (little-al) to empowerment-oriented sociotechnical practices (Big-AL)
Understanding AI Literacy: little-al vs Big-AL
Most AI literacy frameworks treat literacy as a portable set of individual competencies — prompting, output evaluation, bias awareness, and policy compliance. This is little-al. It is necessary, but not sufficient.
Big-AL (Big AI Literacies) goes further: it names the collective practices through which educational communities govern how AI reshapes judgement, authority, and values. Drawing on sociocultural literacy theory (Street, 1984; Gee, 2015) and critical technology scholarship (Franklin, 1990), Big-AL treats AI literacy as participation in sociotechnical Discourses — not just tool competence.
little-al (operational AI competencies)
- Understanding what GenAI is and how it works
- Using tools effectively: prompting, evaluating outputs
- Recognising bias, hallucinations, limitations
- Following institutional policies and ethical guidelines
- Citing AI-generated material appropriately
Answers: "Can individuals use AI competently?"
Big-AL (sociotechnical empowerment)
- Perceiving AI as a sociotechnical system reshaping authority
- Monitoring cognitive routes: offloading vs. surrendering
- Identifying non-delegable practices and values boundaries
- Recognising gradual disempowerment dynamics
- Participating in collective governance of AI adoption
Adds: "Can communities govern AI-mediated cognition so agency remains human-governed?"
The Five Big-AL Literacies
These five literacies are non-linear and interrelated — they should be read as "attention lenses" that co-develop and bleed into one another. Click each dimension to explore.
All Five Big-AL Literacies
Cognitive Delegation Matrix
These are best treated as context-sensitive patterns, not a developmental ladder. The same person or course can show different modes across tasks, weeks, or conditions. Click each pattern to explore its signals and faculty response moves.
- 🔄 Constructive Offloading: productive support
- ⚠️ Routine Reliance: habitual default
- 🚨 Cognitive Surrender: high concern, judgement outsourced
- 🚫 Disuse / Algorithm Aversion: rejection after a salient error
- ⛓️ Abuse / Enforced Automation: system-imposed displacement
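For faculty who want to tag classroom observations against these patterns, a minimal sketch of the matrix as data; the keys, concern labels, and signal strings are condensed illustrations of the five modes above, not the companion's full guidance:

```python
# Minimal sketch: the Cognitive Delegation Matrix as a lookup table
# for tagging observations. Strings condense the five patterns above.
DELEGATION_MODES = {
    "constructive_offloading": ("productive support",
        "lower-order work delegated; student can justify what and why"),
    "routine_reliance": ("habitual default",
        "AI is the first move for every task, with little deliberation"),
    "cognitive_surrender": ("high concern",
        "judgement outsourced; student cannot defend the output"),
    "disuse_aversion": ("rejection after salient error",
        "wholesale distrust of AI after one visible failure"),
    "enforced_automation": ("system-imposed displacement",
        "workflow forces AI mediation regardless of learning value"),
}

for mode, (concern, signal) in DELEGATION_MODES.items():
    print(f"{mode:<24} [{concern}] {signal}")
```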
🎯 Design Implications for Faculty
- Do not treat output quality checks as sufficient for learning-quality assurance.
- Make route-level evidence visible (decisions, rationale, revisions, calibration, what was delegated).
- Identify context-specific non-delegable practices (e.g., consequential feedback, mentoring, sensitive advising, high-stakes judgement).
- Design for agency: require choices, justification, and AI-independent demonstrations where needed.
- Connect classroom observations to institutional governance conversations.
⚠️ Counterpoint / Tension
Efficiency trap: A workflow can look successful by speed, satisfaction, or polished outputs while still eroding judgement, agency, or epistemic fluency.
✅ Constructive Stance
Big-AL is not anti-AI. It legitimises constructive offloading when bounded by metacognitive monitoring, values alignment, and governance clarity.
📡 AI Risk Management & Readiness
Ten-dimension reflective framework for holistic institutional AI readiness
AI Readiness Radar
Click a dimension axis to explore reflective prompts. Adjust sliders in the panels below to update the visualisation.
All Ten Dimensions — Reflective Exploration
- High Operational Readiness + Low Faculty PD may indicate capacity under-utilisation risk — infrastructure exists but practitioners may struggle to leverage it
- Strong Stakeholder Engagement tends to act as a catalyst for improvements in Teaching, Learning, and Assessment Strategies
- Gaps in Governance + High AI Literacy may create inconsistent practices that undermine trust
Note: These are interpretive patterns that tend to apply, not prescriptions about what must happen; a sketch of how such flags might be computed follows below.
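A minimal sketch of computing such flags from the radar's slider scores; the dimension keys, the 0-10 scale, and the thresholds are assumptions for illustration only:

```python
# Minimal sketch: flag interpretive patterns from radar scores.
# ASSUMPTION: dimension keys, the 0-10 scale, and thresholds are
# illustrative; the framework itself prescribes none of them.
def readiness_flags(scores: dict[str, int]) -> list[str]:
    flags = []
    if scores.get("operational_readiness", 0) >= 7 and scores.get("faculty_pd", 10) <= 3:
        flags.append("Capacity under-utilisation risk: infrastructure "
                     "outpaces practitioner development")
    if scores.get("governance", 10) <= 3 and scores.get("ai_literacy", 0) >= 7:
        flags.append("Governance gap with high literacy: inconsistent "
                     "practice may undermine trust")
    return flags

print(readiness_flags({"operational_readiness": 8, "faculty_pd": 2,
                       "governance": 2, "ai_literacy": 8}))
```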
Framework from McMinn, S. & Lu, Z. (2026). “AI Risk Management in Higher Education.” CEI, HKUST.
Three-Pillar Framework
Click a pillar to reveal what it explains and what institutions often miss.
Five Faculty AI Literacies (Risk- and Empowerment-Oriented)
Interrelated capacities (not sequential skills) supporting reflective, empowered engagement in teaching and learning. Click a literacy to reveal how it connects to the others.
Adaptive Governance Cycle Map
A four-stage cycle: Assess → Engage → Develop → Sustain. Click a stage to explore its focus, output, and example actions.
📚 Resources & Support
Glossary, HKUST links, training resources, references, and feedback
CEI — Center for Education Innovation
Official HKUST GenAI guidance, policy documents, tools, and workshops.
Visit CEI ↗
HKUST GenAI Policy (2023)
Policy on Generative Artificial Intelligence for Teaching and Learning at HKUST.
CEI Policy Page ↗
HKUST GenAI Platform
Institutional access to GenAI tools for staff and students. Centrally supported and secure.
ITSO ↗
Academic Integrity & Honor Code
HKUST Registry guidance on academic integrity and the Academic Honor Code.
Registry ↗
Key Citations
Share Feedback
This companion cannot be perfected through a top-down approach alone. Experiences and feedback from all members of the HKUST community are welcome.
Note: This sends an email via your mail client. For direct contact: cei@ust.hk