Learning Objectives by Activity
This table organizes the course activities by date and identifies the key learning objectives for each session.
Date | Week | Activity | Learning Objectives |
---|---|---|---|
Week 1 | |||
Wed Sep 11 | W1D1 | Course Introduction | • Articulate personal goals and interests in human-centered AI • Understand course structure and collaborative approach • Analyze the current AI landscape critically • Establish community norms for AI use in learning |
Fri Sep 13 | W1D2 | Vibe Prototyping | • Create rapid AI-powered prototypes using vibe-coding tools • Design user testing scenarios for interactive systems • Practice iterative design thinking with immediate user feedback • Reflect on the prototyping process and user experience • Understand project logistics and team formation |
Week 2 | |||
Mon Sep 16 | W2D1 | Prompting | • Understand LLM conversation patterns and system prompts • Design effective prompts for specific tasks • Implement structured outputs and function calling • Apply context engineering techniques • Evaluate prompt effectiveness systematically |
Wed Sep 18 | W2D2 | Course Advisor Bot | • Build end-to-end RAG (Retrieval-Augmented Generation) systems • Implement structured data models with Pydantic • Practice “owning your control flow” in AI systems • Design reliable interfaces between system components • Apply agentic patterns with controlled interactions |
Fri Sep 20 | W2D3 | Testing Email Feedback Bots | • Measure and compare human vs. LLM evaluation reliability • Design evaluation prompts and criteria • Understand rating variance and measurement challenges • Plan deployment evaluation strategies • Create regression testing approaches for AI systems |
Week 3 | |||
Mon Sep 23 | W3D1 | Review & Project Work | • Synthesize key concepts: prompts as programs, tool calling, evaluation challenges • Apply learned techniques to project development • Practice collaborative problem-solving • Reflect on AI tool usage for learning and development |
Wed Sep 25 | W3D2 | Project Work | • Develop specific, narrow use cases for human-centered AI • Apply human-AI interaction guidelines • Iterate on project proposals based on feedback • Submit detailed project proposals |
Fri Sep 27 | W3D3 | Testing Course Advisor Bots | • Conduct stakeholder analysis for AI systems • Identify embedded values and assumptions in technology design • Perform systematic bias analysis and red teaming • Evaluate AI systems across multiple dimensions (correctness, performance, fairness) • Apply critical analysis frameworks to project work • Reflect on overlooked stakeholder needs and systematic problems |
Key Learning Themes
Throughout these activities, students develop competencies in:
- Technical Implementation: From basic prompting to complex RAG systems with structured outputs
- Human-Centered Design: Rapid prototyping, user testing, and stakeholder analysis
- Critical Evaluation: Systematic testing, bias analysis, and reliability measurement
- Ethical Considerations: Understanding embedded values, fairness concerns, and broader social impacts
- Collaborative Problem-Solving: Peer testing, team projects, and iterative improvement
Assessment Alignment
These learning objectives align with the course’s project-based assessment approach, where students apply these skills to develop and critically analyze their own human-centered AI system throughout the semester.
Future Learning Objectives (Weeks 4-7)
Based on the course syllabus and forward-looking statements in course materials, the following learning objectives are planned for the remaining weeks:
Week 4: Local Models and Advanced AI Techniques
- Local LLM Setup: Install and run local language models using Ollama
- Model Comparison: Compare capabilities and constraints of different model sizes
- Advanced API Features: Explore multimodal I/O, reasoning models, and workflow orchestration
- Model Context Protocol (MCP): Understand structured tool calling and agent frameworks
Week 5: Design Thinking and User Research
- Jobs-to-be-Done Framework: Apply user-centered research methodologies
- User Interview Techniques: Practice effective user research methods (including how not to interview)
- Human Flourishing Frameworks: Evaluate technology’s impact on human wellbeing using ethical frameworks
- Project Development: Advance critical analysis and technical evaluation components
Week 6: Advanced Evaluation and Industry Practices
- Industry Evaluation Methods: Analyze how companies use AI evaluations for model switching decisions
- LLM-as-Judge Techniques: Implement sophisticated evaluation frameworks
- Goodhart’s Law and Measurement: Understand limitations of quantitative evaluation
- Application Case Studies: Examine real-world AI implementations using Sloan Management Review cases
- Historical Perspectives: Study human-AI interaction evolution (ELIZA) and automation theory
Week 7: Project Presentations and Reflection
- Public Presentation: Present final projects to broader campus community
- Portfolio Development: Create professional portfolios of project work
- Peer Feedback: Provide and receive constructive feedback on projects
- Course Reflection: Synthesize learning across technical, ethical, and design dimensions
- Future Applications: Identify how course concepts apply to ongoing work and career development
Additional Planned Topics
- Automation Levels: Understand different levels of automation in systems like autonomous vehicles
- Over-reliance Problems: Recognize and mitigate problems of automation dependency
- RAG Security: Address security risks in retrieval-augmented generation systems
- Truth and Misinformation: Analyze AI systems’ relationship to truth and factual accuracy
- Red Teaming and Defense: Advanced techniques for identifying and mitigating AI system vulnerabilities
Project Component Learning Objectives
Throughout weeks 4-7, students will complete their three-part project:
Critical Analysis Component: - Analyze design decisions’ impact on human flourishing - Envision and evaluate alternative design approaches - Consider agency, capability building, privacy, and relationship to work/learning
Technical Evaluation Component: - Build functional toy models of AI systems - Design and conduct quantitative evaluations - Document performance analysis and system comparisons
User-Centered Design Component: - Develop testable prototypes incorporating user feedback - Iterate based on user testing results - Reflect on how building reveals blind spots in original analysis