Learning Objectives by Activity

Note: This page is largely AI-generated. Please report any errors or omissions to the instructor.

This table organizes the course activities by date and identifies the key learning objectives for each session.

Date Week Activity Learning Objectives
Week 1
Wed Sep 3 W1D1 Course Introduction • Articulate personal goals and interests in human-centered AI
• Understand course structure and collaborative approach
• Analyze the current AI landscape critically
• Establish community norms for AI use in learning
• Explore project interests and find potential collaborators
Fri Sep 5 W1D2 Vibe Prototyping • Create rapid AI-powered prototypes using vibe-coding tools
• Design specific user tasks to test prototypes with peers
• Conduct informal usability testing and observe user interactions
• Compare predicted vs. actual user behavior
• Reflect on low-fidelity prototyping as a design exploration method
• Build community connections through collaborative testing
Week 2
Mon Sep 8 W2D1 Prompting • Understand how LLMs work (prompts, responses, system messages, tokens)
• Design effective prompts by treating LLMs as contractors needing clear instructions
• Structure prompts to avoid prompt injection using delimiters
• Implement LLM functionality via API calls in Python
• Evaluate variability in LLM outputs due to sampling temperature
Wed Sep 10 W2D1 (cont.) Prompting • Apply context engineering techniques to provide relevant information
• Use function calling (tools) to allow LLMs to request additional information
• Identify best practices for prompt engineering
• Understand when and how to use context engineering for improved performance
Fri Sep 12 W2D2 Course Advisor Bot • Implement Agentic RAG (Retrieval-Augmented Generation) techniques
• Use structured output (Pydantic models) to constrain LLM responses
• Design and implement search systems over structured data
• Own control flow in agentic systems rather than letting LLM fully drive interaction
• Test and measure AI performance across multiple dimensions (failure rate, latency, relevance)
Week 3
Mon Sep 15 W2D3 Testing Email Feedback Bots • Measure inter-rater reliability in human evaluation of AI outputs
• Design evaluation prompts for LLMs to serve as judges
• Compare rating-first vs. reasoning-first evaluation approaches
• Quantify variability in both human and LLM evaluations
• Design regression tests for ongoing performance monitoring
• Understand the role of guidelines and rubrics in evaluation
Wed Sep 18 W3D2 Project Work • Develop specific, narrow use cases for human-centered AI
• Apply human-AI interaction guidelines
• Iterate on project proposals based on feedback
• Submit detailed project proposals
Fri Sep 19 W3D3 Testing Course Advisor Bots • Conduct stakeholder analysis to identify who is affected beyond direct users
• Recognize how technology embeds values through design choices
• Perform red teaming to uncover technical flaws and potential social harms
• Test with edge cases, adversarial inputs, and diverse user scenarios
• Identify systematic bias patterns that create predictable unfairness
• Apply the “ripple effect” method to map broader AI system impacts
Week 4
Mon Sep 22 W4D1 Local LLMs • Install and run local LLMs using Ollama
• Understand resource requirements (compute, memory, storage, disk space)
• Compare local vs. cloud LLM deployment trade-offs (cost, privacy, latency, performance, control)
• Measure LLM performance metrics (time to first token, total generation time)
• Examine model specifications (parameters, context length, licensing)
• Understand caching mechanisms (model loading, KV cache)
Wed Sep 24 W4D2 Course Progress & Student Input • Reflect on most helpful activities and interesting discussions so far
• Provide input on future course direction (activities and readings)
• Synthesize learning from prototyping, LLM programming, and evaluation
Fri Sep 26 W4D3 Design Norms & System Analysis • Apply normative design lenses (transparency, justice, trust, caring) to AI systems
• Critically analyze real AI systems (Perusall’s comment quality scoring)
• Envision and evaluate alternative design choices
• Reflect on personal growth in AI literacy
• Consider how course projects contribute to professional portfolios
Week 5
Mon Oct 1 W5D1 Recommender Systems Analysis • Examine design decisions in recommendation algorithms
• Understand how success metrics affect user behavior
• Apply “Guidelines for Human-AI Interaction” to projects
• Learn about Model Context Protocol (MCP) architecture
• Update project proposals with refined scope
Wed Oct 3 W5D2 Agents & Tool Calling • Define tools (functions) that LLMs can call to extend capabilities
• Implement agentic loops where LLMs chain multiple tool calls
• Separate tool definitions from agent logic using MCP
• Build MCP servers that expose tools via standard protocol
• Build MCP clients that connect to tool servers
• Maintain conversational context across multiple user inputs
• Design system prompts to guide agent behavior in tool usage
Fri Oct 5 W5D3 Deep Research & Goodhart’s Law • Understand Goodhart’s Law: when metrics become targets they lose validity
• Analyze system prompts and agentic loop design patterns
• Examine deep research agent architectures
• Discuss what we desire vs. what we optimize for
• Reflect on reductionism and perverse incentives in AI systems
Week 6
Mon Oct 6 W6D1 Multimodal AI & Self-Assessment • Discuss training data sources for image, video, and audio models
• Evaluate challenges in assessing multimodal AI quality
• Conduct self-assessment across all course learning outcomes
• Identify strongest and weakest areas of growth
• Continue Course Advisor v2 work
Wed Oct 8 W6D2 Peer Feedback • Conduct structured peer review using project rubric
• Provide constructive feedback to other teams
• Receive and integrate feedback on own project
• Reflect on feedback process and implementation plans
Fri Oct 10 W6D3 Debate • Distinguish strong arguments from weak arguments based on evidence
• Support claims with specific citations from course readings
• Challenge existing arguments with clarifying questions and counterpoints
• Recognize complexity in controversial questions about AI
• Build arguments that extend rather than repeat existing points
• Synthesize multiple perspectives on contentious AI issues
• Practice evidence-based argumentation
Week 7
Mon Oct 13 W7D1 Project Presentations • Present final projects demonstrating technical implementation
• Explain critical analysis of design decisions and alternatives
• Showcase user-centered design process and iterations
• Provide constructive feedback on peer presentations
Wed Oct 15 W7D2 Project Presentations & Reflection • Complete project presentations
• Synthesize learning across technical, ethical, and design dimensions
• Reflect on personal growth throughout the course
• Identify how course concepts apply to ongoing work and career development

Key Learning Themes

Throughout these activities, students develop competencies in:

  • Technical Implementation: From basic prompting to complex RAG systems, structured outputs, tool calling, local models, and agentic systems
  • Human-Centered Design: Rapid prototyping, user testing, stakeholder analysis, and peer feedback
  • Critical Evaluation: Systematic testing, bias analysis, reliability measurement, and understanding evaluation limitations (Goodhart’s Law)
  • System Analysis: Analyzing existing AI systems (recommender systems, Perusall, deep research agents) through multiple lenses
  • Normative Design Frameworks: Applying design norms (transparency, justice, trust, caring) and considering human flourishing
  • Ethical Considerations: Understanding embedded values, fairness concerns, broader social impacts, and debates about privacy, education, and work
  • Collaborative Problem-Solving: Peer testing, team projects, iterative improvement, and constructive debate

Assessment Alignment

These learning objectives align with the course’s project-based assessment approach, where students apply these skills to develop and critically analyze their own human-centered AI system throughout the semester.

Additional Activities

Throughout the course, students also complete:

Course Advisor Bot 2.0

An extended project to: - Architect conversational AI systems that maintain context across multiple turns - Design and implement MCP tools with well-defined interfaces for agents - Evaluate AI system performance using automated testing frameworks - Analyze security and trust implications of tool-based AI systems - Apply stakeholder-centered design to identify and address systematic biases

Project Component Learning Objectives

Throughout weeks 4-7, students complete a three-part project that integrates:

Critical Analysis Component: - Analyze design decisions’ impact on human flourishing - Envision and evaluate alternative design approaches - Consider agency, capability building, privacy, and relationship to work/learning

Technical Evaluation Component: - Build functional toy models of AI systems - Design and conduct quantitative evaluations - Document performance analysis and system comparisons

User-Centered Design Component: - Develop testable prototypes incorporating user feedback - Iterate based on user testing results - Reflect on how building reveals blind spots in original analysis