
Psychology Assessment Plan Blueprint
Introduction
This case study presents my approach to designing a valid and reliable exam blueprint aligned with the specified learning outcomes of an introductory Psychology course.
My Role and Team
- My Role: Sole instructional designer responsible for the entire assessment plan, including leveraging innovative technologies.
- Stakeholder: My professor, who provided guidelines and feedback.
Way of Thinking
I leveraged backward design and Bloom’s Taxonomy to systematically align each test item with specific learning outcomes. In this project, I also integrated AI tools (ChatGPT) to generate initial drafts of test items. These AI-generated ideas provided a diverse starting point, which I refined and aligned with rigorous item-writing guidelines.
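To make that alignment auditable, each draft item can be tagged with the outcome it measures and its intended cognitive level, then checked against the coverage the blueprint calls for. The sketch below shows one way such a check could look; the SLO labels, Bloom’s levels, and items are hypothetical placeholders, not the actual course outcomes or exam content.

```python
# Illustrative sketch: the SLOs, Bloom's levels, and items below are hypothetical
# placeholders, not the actual course outcomes or exam items.
from collections import Counter

# Each draft item is tagged with the outcome it measures and its intended Bloom's level.
items = [
    {"id": 1, "slo": "SLO-1", "bloom": "Remember",   "format": "multiple-choice"},
    {"id": 2, "slo": "SLO-1", "bloom": "Understand", "format": "true/false"},
    {"id": 3, "slo": "SLO-2", "bloom": "Apply",      "format": "fill-in-the-blank"},
    {"id": 4, "slo": "SLO-3", "bloom": "Analyze",    "format": "interpretive exercise"},
]

target_slos = {"SLO-1", "SLO-2", "SLO-3"}
target_levels = {"Remember", "Understand", "Apply", "Analyze", "Evaluate"}

covered_slos = {item["slo"] for item in items}
items_per_level = Counter(item["bloom"] for item in items)

# Surface any outcome or cognitive level the draft item pool has not yet reached.
print("Uncovered SLOs:", target_slos - covered_slos)
print("Uncovered Bloom's levels:", target_levels - set(items_per_level))
print("Items per level:", dict(items_per_level))
```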
Implementing Assessments with Bloom’s Taxonomy
Skills
- Backward Design Application
- Bloom’s Taxonomy Alignment
- Assessment Blueprint Creation
- AI-Assisted Item Generation
- Reliability and Validity Enhancement
Tools


- ChatGPT
- Microsoft Suite

The Problem
Enhancing Assessment Validity
Exams in many courses fail to measure their specified learning outcomes effectively because they lack validity and reliability, leaving instructors with incomplete or inaccurate measures of student learning.
Design Story
- I identified the need for a test plan that ensures proper coverage of eight major content areas (e.g., biopsychology, social psychology) while also measuring knowledge, understanding, application, analysis, and evaluation.
Complexity
- Balancing multiple cognitive levels demanded a structured blueprint, varied item types, and tight alignment with clear student learning outcomes (SLOs).

The Designed Solution
A comprehensive test plan specifying each topic, its cognitive-level target, and the item types (multiple-choice, fill-in-the-blank, etc.).
Design Decisions
- Table of Specifications to ensure balanced coverage and direct alignment with SLOs.
- Diverse Items using multiple formats (MCQs, true/false, matching, and interpretive exercises) to address different Bloom’s levels.
- Real-World Scenarios incorporating a case study to evaluate students’ analytical and evaluative skills in practical contexts.
AI-Enhanced Design Process
- I employed AI to draft and iterate test items, which helped explore multiple approaches quickly. Although the AI provided useful first drafts, human oversight was crucial to resolve issues (such as mismatched Bloom’s levels) and ensure fidelity to the assessment guidelines.
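As an illustration of this draft-then-review loop, the sketch below shows how the drafting step could be scripted with the OpenAI Python SDK. It is not the project’s actual workflow (items were drafted in ChatGPT interactively), and the prompt wording, model name, and target outcome are assumptions for the example.

```python
# Minimal sketch, not the project's actual workflow: items were drafted in ChatGPT
# interactively. This only shows how the same draft step could be scripted; the
# prompt, model name, and outcome text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Draft two multiple-choice items for an introductory psychology exam. "
    "Target outcome: students can apply classical conditioning concepts to everyday situations. "
    "Intended Bloom's level: Apply. For each option, note why it is correct or a plausible distractor."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

draft_items = response.choices[0].message.content
print(draft_items)  # drafts are then reviewed by hand against item-writing guidelines
```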
Table of Specifications: Ensuring Balanced Assessment
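The sketch below illustrates the structure of such a table with placeholder values: each row is a content area, each column a band of Bloom’s levels, and the row and column totals make coverage gaps easy to spot. The topics and item counts shown are hypothetical, not the blueprint’s actual figures.

```python
# Illustrative Table of Specifications: the topics, cognitive-level bands, and
# item counts are placeholders, not the blueprint's actual values.
blueprint = {
    # topic: items targeting (Remember/Understand, Apply, Analyze/Evaluate)
    "Biopsychology":     (2, 1, 1),
    "Learning":          (1, 2, 1),
    "Memory":            (2, 1, 0),
    "Social Psychology": (1, 1, 2),
}
level_bands = ("Remember/Understand", "Apply", "Analyze/Evaluate")

# Row totals show coverage per topic; column totals show the cognitive-level mix.
for topic, counts in blueprint.items():
    print(f"{topic:<18} {counts}  row total = {sum(counts)}")

column_totals = [sum(counts[i] for counts in blueprint.values()) for i in range(len(level_bands))]
print("Items per level band:", dict(zip(level_bands, column_totals)))
print("Total items:", sum(column_totals))
```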

The Result
Though created as a hypothetical assignment, the final assessment plan exemplifies a thoughtful approach to measuring the breadth and depth of student learning.
- Impact: Demonstrates how careful alignment, varied question types, and the iterative integration of AI-generated ideas can address multiple cognitive levels, increasing the validity and reliability of the assessment.
- Outcome: A ready-to-use final exam blueprint with 15+ items, ensuring balanced coverage and clearly stated outcomes for future test implementations.