How to Create and Use Formative Assessments at Scale
On-Demand Professional Development Webinar
Presented by: Dr. Kristin Stephens-Martinez, Assistant Professor of the Practice at Duke University, Computer Science Department
Learn from Dr. Kristin Stephens-Martinez how to:
Design formative assessments
Understand a wide variety of question types
Introduce formative assessments into your class
Dr. Stephens-Martinez will also share an example of her own process for designing formative assessments and how she uses it in her introductory computer science course.
Formative feedback, in the broadest sense, is information about a student's performance intended to change their behavior or thinking in ways that improve their learning. This information can be useful to either the instructor or the student. A primary example is the formative assessment: a low-stakes assessment that checks a student's understanding while they are still learning a new concept. Examples of formative assessments include quizzes before a lecture based on an assigned reading (or video) and peer instruction questions during a lecture. So how does an instructor create these assessments, incorporate them into a course, and use the information gained from them?
This webinar will have three parts. First, we will discuss formative feedback with a focus on formative assessments that can be done remotely and at scale. These are mainly questions that can be autograded, such as multiple-choice or short text-response questions.
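To make the idea of an autograded formative question concrete, here is a minimal sketch of checking a short text response against a set of accepted answers. The question, accepted answers, and grading function are illustrative assumptions for this announcement, not the speaker's actual tooling.

```python
# Illustrative sketch: autograding a short text-response formative question.
# Question text and accepted answers are hypothetical examples.

def grade_response(response: str, accepted: set[str]) -> bool:
    """Return True if the normalized response matches an accepted answer."""
    return response.strip().lower() in accepted

# A "What would Python display?" style question:
question = "What does print(2 * '3') display?"
accepted_answers = {"33", "'33'"}  # tolerate the quoted form as well

print(grade_response("33", accepted_answers))      # exact match
print(grade_response(" '33' ", accepted_answers))  # extra whitespace is fine
print(grade_response("6", accepted_answers))       # a common wrong answer
```

Normalizing responses (trimming whitespace, lowercasing) before comparison keeps the check low-stakes and forgiving, which matters when the goal is surfacing misconceptions rather than penalizing formatting.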
Second, we will go over different kinds of questions, especially beyond "what is the output" and "write a function." Finally, we will discuss, as an example, the speaker's process for designing formative assessments and how she uses it in her introductory computer science class. Her process starts with thinking about the assessment's goal, which is often to identify what students do not understand and what she should emphasize in lecture.
By the end of the webinar, participants will know of a specific process to design formative assessments, know of a wide variety of question types, and have a practical example of how to incorporate formative assessments into their class.
Kristin Stephens-Martinez is an Assistant Professor of the Practice at Duke University in the Computer Science Department. She received her Ph.D. in Computer Science from UC Berkeley. Her research lies at the intersection of education and computer science, focusing on large classrooms. Her specific research interest is examining data from course tools, with the goal of finding interpretable, data-driven insights that inform interventions to improve learning. Her more recent work involved applying mixed methods to constructed-response wrong answers from "What would Python display?" question sets. From this analysis, she identified common student errors, which led to an in situ experiment that tested the effectiveness of different kinds of hints. Dr. Stephens-Martinez is also the host of the CS-Ed Podcast, a podcast series focused on best teaching practices for computer science.