1) Authentic Assessment
Definition: An authentic assessment is one in which the student creates or composes something intended for a real audience, for a real purpose, in a real forum, and that receives real feedback from a real audience outside the classroom (so the student gets feedback from a person or people other than the teacher and the other students in the class). The teacher and other students may also offer feedback and assessment at any point during the creation or composition process, but the outside audience's feedback is required and is taken into account in the grade, even if the teacher disagrees with the judgments of the real audience.

2) Lower-Level Authentic Assessment
Definition: Assignments that require students to produce products modeled on compositions intended for a real audience, for a real purpose, in a real forum, but that are not actually submitted to a real audience. These are not as valuable as full authentic assignments that garner real feedback from a real audience outside the classroom, but they do at least help students understand how to shape compositions for a real audience, purpose, and forum. An example of a "lower-level authentic" assignment would be requiring students to write a letter to the editor of Newsday (a specific forum) that would not actually be submitted to Newsday or to any other real-world forum beyond the classroom. Students may also be required to compose for an imagined audience/purpose/forum, as long as it is modeled on some reality, such as a job interview for a fake company or a market report for a product that does not really exist. In a lower-level authentic assessment task, the teacher is completely in control of feedback and grading, and thus of the feedback students receive.

3) Educative Assessment
Definition: Educative assessment is a type of learning-centered assessment designed not only to test but also to improve and educate student performance.
The term was introduced by Wiggins (1998) and further elaborated by Fink (2003, 2016). Educative assessment teaches by offering authentic tasks and by providing rich feedback to students and their teachers. Authentic tasks require real-world use of knowledge, a quality product, and a justification of student answers. In addition to (or instead of) a score, student performance receives usable, diagnostic feedback through which the student is able to self-adjust their performance as needed. According to Wiggins (1998), educative assessment begins with the formulation of the achievement target, or desired result. Next, performance standards are established. Then criteria that make the standards more explicit and define success are developed. Rubrics are the most specific example of a tool that enables student improvement through self-adjustment.

Fink (2016) identifies four major components needed for educative assessment:
- Forward-looking assessment tasks: Instead of looking back at what was studied and assessing students' comprehension, a forward-looking task creates realistic situations from beyond the end of the course and asks students to use what they have learned to address questions or make decisions.
- Criteria and standards, which create rubrics: Criteria denote the desired qualities of the end product; each criterion has several levels of standards that distinguish an excellent performance from a poor one.
- Self-assessment: Students should be given opportunities to assess their own work. Outside of courses, students will be the first to assess their work, so giving them opportunities to practice this strategy helps them develop the ability.
- Feedback that is frequent, immediate, based on clear criteria and standards, and communicated in a way that respects students' integrity and their readiness to learn.
4) Simulation Assessments (Lower-Level Authenticity)
Definition: An imitation of a process or system that uses technology to mimic or replicate a realistic example of an assessed outcome. Simulations replicate or imitate situations, processes, or events, and are used to assess observable cognitive, affective, and psychomotor skills.

Simulation completed by the student [the simulation is recorded and reviewed for formative feedback, self-assessment, and/or summative assessment by faculty]:
Example 1: Clinical skills in a medical simulation with standardized patients, such as history taking with a patient (entry level) or diagnosis of a patient with back pain (intermediate level). [A standardized patient is one trained to depict a patient in a consistent manner.]
Example 2: Student teachers in a simulated parent/teacher conference with a difficult discussion topic (a child's behavior problem).

Simulation produced by the student [to demonstrate a skill set]:
Example: A physical therapy exercise with a geriatric patient; the student videotapes the "session" teaching the patient a muscular exercise and submits the video to the instructor for assessment.

Mannequins: Mannequins are programmed with software to mimic events such as a heart attack; students respond to the event with the mannequin, and poor decisions with dire consequences can result in the "death" of the mannequin.

Simulation assessments typically use a checklist of observable behaviors, and the level of achievement can range from 0-1 (observed or not) up to a 1-10 Likert scale.

5) Performance-Based Assessment (when it has real-world outcomes)
Definition: Performance-based assessment "requires students to use high-level thinking to perform, create, or produce something with transferable real-world application" (The School Redesign Network at Stanford University, 2008).
Features (Chun, 2010, p. 24):
- Real-world scenario: Students assume roles in a scenario that is based in the "real world" and contains the types of problems they might need to solve in the future.
- Authentic, complex process: The scenario reflects the complexity and ambiguity of real-world challenges, where there might not be a right or wrong answer, where solutions might not be obvious or given, where information might be conflicting or partial, and where there might be competing frameworks or positions from which to view the situation.
- Higher-order thinking: The task requires students to engage in critical thinking, analytic reasoning, and problem solving. The focus is on analyzing, synthesizing, and applying evidence in order to arrive at a judgment or decision.
- Authentic performance: The "product" the students create reflects what someone assuming that role would produce: a memo, presentation, or other write-up.
- Transparent evaluation criteria: The learning outcomes drive the creation of the task.

6) Applied Learning Assessments (including service learning)
Definition: Applied learning includes educational approaches that actively engage students in the direct application of skill sets, theories, and models. Knowledge is applied through hands-on and/or real-world settings, creative works or projects, and research that is either independent or directed, and knowledge gained from experience is brought back into academic learning. The applied learning activity may occur inside or outside the traditional classroom and/or be embedded as part of the course experience. This model promotes student success, increased retention and graduation, and active engagement in the learning process. Assessment of applied learning has been conducted through rubrics and through connections between the learning context and real-world applications for reflective analysis of learning. Assessment is also conducted by aligning applied learning with local Institutional Learning Outcomes (ILOs).
Capstone e-portfolios additionally capture individualized learning opportunities, as they are unique in the scope and curation of artifacts that serve as examples of independent learning. Setting clear SMART goals that include specific learning outcomes aligns these educational approaches with institutional missions and values as well as programmatic learning outcomes. Within undergraduate research projects, applied learning provides opportunities for "real-life" application, whether through publication, presentations, or project implementation. These opportunities allow faculty to offer intentionally designed curricula that enhance students' research skills and build those skills over time. Assessment data can be collected through institutional data, course-level data, indirect measures such as program-specific data, student and/or faculty questionnaires, focus groups, and interviews, and direct measures of learning such as reflection papers, exams, videos, and student e-portfolios. Authentic embedded assessment tasks ask students to demonstrate what they know and are able to do in meaningful ways. Authentic assessment tasks are generally multidimensional and require higher levels of cognitive thinking, including problem solving and critical thinking.