Program Assessment

Learning is a complicated process. Program assessment should reflect this complexity by using a diverse collection of methods. This provides a more reliable picture of student learning within the program and its courses, which in turn strengthens the credibility of the findings and the rationale for potential curricular change. For the purposes of assessing student learning, there are three categories of evidence: direct evidence, indirect evidence, and supportive evidence. The assessments in the Swedish Studies example (Planning & Implementation) all constitute direct evidence. Direct evidence reveals what students know and can demonstrate, and is evaluated or assessed in light of student learning outcomes. Student artifacts from course work, such as exams, capstone projects, or portfolios, are examples of direct measures. In all cases, direct evidence involves the evaluation of demonstrations of student learning. Indirect evidence is not based directly on student academic work but rather on the perceptions of students, alumni, employers, and other outside agents. Artifacts in which students judge their own ability to achieve the learning outcomes are considered indirect evidence. For example, alumni may be asked the extent to which the program prepared them for their current position. In all cases, indirect evidence is based on perception rather than direct demonstration of learning. Finally, supportive evidence is evidence not directly connected to student learning. Graduation rates, job placement data, faculty-to-student ratios, program promotional materials, and cumulative GPAs could all be included in this category. Ideally, program assessment should include all three types of evidence.


UTQAP Connection

UTQAP new program proposals require proponents to describe their plans for program assessment.


Recall from the previous example that the Scandinavian Department discovered students were not able to demonstrate research skills at the expected level by the end of the program. The unit implemented several changes aimed at providing students with more opportunities to develop their research skills in mid-level courses. During the curriculum mapping and analysis stage, the unit identified assessments capable of effectively measuring students’ achievement of the program’s research skills learning outcome. The first thing the department might do is examine the direct evidence of student learning. They could do this by comparing the baseline evidence collected for this learning outcome before the program changes were implemented (i.e., the student assessments identified above) with the same evidence collected from students after the changes were implemented. Has there been improvement in student attainment of the learning outcome?

The unit might also collect indirect evidence in the form of student surveys or interviews aimed at measuring the perceived impact of the changes on student learning. This indirect evidence might help them explain patterns noted during their analysis of student work. Did students feel they were given enough opportunities to practice their research skills? Was there a particular course or learning activity that students found especially helpful?

Finally, the unit might examine supportive evidence such as course enrollment patterns or grade distributions. Are students completing courses in the order intended? Have grade distributions changed in the affected courses? If so, has the change been positive or negative?

The numerous individuals typically involved in data collection, and the need for data to be collected over several years as students move through a program, add complexity to the program assessment process. In the Scandinavian Department example, instructors from several courses need to collect student assessments. Additionally, because the department implemented changes in second- and third-year courses, the impact on students’ achievement of the program learning outcomes cannot be measured until students have completed the program, meaning that units will need to wait at least two years before collecting evidence to examine the impact of these changes. Balancing the need for diverse evidence with the practicality of the evidence collection process can be a challenge, but thoughtful planning makes this significantly easier. Devising an assessment strategy helps to coordinate evidence collection efforts. When creating a program assessment strategy, consider the following questions:

  • What evidence (direct, indirect, and supportive) should be collected to determine whether students can successfully demonstrate each outcome?
  • How will the evidence be collected? Who will be responsible for collecting the evidence? When will the evidence be collected?
  • How will the evidence of student learning be analyzed? Will rubrics or other tools be utilized? What are the criteria for success? When determining the criteria for success, make sure to consider any baseline data that has been collected.
  • How will the results of the program assessment process be reported? What action will be taken as a result of the findings?

Program assessment works best when it is ongoing, not episodic. Continuous improvement is best fostered when assessment entails a linked series of activities undertaken over time. A unit might consider assigning the data management role to a faculty member, which would count as service to the department, or engaging an RA or staff member to support the effort. It can be helpful to sustain the discussion around assessment by creating a long-range plan detailing assessment activities and considering how existing unit governance (e.g., curriculum committees) might support this (e.g., through standing agenda items or informal annual reports). Situate current efforts in the context of previous and future work, and review earlier efforts and the actions taken to monitor progress. Make sure to communicate the results of assessment efforts so that lessons learned can be carried forward as those responsible for coordinating assessment and evaluation change over time.