Course Evaluation Institute: Agenda


September 20th & 21st, 2023
Faculty Club, St. George Campus, University of Toronto
41 Willcocks St, Toronto, ON M5S 3G3

CEI 2023’s Theme: Emerging Trends in Course Evaluations

The theme of CEI 2023 is “Emerging Trends in Course Evaluations”. We aim to create a space for course evaluation policymakers, researchers, and practitioners to engage in in-depth dialogue and to reflect on how the landscape of course evaluations, and of teaching and learning in higher education more broadly, has shifted in the five years since we last met in person before the pandemic.

Full Agenda

Day 1: Wednesday, September 20th, 2023

  • Professor Susan McCahan, Vice-Provost, Academic Programs and Vice-Provost, Innovations in Undergraduate Education
  • Professor Alison Gibbs, Director of the Centre for Teaching Support & Innovation

Session 1: Going beyond the Course Evaluation tool: the MD program approach to the evaluation of their courses

The process of capturing the effectiveness of courses embedded in an integrated curriculum, like the one at the MD Program in the Temerty Faculty of Medicine, can be complex. We conceptualize the evaluation of a course beyond the course evaluation tool, including analyses of other metrics captured, such as student assessments, compliance with accreditation standards, teacher evaluations, and thematic summaries created based on the qualitative data.

We have adopted a proactive, data-driven approach to our evaluation processes that involves the analysis, interpretation, and summarization of all available data into “summary reports”. Our project studied whether the implementation of data-informed summary reports could help elevate the course evaluation process, leading to evidence-driven changes.

We initially faced several program evaluation challenges that needed to be addressed. First, course evaluation response rates were highly variable, which led us to test the effectiveness of different initiatives for addressing response rate issues. Second, we needed a better process for analyzing our qualitative data, which triggered the development of a Natural Language Processing model to help with this task. Last, teacher evaluation response rates were also an issue, so we designed, piloted, and implemented a standardized “clinical teacher evaluation form” in collaboration with Postgraduate Medicine. Addressing these challenges allowed us to deploy the summary reports as part of our annual course review processes. We collected data on the time taken to complete the reports and on who is responsible for completing them, and compared these findings to practices before the summary reports were implemented.

Our data show that the implementation of a data summary report has reduced the time associated with completing course evaluation reports, has created greater accountability for Course Directors, and has offered the opportunity to close the evaluation loop by showing students how their feedback has helped drive decision making and changes within our program.

Keywords: evaluation frameworks, response rates, data summaries

Takeaways: We hope this presentation offers participants tangible evidence of the effectiveness of a variety of initiatives regarding response rates and of how data can be used to drive decision-making.

Dr. David Rojas, Director, Program Evaluation, MD Program, Temerty Faculty of Medicine. David is also an evaluation scientist and an Assistant Professor in the Department of Obstetrics. David has extensive experience working in educational and hospital environments and is an evaluation consultant for the Royal College of Physicians and Surgeons of Canada.

Frazer Howard, Senior Analyst in the Office of Assessment and Evaluation at the MD Program, Temerty Faculty of Medicine. Frazer has over 15 years of experience with evaluation systems and practices and is currently leading the data visualization efforts at the MD Program.


Session 2: The Student Learning Experience Survey (SLES): Reimagining and pilot testing a novel approach to course evaluations at McMaster University

Student evaluations of teaching (SET) have long been used in universities to inform high-stakes decisions directly impacting tenure and promotion (T&P) outcomes for instructors (Goos & Salomons, 2017; Gravestock & Gregor-Greenleaf, 2008). In the current Canadian landscape, there is significant tension surrounding, and mistrust in, the validity and use of SET tools and data (Grignon et al., 2019; Hum, Turner, & Newton, 2021; Spooren, Brockx, & Mortelmans, 2013) – particularly in the wake of the now often-cited Ryerson Decision (Ryerson University v Ryerson Faculty Association, 2018). There is a common misconception that the Ryerson Decision was an indictment of SET; however, the arbitrator’s decision recognizes that well-conducted SET is possible and can be effective (G. Hum, personal communication, September 17, 2021).

At McMaster University, we are engaging in conversations and efforts to transform the culture around valuing and evaluating teaching – this includes moving SET from a primary source of evidence in T&P to a tool for the instructor’s professional development (PD) journey. We envision a journey that includes ongoing interaction with students about their learning experiences (LE) and that identifies ways to integrate instructor PD, the student LE, and marginalized voices toward improving teaching and learning.

The author will share insights into the process of developing and refining the Student Learning Experience Survey (SLES), with a focus on year 2 of the pilot. The SLES is a SET measure that focuses on the PD journey of instructors and integrates the student LE and marginalized voices, while also helping students reflect on their own active engagement in their learning. Year 2 pilot insights include 1) faculty association and community consultation feedback on the SET survey; 2) data from instructors and students who have engaged with the mid- and end-of-term administrations of the pilot measure; and 3) preliminary recommendations for making meaningful change.

Keywords: student evaluations of teaching (SET); course evaluations; holistic evaluation of teaching; professional development of teaching; student learning experience

Takeaways: The research literature and this study suggest that the evaluation of teaching needs to:

  • Take a holistic view of instructor and student learning experiences.
  • Take a developmental approach to
    • Instructor professional development of teaching practice
    • Student learning development/experience
  • Reduce reliance on SET as the key evidence in high-stakes personnel decisions.
    • Develop a meaningful portfolio of evidence and reflection that values supporting development.
  • Reframe SET as an ongoing conversation between instructors and students about how to make learning meaningful for all involved, rather than a single, quantitative data point lacking context.
  • Place this feedback tool at the midterm (formative) over the end-of-term (summative) to make space for iterative engagement between students and instructors – to support the PD and student LE focus/purpose.

Dr. Amanda Kelly Ferguson, Educational Developer, McMaster University. In her roles as Post-Doctoral Fellow, and now Educational Developer, Amanda Kelly Ferguson has employed community-engaged scholarship to develop and pilot test a Student Learning Experience Survey that reimagines the structure, implementation, and use of student feedback about their course experience.

 

Track 1: EDIA – Wedgewood Room

Session 1: An Analysis of Winter 2022 Student Course Perception (SCP) Survey Responses at the University of Waterloo.

In this presentation we share findings from our analysis of Winter 2022 data from the Student Course Perception (SCP) survey recently implemented campus-wide at the University of Waterloo. Drawing together instructor identity data from the University Equity Survey – the first-ever activity to capture identity data at Waterloo – and SCP numerical data, this analysis examined how instructor identity and course characteristics impact SCP scores. Specifically, the analysis explored the strength of associations between SCP ratings and 1) instructor-level variables (e.g., Indigenous identity, racial identity, sex, appointment type, and instructor time in Canada), and 2) course-level variables (e.g., class size, course type (online or in-person), and the Faculty offering the course). We share and reflect on the following key findings:

  • Overall, we found very small but statistically significant differences in mean ratings between white and racialized instructors for two of six response items: “The instructor(s) helped me understand the course concepts,” and “The instructor(s) stimulated my interest in this course.”
  • There was some evidence to suggest that the differences in scores between white and racialized instructors may be explained by time spent in Canada, a proxy variable we used to examine whether possible language and/or cultural biases were associated with lower scores assigned by students. This suggests that less time spent in Canada may contribute to slightly lower SCP scores.
  • The overall difference in mean ratings assigned by students to male and female instructors was not statistically significant after accounting for class type, course size, or instructor appointment type; however, students in classes of 101-200 students assigned slightly higher ratings on 5 of 6 response items to male instructors appointed as lecturers compared to female instructors appointed as lecturers.

We conclude the presentation with some questions/reflections for group discussion.

Keywords: Equity, Sex, Bias, Mean Ratings, Student Surveys

Takeaways: Research on bias in student evaluation scores for racialized instructors is scarce. This presentation shares results from a large study examining how instructor identity and course characteristics impact course evaluation scores, and describes the findings of an in-depth quantitative analysis of several forms of bias identified in the literature on student evaluations, including sex and race, in a Canadian context.

Dr. Sonya Buffone, Director, Teaching Assessment Processes, University of Waterloo
A University of Waterloo alumna, Sonya Buffone began her Ph.D. journey in 2010, stepping into her career as Director of Teaching Assessment Processes in 2018. Sonya is a proactive, results-oriented social scientist with experience designing, managing, and executing large-scale change management in a post-secondary institution and has over ten years of experience in diverse research settings. Sonya sees her role as an advocate for both student and instructor voices. When Sonya isn’t thinking about all things related to holistic teaching assessment you can find her hiking in the woods or on the beach with her two daughters and their dog, Loki.

Kathy Becker, Specialist, Teaching Assessment Processes, University of Waterloo
Kathy is a University of Waterloo Arts grad (English Literature) who completed a master’s in educational technology through the University of British Columbia. After several years in the private education sector, she returned to Waterloo in 2011. Kathy is committed to helping instructors find joy in their teaching, and views assessment as a key tool in this discovery process. When she’s not busy at work, you’ll find her caring for her bees, working on a construction project with her partner, or learning something new.

Dr. David DeVidi, Associate Vice President, Academic, University of Waterloo
After completing a PhD at the University of Western Ontario, Dave joined Waterloo as a postdoc in 1994, and as an Assistant Professor in the Department of Philosophy in 1996. His research is primarily in logic, philosophy of mathematics and philosophy of disability. In various roles on campus (instructor, president of the faculty association, department chair, associate vice-president academic) he has seen assessments of teaching from many angles. While he used to try to be a successful athlete, he now pursues hobbies where he can have no expectation of ever being any good such as running, cycling, and reading novels.

 

Track 2: Establishing Reliability and Validity – Main Lounge

Session 1: Longitudinal Analyses of Student Feedback to Enhance Faculty Engagement with and Informed Interpretation of the Data.

A major component of building trust in data is ensuring that users of the data understand them well. Though many research studies exist to improve our understanding of the nature of course feedback generally, different survey designs and institutional contexts can make it difficult to know how that research applies in specific local contexts. The University of Saskatchewan is currently undertaking a comprehensive longitudinal analysis of student course feedback. The two-fold purpose of this research is

  1. to explore student feedback with a view to identifying trends and patterns present in the data, and
  2. to enhance faculty engagement with and informed interpretation of the data in collegial processes (e.g., tenure, promotion, merit).

There are 11 research questions guiding this analysis, with the intent of identifying meaningful differences in student responses that can inform guidance on interpretation and build trust in our data. For example, we intend to explore the extent to which a variety of factors, such as instructor gender, student gender, response rate, class size, course discipline, student grades, instructor’s role (e.g., tenure-track, sessional), and course level, affect student responses in course feedback. We also plan to compare 2022-2023 feedback to pre-pandemic feedback and to compare pandemic feedback to non-pandemic feedback. This presentation will focus on the factors that led us to begin this study, the work done to prepare the data and design the research questions, and the manner in which we hope to apply the results once the analysis is complete. We will create evidence-informed guidance for faculty and academic leaders as they interpret student feedback as one part of a portfolio of evidence of teaching effectiveness, moving toward more holistic and equitable processes in teaching evaluation.
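A rough sketch of how factors like these could be related to ratings statistically, assuming a mixed-effects regression in Python with hypothetical column names (an illustration only, not the study’s actual analysis plan):

```python
# Hypothetical sketch: relating course/instructor factors to ratings.
# Column names and the data file are illustrative, not the University of Saskatchewan's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("course_feedback.csv")  # one row per student response (hypothetical)

# Fixed effects for the factors of interest; random intercepts per course
# to account for responses being clustered within course offerings.
model = smf.mixedlm(
    "rating ~ class_size + course_level + instructor_role + response_rate",
    data=df,
    groups=df["course_id"],
)
result = model.fit()
print(result.summary())
```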

Keywords: Bias, Validity, Interpretation, Guideline Development, Trustworthiness

Takeaways:

  • Learn about our data-informed approach to creating guidelines for valid interpretation of student feedback by faculty and academic leaders.
  • How we intend to continue changing culture away from perceiving student feedback as directly evaluative by naming and acknowledging specific student response patterns outside an instructor’s control.
  • Comprehensive statistical analysis as a method to improve trust in student feedback by confirming or contradicting findings from published research in our local context.

David Greaves, Teaching and Learning Enhancement Specialist, University of Saskatchewan
David has been working at the University of Saskatchewan since 2015, currently as the Teaching and Learning Enhancement Specialist. In this role, David leads the Teaching and Learning Enhancement team within the Gwenna Moss Centre for Teaching and Learning. This team supports faculty and academic leaders in using data to make meaningful decisions to enhance teaching, including the use of learning analytics, supporting peer reviews of teaching practices, and running the Student Learning Experience Questionnaire (SLEQ).

 

Session 2: Testing and Visualizing the Dimensionality of Rating Scale Data.

To promote the legitimacy of the course evaluation survey process among faculty and other stakeholders, it is important for the rating scale data to capture multiple dimensions of teaching effectiveness. These dimensions include, but are not limited to, Clarity and Organization, Interaction and Rapport, Grading Fairness, and Student Engagement. This presentation will demonstrate how to use a statistical method called Factor Analysis to test the psychometric dimensionality of course evaluation rating scale data. Factor Analysis can be used to analyze pilot data from a new survey or existing data from a longstanding survey. This presentation will also demonstrate how the institution visualizes the dimensional structure of its data in Blue reports in a way that can be easily understood by non-technical report viewers. Attendees will not need significant experience in statistics, but a basic understanding of correlations and linear regression is recommended.
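As a minimal sketch of the kind of analysis described (not the presenter’s code), an exploratory factor analysis of rating-scale items can be run in a few lines of Python; the item names, file, and number of factors below are hypothetical:

```python
# Minimal factor-analysis sketch for rating-scale items (illustrative only).
# Item/column names are hypothetical; responses assumed numeric (e.g., 1-5).
import pandas as pd
from sklearn.decomposition import FactorAnalysis

items = ["clarity", "organization", "rapport", "grading_fairness", "engagement"]
ratings = pd.read_csv("course_eval_items.csv")[items]  # one row per response

fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(ratings)

# Loadings show how strongly each item relates to each latent dimension.
loadings = pd.DataFrame(fa.components_.T, index=items,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))
```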

Keywords: Factor Analysis, Psychometrics, Reporting

Takeaways: Attendees will not be able to perform a factor analysis immediately after seeing this presentation, but they should be able to recognize whether their data are suitable for factor analysis and when factor analysis is used in research papers.

Kenneth Tsang, Senior Research Analyst, Villanova University
For the last six years, Kenneth has worked for the Office of Strategic Planning and Institutional Effectiveness, where he manages data warehousing, reporting, and policy analysis for course evaluation survey data.

 

Track 3: Leveraging Technology – Fairley Lounge

Session 1: How UC Berkeley Created a Custom Web App for Managing Course Evaluations Data. 

Every semester UC Berkeley sends hundreds of thousands of online evaluations to students. In 2022 a group of developers, Blue admins, and test users at UC Berkeley worked to build a custom web app for managing course evaluation data in order to facilitate the administration of those evaluations. As the interface between Berkeley’s Student Information System and Blue, this web app replaced a convoluted process for sharing, editing, and structuring Blue data (using various spreadsheets and the Google API) that became increasingly hard to manage as the scale and complexity of online course evaluations increased on campus. The team worked to create a replacement solution that would simplify access, maintain data integrity, and prioritize usability and scalability.

The new app manages user access in one convenient location, displays course and instructor information in an easily readable format, allows department administrators to customize evaluation content and timing, and gets nightly updates of course data from Berkeley’s enterprise data lake (EDL). A user can apply different statuses or use filtering to easily find and/or edit specific data. Various error messages and warnings displayed to a user ensure that only valid data is ever sent to Blue. The Blue admin is able to send reminder emails to large groups of users and can easily access an overview of all user activities, such as which departments still need another reminder about their evaluations, where there are errors in the data that need to be corrected, and the most recent changes made to the data, among other things. The data management app also generates the specific data files needed to import into Blue. The app launched in Fall 2022 and has received lots of positive feedback from users. One user said that this app made it “the fastest ever for me to process EECS [data].”
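In spirit, the “only valid data is ever sent to Blue” step might resemble the hypothetical sketch below; the field names, rules, and file layout are illustrative assumptions, not Berkeley’s actual implementation:

```python
# Hypothetical sketch of validating course records before generating an import file.
# Field names and rules are illustrative only.
import csv

REQUIRED_FIELDS = ["course_id", "instructor_email", "eval_start", "eval_end"]

def validate(record: dict) -> list[str]:
    """Return a list of human-readable errors for one course record."""
    errors = [f"missing {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    # ISO-format date strings (YYYY-MM-DD) compare correctly as strings.
    if record.get("eval_start") and record.get("eval_end") and record["eval_start"] >= record["eval_end"]:
        errors.append("evaluation start date must be before end date")
    return errors

def export_valid_records(records: list[dict], path: str) -> list[tuple[dict, list[str]]]:
    """Write only valid records to the import file; return rejected records with reasons."""
    valid, rejected = [], []
    for rec in records:
        errs = validate(rec)
        (rejected.append((rec, errs)) if errs else valid.append(rec))
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=REQUIRED_FIELDS, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(valid)
    return rejected
```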

This presentation will discuss the process for designing, building, and testing the app and how the team integrated campus requirements alongside user feedback regarding course evaluations into the tool.

Keywords: data management, web interface, data integrity, scalability, usability

Takeaway:

  • The resources (staff, time, etc.) and amount of thoughtful consideration required to design a good web application for course evaluations data management.
  • The pros and cons of various course evaluation customizations (e.g., customizable evaluation timing: a pro is higher instructor satisfaction with course evaluations; a con is a higher chance of errors in the data controlling when evaluations start and end)

Becca Long, Course Evaluations Service Lead, University of California, Berkeley

 

Session 2: Course Evaluation Middleware: Bringing Efficiency to CE Workflows. 

At U of T, we use Explorance Blue as our online course evaluation system. That means we need to import a great deal of U of T data (courses, students, registrations) into Blue. Of course, that data does not come out of U of T’s systems in a format that Blue can use. After years of working with time-consuming and error-prone manual processes for formatting and importing data, we decided to build middleware: a Python script and SQL database that automatically prepare our data for use by Blue. This presentation will cover the techniques we used and the lessons we learned in developing the middleware, and will include an explanation and demonstration of how our middleware works. We hope that our experience will help you see ways you might manage your data input and output processes more efficiently and reduce the amount of manual data management your staff must do.
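A highly simplified sketch of this load–reshape–export pattern, assuming pandas and SQLite with hypothetical file, table, and column names (it does not show the actual U of T middleware or Blue’s real import format):

```python
# Hypothetical data-preparation middleware step (illustrative only).
import sqlite3
import pandas as pd

conn = sqlite3.connect("course_eval_staging.db")

# 1. Load a raw extract from the student information system into a staging table.
courses = pd.read_csv("sis_courses.csv")
courses.to_sql("staging_courses", conn, if_exists="replace", index=False)

# 2. Reshape/clean the data with SQL so it matches the layout the survey system expects.
prepared = pd.read_sql_query(
    """
    SELECT course_code || '-' || section AS course_id,
           instructor_email,
           term
    FROM staging_courses
    WHERE enrolment > 0
    """,
    conn,
)

# 3. Write the import file for the evaluation system.
prepared.to_csv("blue_import_courses.csv", index=False)
```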

Keywords: efficiency, data management, data formatting, Blue

Takeaway: Don’t let entrenched procedures determine the future direction of your work. Take advantage of technology that can reduce manual labour, freeing up your people to do work of higher value.

Paul Steacy, Evaluation Systems Information Analyst, University of Toronto
Paul works with evaluation data to produce meaningful information that meets the needs of our stakeholders, and to improve our own workflows. Before coming to this role, Paul worked for almost twenty years at U of T as an Instructional Technology Analyst, designing, implementing, and supporting database-driven academic and administrative applications. He also has experience as a front-line advisor supporting U of T’s Academic Toolbox, as a communications coordinator, and as a trainer of staff, faculty, and students in the use of various digital technologies.

 

Taking a Scholarly Approach to Transform Student Feedback on Teaching.

UBC recently conducted its first comprehensive review of student evaluations of teaching. These high-stakes surveys are used for improvement efforts and as part of the process to determine hiring, promotion, tenure, and teaching awards. Through a comprehensive literature review, focus groups, cognitive interviews, and psychometric analyses, we’ve transformed UBC’s processes for gathering student feedback. In this session, we will share the process we took to solicit student and faculty feedback to revise the survey questions, with the aims of reducing the potential for bias and for ambiguity in the terms or phrases used, and of positioning questions as student-centred. We will also discuss our approach to engaging our campus communities about the validity and reliability of our student feedback moving forward. These questions are an integral component of an integrative evaluation of teaching framework at UBC that is being designed to support the overall assessment of teaching effectiveness. UBC has also begun to report different metrics on student evaluation of teaching data: the interpolated median, percent favourable, and a dispersion index. To support the transition to these metrics and the integrative evaluation of teaching framework, UBC is also developing a dashboard on student evaluations of teaching survey data to provide individual instructors and academic leaders with a way to visualize the reported metrics within context. We will be presenting this dashboard as part of the interactive workshop.

Keywords: Survey Design; Response Bias; Cognitive Interviews; Student-Centred Questions; Reporting

Takeaways:

  • We would like participants to understand the steps we took to conduct a comprehensive review of student evaluations of teaching and to consider a similar project within their own institutions.
  • A better understanding of how the more holistic, integrated evaluation approach and the new metrics (interpolated median, percent favourable, and dispersion index) provide additional information for instructors, both for feedback/improvement and for assessment.

Dr. Stephanie McKeown, Chief Institutional Research Officer, University of British Columbia
Stephanie is responsible for the team that administers and analyzes the student experience of instruction surveys. She is also the current President of the Canadian Institutional Research and Planning Association.

Dr. Abdel Azim Zumrawi, Statistician, Planning and Institutional Research Office and former adjunct professor in the Faculty of Forestry, University of British Columbia
Abdel possesses extensive experience conducting a multitude of statistical analyses on course evaluation data providing valuable insights for the university.

Alison Wong, Project Manager, University of British Columbia
Alison leads the Student Experience of Instruction survey team within the Planning and Institutional Research Office at UBC. She takes charge of the project’s technical aspects, ensuring seamless execution and maximizing the team’s efficiency in delivering the student experience surveys at UBC.

Dr. Brad Wuetherick, Associate Provost, Academic Programs Teaching and Learning, University of British Columbia Okanagan
Brad started his role at UBC in May 2021. He previously served (from 2013 to 2021) as the Executive Director for Learning and Teaching in the Office of the Provost and VP Academic at Dalhousie University, where he was responsible for student evaluations.

 

A Multi-Institution Discussion on Course Evaluations in the Framework of EDIA (Equity, Diversity, Inclusivity and Accessibility)

Course evaluations play an important role in a faculty member’s career, especially for those who are pre-tenured or precariously employed. This can be especially problematic for faculty who may experience bias from students in their course evaluation responses. For example, as the research of Boring and Philippe (2019) shows, gender bias does occur, but can be mitigated by educating students and making them more self-aware of unconscious bias. How can we better understand the needs of faculty who may have a marginalized identity and experience bias (conscious or unconscious) in teaching evaluations? How do we ensure students are offered a voice in their learning in a constructive manner? What role should course evaluations play in our current educational environment?  

In this panel we invite a faculty member, a course evaluation practitioner, and two leaders from a teaching and learning centre and an institutional research office, drawn from four Canadian research universities (University of British Columbia, Queen’s University, University of Saskatchewan, and University of Toronto), to share their experiences and research in this area. They will also offer insight to help us better understand the complexity of bias and strategies to mitigate students’ implicit and explicit biases in the student evaluation of teaching.

The panel will feature discussion around four major themes including: 1) Detecting Bias; 2) Addressing Bias; 3) Assessment of teaching; and 4) Student Voice.  

Facilitator: 

Jasjit Sangha, PhD, is Faculty Liaison Coordinator, Anti-Racist Pedagogies, at the University of Toronto. In her current role she brings comprehensive experience to her work with faculty through her background in student development, adult learning, mindfulness, equity, and inclusion. Her knowledge of teaching and learning is also informed by her work as an instructor teaching undergraduate and graduate courses in Sociology and Education. She has also studied the neuroscience of mindfulness and self-compassion, which builds on the personal mindfulness practice she has cultivated for almost two decades. She brings this perspective to her equity work by creating spaces for dialogue on how to create equitable classrooms and contribute to change on campus.

Panelists:

Dr. Yasmine Djerbal (she/her), Associate Director at the Centre for Teaching and Learning at Queen’s University, oversees a team of educational developers to promote equity-focused, research-informed, and evidence-based strategies in teaching and learning. Before moving to this role, Yasmine worked as an Educational Developer in Anti-Racist Pedagogies and Inclusion, where she guided educators in the design and implementation of university-wide strategic plans for inclusive teaching and collaborated on various projects to create anti-racist and inclusive curricula, content, and learning environments. Dr. Djerbal holds an MA in Gender Studies and a PhD in Cultural Studies from Queen’s University.

David Greaves, Teaching and Learning Enhancement Specialist, University of Saskatchewan, has been working at the University of Saskatchewan since 2015, currently as the Teaching and Learning Enhancement Specialist. In this role, David leads the Teaching and Learning Enhancement team within the Gwenna Moss Centre for Teaching and Learning. This team supports faculty and academic leaders in using data to make meaningful decisions to enhance teaching, including the use of learning analytics, supporting peer reviews of teaching practices, and running the Student Learning Experience Questionnaire (SLEQ).

Dr. Stephanie McKeown, Chief Institutional Research Officer, University of British Columbia is responsible for the team that administers and analyzes the student experience of instruction surveys. She is also the current President of the Canadian Institutional Research and Planning Association 

Dr. Kosha Bramesfeld, Associate Professor, Teaching Stream, in the Department of Psychology at the University of Toronto Scarborough (UTSC). Kosha received her Ph.D. in social psychology from The Pennsylvania State University and has over 15 years of experience as a faculty member, educational developer, and equity scholar. Between 2016 and 2018, Kosha worked as a Data Analyst for the Course Evaluations Team at the Centre for Teaching Support and Innovation at the University of Toronto. In her work, Kosha seeks to empower learners by creating learning environments that are inclusive, collaborative, authentic, formative, and supportive.

Defining Teaching Effectiveness for Course Evaluations and Other Methods of Teaching Assessment.

The University of Waterloo’s teaching assessment processes are undergoing a profound shift toward a holistic model that encompasses Student Course Perceptions, Peer Review of Teaching, and Teaching Dossiers, and that takes into account specialized needs relating to student supervision. This new system is being developed with an eye to the research literature, Waterloo-specific research, consultations with campus stakeholders, and the experiences of other Canadian universities. 

At the heart of our holistic model for teaching assessment at UW is the Teaching Effectiveness framework, which was developed over the course of 8 years. Each of our teaching assessment methods, including our student course perception survey questions, is informed by the teaching effectiveness framework. This framework establishes a goalpost for the kind of teaching we want to see, guided by questions such as:

  • What matters for instructors at Waterloo with respect to teaching?
  • What matters for students at Waterloo with respect to teaching?
  • What does the literature say matters with respect to effective teaching?
  • What does Waterloo need to strive for to be the leading institution for teaching effectiveness in Canada? 

In this panel we share our process for developing a teaching effectiveness framework and how it informs our Student Course Perception Survey. In addition, we explain how we use this framework as a stepping-stone to develop additional tiers of questions for our cascaded survey model across all six UW Faculties. We will discuss the challenges and successes we have had as we have navigated development and continue with the implementation of this institutional framework to define teaching effectiveness.   

Keywords: Teaching effectiveness, holistic assessment, cascaded student course perceptions surveys 

Takeaways: This panel discussion will focus on the challenges associated with defining teaching effectiveness in post-secondary institutions. Participants will reflect on the importance of understanding the institutional context in defining key teaching and learning priorities. Additionally, we will discuss how a teaching effectiveness framework can be used to leverage buy-in from institutional stakeholders throughout the implementation of various assessment methods. 

Facilitator: Christie Barron, Data Analyst, Course Evaluations at CTSI, University of Toronto.
Christie is a Ph.D. candidate in Educational Psychology, OISE, University of Toronto. Christie has six years of experience conducting operational psychometric scoring and research for technical reports, academic articles, and conferences. Christie’s academic research specializes in the development, validation, and interpretation of automated language assessments. Her SSHRC-funded research has been published in journals such as Language Testing, Early Childhood Research Quarterly, and Frontiers in Education. Christie also has teaching and consulting experience as a TA, psychometrics workshop instructor, and Software Carpentries instructor. When not nerding out about data analytics, Christie can often be found reading, running, or playing music. 
 

Panelists: Dr. Sonya Buffone, Director, Teaching Assessment Processes, University of Waterloo 
A University of Waterloo alumna, Sonya Buffone began her Ph.D. journey in 2010, stepping into her career as Director of Teaching Assessment Processes in 2018. Sonya is a proactive, results-oriented social scientist with experience designing, managing, and executing large-scale change management in a post-secondary institution and has over ten years of experience in diverse research settings. Sonya sees her role as an advocate for both student and instructor voices. When Sonya isn’t thinking about all things related to holistic teaching assessment you can find her hiking in the woods or on the beach with her two daughters and their dog, Loki.

Kathy Becker, Specialist, Teaching Assessment Processes, University of Waterloo 
Kathy is a University of Waterloo Arts grad (English Literature) who completed a master’s in educational technology through the University of British Columbia. After several years in the private education sector, she returned to Waterloo in 2011. Kathy is committed to helping instructors find joy in their teaching, and views assessment as a key tool in this discovery process. When she’s not busy at work, you’ll find her caring for her bees, working on a construction project with her partner, or learning something new. 

Dr. David DeVidi, Associate Vice President, Academic, University of Waterloo 
After completing a PhD at the University of Western Ontario, Dave joined Waterloo as a postdoc in 1994, and as an Assistant Professor in the Department of Philosophy in 1996. His research is primarily in logic, philosophy of mathematics and philosophy of disability. In various roles on campus (instructor, president of the faculty association, department chair, associate vice-president academic) he has seen assessments of teaching from many angles. While he used to try to be a successful athlete, he now pursues hobbies where he can have no expectation of ever being any good such as running, cycling, and reading novels. 

Fearless Feedback: Amplifying the Voice of the Student with AI

Institutions across the world are constantly seeking ways to understand and better support their students. Through its proprietary prescriptive analytics capabilities, Explorance BlueML equips institutions to gain specific insights into each aspect of the student educational experience. It is currently the only AI-powered Voice of the Student (VoS) solution on the market tailored to academic leaders and administrators, trained to analyze qualitative data from any source (internal surveys, review sites, and social media) in terms that they can act upon quickly. Join us for a live demo of Explorance BlueML and to learn more about our approach to Artificial Intelligence in driving higher rates of student graduation and success.

Keywords: Machine Learning, Artificial Intelligence, Qualitative Analysis, Feedback Comment Analysis, Course Evaluations 

Samer Saab, Founder and CEO of Explorance 
Samer Saab is the Founder and CEO of Explorance, a globally recognized leader in Feedback Analytics Solutions. As a leadership-obsessed CEO, Samer is dedicated to combining business and technology through a human-centric approach. He currently sits on the Board of Directors for the Association Québécoise des Technologies and the International Advisory Board for the Desautels Faculty of Management at McGill University. Samer joined the World Economic Forum’s New Champions Community in late 2022, helping with its Skills Gap Initiative. In 2022, Samer published ‘A Founder’s Leadership Diaries,’ a monthly newsletter on LinkedIn that discusses his personal experiences and thoughts on leadership.  

Day 2: Thursday, September 21st, 2023

Session 1: A Qualitative Analysis of Open-Ended Comments in Student Course Perception Surveys: The Good, the Bad and the Ugly. 

In this presentation, we will share preliminary findings from our institutional analysis of the open-ended comments on the Student Course Perception (SCP) surveys at the University of Waterloo from Winter 2023. The analysis includes qualitative coding of 44,732 student comments using NVivo. The primary aim of the research is to identify student comments that are deemed to be inappropriate. An inappropriate comment is defined as one that is concerning, or that is abusive on some protected ground (race, gender, sexuality, etc.). This definition is intentionally broad, as we recognize the potential for various types of inappropriate comments to emerge from the data. As this analysis is still in progress, we provide some preliminary findings related to the following key questions:

1) What is the underlying nature of concerning comments?  

  1. Are they attributed to the student being at fault?  
  2. Are they attributed to the instructor being at fault?   
  3. What are the most common issues with open-ended responses?  

2) What is the underlying nature of the comment if it is deemed to be inappropriate?  

  1. Inappropriate/offensive? How?  
  2. Bullying/harassment   
  3. Discrimination (racial, ethnic, sexuality, gender identity etc.)  
  4. Health and safety concerns (threats, including those to self or others)  

3) What (if any) features do inappropriate comments possess that make them unique in comparison to appropriate comments?  

  1. Are they textually longer?   
  2. Are they more likely to include offensive language?  

In this presentation, we also share and reflect on some methodological challenges we have faced while conducting this research. We conclude the presentation with some questions for group discussion.  

Keywords: Abusive student comments, Open-ended survey comments, Qualitative research 

Takeaways: This presentation sheds light on the key methodological issues underlying a qualitative analysis of open-ended comments on course evaluations. The body of research exploring open-ended responses on course evaluations is relatively small. This presentation shares preliminary findings from a large-scale Canadian study. 

Dr. Sonya Buffone, Director, Teaching Assessment Processes, University of Waterloo 
A University of Waterloo alumna, Sonya Buffone began her PhD journey in 2010, stepping into her career as Director of Teaching Assessment Processes in 2018. Sonya is a proactive, results-oriented social scientist with experience designing, managing, and executing large-scale change management in a post-secondary institution and has over ten years of experience in diverse research settings. Sonya sees her role as an advocate for both student and instructor voices. When Sonya isn’t thinking about all things related to holistic teaching assessment you can find her hiking in the woods or on the beach with her two daughters and their dog, Loki.

Kathy Becker, Specialist, Teaching Assessment Processes, University of Waterloo 
Kathy is a University of Waterloo Arts grad (English Literature) who completed a master’s in educational technology through the University of British Columbia. After several years in the private education sector, she returned to Waterloo in 2011. Kathy is committed to helping instructors find joy in their teaching, and views assessment as a key tool in this discovery process. When she’s not busy at work, you’ll find her caring for her bees, working on a construction project with her partner, or learning something new. 

 

Session 2: Leveraging data science to analyze and interpret qualitative data from student evaluations of teaching. 

Qualitative data from student evaluations of teaching (SET) could be immensely helpful in improving the teaching and learning experience by providing rich information and contextualizing the quantitative data. Qualitative data allow for a feedback loop, whereby students provide feedback, instructors act on it, and students feel their voices are being heard. However, qualitative data are rarely used, because interpreting them and extracting useful information takes substantial time, effort, and expertise.

Our university has adopted a survey platform for SET that allows for a small degree of qualitative data mining and sentiment analysis. However, the survey platform lacks the capability for thematic analysis. Thus, expertise in data science is needed to summarize and interpret qualitative data. 

We are designing a tool that makes use of natural language processing and machine learning to provide a semi-automated analysis of qualitative SET data. This presentation will showcase our approach to constructing the tool. First, we will describe our datasets, which are based on midterm survey data for large-enrollment first-year courses, and our approach to hand-coding the data using thematic analysis. Next, we will describe how we have built and fine-tuned topic models using techniques from data science. Finally, we will highlight various data visualization tools to help faculty make sense of the qualitative analysis.
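To give a concrete flavour of the topic-modelling step, here is a bare-bones sketch (an illustration only, not the presenters’ tool; the comments, vectorizer settings, and topic count are placeholders):

```python
# Bare-bones topic-model sketch for student comments (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "The weekly quizzes helped me stay on track",
    "Lectures were hard to follow and the pace was too fast",
    "Office hours and the discussion board were really supportive",
]  # placeholder data; real input would be thousands of survey comments

vectorizer = CountVectorizer(stop_words="english", min_df=1)
X = vectorizer.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top words per topic as a rough thematic summary.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```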

If we are successful in creating this tool, we have an opportunity to revolutionize the method by which students and instructors give and receive qualitative feedback, and how teaching is evaluated at higher education institutions. 

Keywords: Natural language processing, Machine learning, Qualitative analysis, Data science 

Takeaways:  

  • How to hand-code data using thematic analysis 
  • How to build and fine-tune topic models using data science techniques 
  • How to visualize qualitative data 

Kevin Hong, Undergraduate student, Department of Mathematics and Statistics, McMaster University
Kevin’s research interests include statistical and computational challenges in modern datasets.

Dr. Caroline Junkins, Assistant Professor, Department of Mathematics and Statistics, McMaster University
Caroline teaches first-year and second-year courses and directs the McMaster Peer-Run Inclusive Math Experience. Her research interests include active learning and data-driven interventions.  

Dr. Pratheepa Jeganathan, Assistant Professor (tenure-track), Department of Mathematics and Statistics & an associate member, School of Computational Science and Engineering, McMaster University. 
Pratheepa’s research focuses on statistical and computational challenges in modern datasets. She teaches undergraduate and graduate courses in statistics and data science. 

Dr. Sharonna Greenberg, Assistant Professor, Department of Chemistry and Chemical Biology, McMaster University
Sharonna teaches first-year and second-year courses and coordinates outreach and mentorship events. Her research interests include creating new technologies and assessment methods. 

Dr. Amanda Kelly Ferguson, Educational Developer, McMaster University
In the roles of Post-Doctoral Fellow, and now Educational Developer, Amanda Kelly Ferguson has employed community-engaged scholarship to develop and pilot test a Student Learning Experience Survey that reimagines the structure, implementation, and use of student feedback about their course experience. 

 

Session 3: Mapping course sentiments to instructional elements to gain a nuanced understanding of student feedback. 

Surveys contain open-ended questions that are useful for providing qualitative information that complements quantitative data. Open-ended questions may provide insight into the reasons an individual answered a particular way in a survey, which would be difficult to measure using close-ended questions. Evaluating the common themes across hundreds of open-ended responses and quantifying those themes is difficult to do manually, but important for understanding the drivers of satisfaction and dissatisfaction in a student population. By quantifying these data, we can understand common themes within a course section, or across a department or faculty. Overall, open-ended questions allow us to better understand the opinions, experiences, and perspectives of survey respondents.

An important and useful aspect of open-ended responses is understanding how different elements of a course are perceived by the student population. With student feedback, we can identify the elements of a course, department, or faculty that could be refined to enhance the student learning experience. Understanding these relationships requires a combination of identifying the nouns in a survey response and the sentiments associated with those elements. In this work, we seek to evaluate noun-sentiment pairings within open-ended responses. Using dependency parsing to understand the relationships between words in a sentence and part-of-speech tagging to identify the grammatical category of a term, we can identify the noun-sentiment pairs contained in open-ended responses. The results of these methods yield deeper insights into the perspectives of the student population and can guide instructors to improve the teaching and learning experience in the course.
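A minimal sketch of extracting noun-descriptor pairs with dependency parsing and part-of-speech tagging, assuming spaCy (the presenters’ exact toolchain and rules are not specified; pairing nouns with modifying adjectives stands in for the sentiment component):

```python
# Minimal sketch: extract (noun, descriptor) pairs from comments using spaCy.
# Assumes `pip install spacy` and `python -m spacy download en_core_web_sm`.
import spacy

nlp = spacy.load("en_core_web_sm")

def noun_descriptor_pairs(text: str) -> list[tuple[str, str]]:
    """Pair each noun with adjectives that describe it (e.g., 'assignments' <- 'confusing')."""
    doc = nlp(text)
    pairs = []
    for token in doc:
        # Adjectival modifiers attached directly to a noun, e.g. "confusing assignments".
        if token.dep_ == "amod" and token.head.pos_ in ("NOUN", "PROPN"):
            pairs.append((token.head.text.lower(), token.text.lower()))
        # Adjectives linked through a copula, e.g. "the tutorials were helpful".
        if token.dep_ == "acomp":
            for child in token.head.children:
                if child.dep_ == "nsubj" and child.pos_ in ("NOUN", "PROPN"):
                    pairs.append((child.text.lower(), token.text.lower()))
    return pairs

print(noun_descriptor_pairs("The tutorials were helpful but the confusing assignments piled up."))
```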

Keywords: Natural language processing, sentiment analysis, relationship extraction, text analytics dashboarding 

Takeaways: Identifying noun-sentiment pairs in course evaluations yields deeper insights into the perspectives of the student population. 

Arman Edalatmand, McMaster University
Arman recently completed his master’s degree in Biochemistry and Biomedical Sciences at McMaster University, where he leveraged natural language processing methods to contextualize and better understand antimicrobial resistance genes in bacteria by mining biomedical literature. Currently, Arman works for the Office of Institutional Research and Analysis at McMaster on projects that aim to extract meaningful insights from large volumes of textual data. 

 

Leverage Design Thinking Methods to Up Your Game in Visualizing Course Evaluation Data 

Design thinking is a user-centred, creative, and collaborative problem-solving methodology. In the first 30 minutes, this workshop will start with an overview of the history and evolution of design thinking, a discussion about where we can find the best innovation opportunities, and an introduction to the five-stage framework of design thinking: empathize, define, ideate, prototype, and test.

We will then provide examples of how we applied design thinking in our own data visualization projects (Learning Analytics Dashboards at U of T and a prototype of Course Evaluation Dashboards for one division). As design thinking is an iterative process, it took multiple revisions until we met the goals of our projects. We will draw from our own reflections and highlight the lessons we learned from our journey by sharing the pitfalls and mistakes we made. We learned that building engaging and useful dashboards involves numerous steps and sophisticated data and design skills, including a well-mapped process, best practices in data visualization, analytical design thinking, and implementation knowledge, not all of which can be covered in a 90-minute workshop.

The second part of the workshop is devoted to hands-on practice. We will provide a practice dataset (course evaluation data) and divide participants of different levels of expertise into small groups of 4-6 to engage in 30 minutes of design thinking activities. We will then do a step-by-step demo of creating a course evaluation dashboard in both Tableau Desktop and Power BI, and close with a 10-minute Q&A session.

This workshop is designed to help you establish a basic understanding of what it takes to create simple and effective dashboards so that you can continue to learn and help inform your organization and drive intelligent decisions and operations by using evaluation data.  

Keywords: Design Thinking, Data Visualization, Data Storytelling, User-centered approach 

Learning Objectives:

  • Identify data visualization and storytelling opportunities in the course evaluation area and learn a few common types of charts to present quantitative information effectively.
  • Get a quick start on the Design Thinking framework and apply it in your data visualization projects.
  • Feel confident in designing data visualizations and engaging your audience with data storytelling.

Dr. Yuxin Tu, Senior Manager, Evaluation & Assessment, Centre for Teaching Support & Innovation, University of Toronto

Yuxin leads two cross-functional teams, Course Evaluations (operation and research) and SoTL (educational development). She has extensive experience in delivering workshops and consultations in quantitative data analysis, psychometrics, survey design and data visualization. She has been an advocate of incorporating UX design in data visualization to achieve an engaging user experience. She recently founded the Data Viz Club, which is a platform for cross-team collaborations within CTSI and beyond and for enhancing team members’ expertise in creating data visualization.  

Dr. Alan Fleck, Data Analyst, Learning Analytics, University of Toronto  
Alan is responsible for the development of the dashboards and data analyses within the Learning Analytics Initiative, which aims to understand students’ course-level engagement on Quercus and support course design. He has extensive experience in data analytics, statistical analysis, data visualization and stakeholder consultations. Alan is also a member of the Data Viz Club. 

 

More Than ‘Just Another Dashboard’ 

Sheridan’s integrated suite of intelligence tools has transformed the institution’s ability to consistently make quality decisions in high-stakes situations. In the post-pandemic era of change and disruption, this work has tangibly improved the speed and value of insight delivery and action. Beautifully designed and built on myriad interconnected data sets and custom predictive models, the suite covers all aspects of the institution’s business, allowing users to quickly identify, contextualize and action issues and opportunities. In this presentation we focus specifically on our custom course evaluation dashboard, reviewing concrete examples of how the tool enables swift analysis, comparisons, and action, and demonstrating how we delivered on our promise to accelerate decision-making with more than ‘just another dashboard.’ 

Dashboards are old news, and in a post-pandemic context where big decisions can’t be avoided, they often aren’t enough. The trope of using dashboards to uncover ‘actionable insights’ can ring hollow when leaders looking for fast answers to complex questions are required to work, or be walked, through cumbersome multitudes of data visualizations that still may not provide critical contextualizing information needed to understand a situation and make the right call.  

Sheridan has spent over seven years addressing this problem, the result of which is a suite of beautifully designed, interconnected Tableau dashboards (on Tableau Server) used by hundreds of decision-makers throughout the institution, from professor to provost. In this session we walk through our course evaluation diagnostic dashboard: what it took to develop, and concrete examples demonstrating its value. Participants will learn how the tool allows for and incorporates:

  • A quick understanding of student sentiment at any level of aggregation 
  • Built-in multivariate modelling allowing meaningful comparisons at the professor level 
  • Baked-in validity checks to ensure you know how generalizable a result is 
  • Human-centered and insight-driven design – baked-in stat testing immediately highlights real differences

Keywords: Dashboard, Analysis, Predictive Analytics, Decision-making, Design 

Takeaways: 

  • Participants in this session will come away with an understanding of the resources/investment required to build such a tool. 
  • Participants in this session will learn how we contextualize our data within the dashboard for more meaningful comparisons, interpretation, and action.

Dean Langan, Senior Manager, Institutional Research, Sheridan College 
Working through the start-up phase with two successful organizations in the research/consulting space, Dean honed his skill at identifying client/consumer needs, developing research tools/products to meet them, and communicating their benefits. Starting with the implementation and management of a new, institution-wide course evaluation framework, he has put those skills to good use at Sheridan. In addition to leading all experience assessment and related consulting at the institution, he and his team have a mandate to find and fill knowledge, process, or product gaps related to stakeholder experience management, most recently developing from scratch a Student Experience Survey which was licensed by half (twelve) of the colleges in Ontario. Dean has an MA in Cultural Anthropology.

 

Making Sense of Student Feedback: Metrics for a more Meaningful Interrogation of SEI Data 

Student Experience of Instruction (SEI) quantitative data often consist of responses on a Likert-type scale (most commonly a 5- or 7-point scale). For many years, the mean (average) and standard deviation were used at UBC to summarize and present quantitative data in instructor reports, a practice common at many institutions even as the usefulness and validity of these metrics were being questioned. More recently, however, UBC began using different metrics to report student experience of instruction survey results. The reported metrics include the interpolated median, percent favourable, and a measure of dispersion suitable for ordinal data. An interactive dashboard is currently being developed to assist instructors, as well as administrators, in visualizing the reported metrics within context. This workshop will introduce the new SEI metrics and demonstrate how they support instructors in making sense of the student feedback they receive. Participants will have the opportunity to discuss, in small groups, how to interpret these metrics in instructor reports. The workshop will also explore the ways in which these metrics can be used by teaching and learning specialists (for example, our Educational Consultants in our Centre for Teaching and Learning in the Okanagan, or our Centre for Teaching, Learning and Technology in Vancouver) on campus to inform conversations with both academic leaders and instructors interested in making sense of student feedback.
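For readers unfamiliar with these metrics, the sketch below shows how an interpolated median and a percent-favourable figure can be computed for 5-point ratings; the favourable threshold of 4 or above is an assumption for illustration, not necessarily UBC’s definition:

```python
# Illustrative computation of interpolated median and percent favourable for 5-point ratings.
from collections import Counter

def interpolated_median(ratings, width=1.0):
    """Standard grouped-data median, interpolated within the median category;
    each Likert point is treated as a bin of the given width."""
    n = len(ratings)
    counts = Counter(ratings)
    cumulative = 0
    for category in sorted(counts):
        f = counts[category]
        if cumulative + f >= n / 2:
            lower = category - width / 2          # lower real limit of the median bin
            return lower + (n / 2 - cumulative) / f * width
        cumulative += f

def percent_favourable(ratings, threshold=4):
    """Share of ratings at or above the (assumed) favourable threshold."""
    return 100 * sum(r >= threshold for r in ratings) / len(ratings)

ratings = [5, 5, 5, 4, 4, 3, 2]
print(round(interpolated_median(ratings), 2))   # 4.25, versus a plain median of 4
print(round(percent_favourable(ratings), 1))    # 71.4
```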

Keywords: Dashboards; Reporting; Metrics; Faculty Development; Academic Leadership Development 

Takeaways:  

  • The session will highlight the limitations of using mean and standard deviation as metrics for summarizing and presenting quantitative SEI data.  
  • Attendees will be introduced to the alternative metrics UBC has adopted to report student experience of instruction survey results. These metrics include interpolated median, percent favourable, and a measure of dispersion suitable for ordinal data. The session will provide insights into why these metrics offer a more comprehensive and meaningful representation of student feedback. 
  • The development of an interactive dashboard will be showcased, which aims to assist instructors and administrators in visualizing the reported SEI metrics within a contextual framework. Participants will have the opportunity to explore and discuss how to interpret these metrics in instructor reports and understand how teaching and learning specialists can leverage them to facilitate conversations with academic leaders and instructors seeking to make sense of student feedback. 

Dr. Abdel Azim Zumrawi, Statistician, Planning and Institutional Research Office, former adjunct professor, Faculty of Forestry, University of British Columbia. 
Abdel possesses extensive experience conducting a multitude of statistical analyses on course evaluation data providing valuable insights for the university.  

Alison Wong, Project Manager, University of British Columbia
Alison leads the Student Experience of Instruction survey team within the Planning and Institutional Research Office at UBC. She takes charge of the project’s technical aspects, ensuring seamless execution and maximizing the team’s efficiency in delivering the student experience surveys at UBC.   

Tizitash Mohammed, Programmer Analyst for the Student Experience of Instruction surveys, University of British Columbia
Tizitash works in the Planning and Institutional Research Office. Her expertise and analytical skills have been instrumental in driving the development of dynamic instructor interactive dashboards.  

Dr. Stephanie McKeown Chief Institutional Research Officer, University of British Columbia 
Stephanie is responsible for the team that administers and analyses the student experience of instruction surveys. She is also the current President of the Canadian Institutional Research and Planning Association. 

 

Highlighting the Student Voice: Co-developing & Piloting Research Protocols to Explore Student Perspectives and Experiences with Course Evaluations.

Our U of T research study explores students’ perspectives and understanding of course evaluations (CEs) and uses a “Students as Partners” (SaP) approach, which “makes way for respectful, mutually beneficial learning partnerships where students and staff work together on all aspects of educational endeavors” (Matthews, 2017). This study supported three student research assistants (RAs) to both co-design the focus group protocol and facilitate focus groups. Our team carefully reviewed the CE literature, and the RAs built their knowledge in this field and led the development of the pilot focus group script. We sought to build upon scholarly conversations about the value of CEs and the student-related factors that influence their utility (Basow & Martin, 2012; Toftness et al., 2018). The student collaborator role was central to framing the research from a student-facing perspective. The focus group topics included: student motivations to complete CEs; students’ understanding of the purposes and impact of CEs; and factors that impact students’ choice of course and instructor ratings. We share findings from 8 focus groups (total n=30 students) with students from diverse programs and years of study at U of T. The RAs will share their reflections about their engagement in this innovative SaP research project focused on CEs. The student feedback collected in the focus groups will inform the development of communication strategies and educational resources to promote student engagement with the CE system and the provision of high-quality feedback.

Keywords: ‘Students as Partners’, Partnership, Student perspectives, Feedback, Focus Groups 

Takeaways:  

  1. The “Students as Partners” (SaP) approach is an important model for examining the context and content of course evaluations in a higher education teaching and learning context.
  2. Collecting feedback (data insights) from students provides valuable information on ways to enhance course evaluation processes.

Principal Investigator: Professor Alison Gibbs, Director, Centre for Teaching Support & Innovation (CTSI) & Professor, Department of Statistical Sciences, University of Toronto

Presenters:  

Dr. Cora McCloy, Faculty Liaison Coordinator, SoTL, CTSI, University of Toronto

Selina Mae Quibrantar, B.Sc.(H), M.Sc. Candidate, Department of Nutritional Sciences​, University of Toronto 

Michele Hutrya, HBA, Anthropology & Sociology​, University of Toronto 

Jasmine Pham, Ph.D. Candidate, Educational Leadership and Policy Program, OISE, University of Toronto 

 

All opinions and views expressed by the conference presenters belong to the content creators; they do not reflect the opinions or views of the University of Toronto. Every effort has been made to offer accurate information on this site, but errors can occur. The University of Toronto assumes no liability or responsibility for any errors or omissions in the content contained on this site.
