U of T Teaching Examples

We are asking U of T instructors how they engage with generative AI tools in their teaching. This section features instructor profiles, as well as examples of AI-integrated assessments and learning activities.

Instructor Profiles


Elaine Khoo, Associate Professor, Teaching Stream; Centre for Teaching and Learning, English Language Development Support Coordinator, UTSC


Steve Easterbrook, Director, School of the Environment, Faculty of Arts & Science, UTSG


Dan Zingaro, Associate Professor, Teaching Stream and Associate Chair (CSC), Mathematical and Computational Sciences, UTM

Noa Yaari, Communication Instructor, Institute for Studies in Transdisciplinary Engineering Education and Practice (ISTEP), Faculty of Applied Science & Engineering, UTSG

Jessica Hill, Associate Professor, Teaching Stream; Department of Molecular Genetics, Temerty Faculty of Medicine, UTSG

Nazanin Khazra, Assistant Professor, Teaching Stream, Department of Economics, Faculty of Arts & Science, UTSG

Robert Bentley, Assistant Professor, Kinesiology & Physical Education, UTSG

Alexandra MacKay, Professor, Teaching Stream; Joseph L. Rotman School of Management, UTSG

Assessment Examples

Thinking Critically about Vaccine Hesitancy and AI Use - Jessica Hill, MGY277

Course: MGY277, Introduction to Medical Microbiology 
Session: Fall 2023 
Instructor: Jessica Hill, Associate Professor, Teaching Stream; Department of Molecular Genetics, Temerty Faculty of Medicine 

Assessment Objectives 

MGY277 (Introduction to Medical Microbiology) is a large, online, asynchronous course that serves a roughly equal split of second-, third- and fourth-year students who are primarily enrolled in life sciences programs. A course learning objective is to boost scientific literacy related to vaccine hesitancy by analyzing its causes, developing strategies to address it, and reflecting on biases towards vaccine-hesitant individuals. While maintaining this objective, Professor Hill adapted the assignment to incorporate generative AI, guiding students to create profiles of vaccine-hesitant individuals, engage in simulated dialogues with these profiles, and critically analyze the resulting conversations. 

Assignment Process 

  1. Profile Generation: Students use Microsoft Copilot, U of T’s institutionally approved Generative AI tool, to create profiles of vaccine-hesitant individuals, including demographic information and reasons for hesitancy. To guide their AI literacy skill development, students are provided with a sample prompt. 
  2. Conversation Simulation: Microsoft Copilot generates a dialogue between a vaccine-hesitant person and a friend trying to persuade them.
  3. Critical Analysis: Students evaluate the AI-generated conversation using evidence-informed strategies for addressing vaccine hesitancy.
  4. Source Evaluation: Students assess the credibility of sources used by Microsoft Copilot for profile generation. 
  5. Ethical Reflection: Students consider the ethical and social implications of using AI to generate profiles and conversations about vaccine-hesitant individuals, asking themselves: How accurate and realistic are the outputs? How might they affect my perceptions and attitudes towards vaccine-hesitant people? How might they influence my own decisions about vaccination? 
  6. Peer Discussion: Students compare their experiences and outputs with classmates on a discussion board. 

Future-Focused Student Skill Development  

This assignment aligns well with the University of Calgary’s STRIVE model for designing assessments that effectively incorporate generative AI. In particular, it exemplifies the goal of student-centeredness, as students are guided to engage with AI tools as a starting point for their learning, thereby promoting flexibility and critical thinking around AI-generated content. They are also encouraged to reflect on their own perceptions and decision-making processes regarding vaccination, which can foster awareness of self and others. In addition, this assessment aligns with the STRIVE model’s approach to integrity: it openly incorporates AI use into the learning process and specifies when and how students are to engage AI tools to support critical thinking. Students are guided both on how to cite AI-generated content and how to reflect effectively on AI tool limitations.

Student Feedback 

Professor Hill shares: “Feedback was collected from students regarding the use of Generative AI in the course specifically. The overall response was positive, with students expressing appreciation for its integration. For instance, one student remarked, ‘I liked the use of Bing AI/Copilot for assignment 1, as it was different to anything I’ve done in other courses. I also appreciated how we reflected on the validity and biases of the AI-generated responses. It seems as though a lot of my other courses are very against the use of AI, so I like the way it’s been introduced in this course as a means of success, rather than a means of cheating like the other courses make it out to be.’” 

Course: HIS393, Digital History  
Session: Fall 2023 
Instructor: Dr. Noa Yaari, Communication Instructor, Institute for Studies in Transdisciplinary Engineering Education and Practice (ISTEP), Faculty of Applied Science & Engineering, UTSG 

Assessment Objectives  

A learning objective for HIS393 (Digital History) was that students develop the ability to conceptualize a new business in the field of digital history, challenging them to think creatively and entrepreneurially in relation to real-world problems. In this assessment, Dr. Yaari guided students to develop a comprehensive business model; as part of this, they created a logo for their proposed business, and were allowed to use generative AI tools in a way that inspired critical reflection on the design process. This project aims to boost creativity, capitalize on historical knowledge and technological skills, and develop communication abilities. The assessment emphasizes realistic creativity, research skills, and the ability to articulate ideas clearly in both textual and visual formats.  

Assessment Process 

  1. Project Initiation: Students form teams of up to three members or work individually.
  2. Research and Conceptualization: Students identify and analyze at least two existing digital history projects, and use this analysis to conceptualize a new business in digital history.
  3. Business Model Development: Students complete a business model template, addressing key components including problem identification, customer segment analysis, unique value proposition, solution details, and revenue streams, among other categories.  
  4. Logo Creation (GenAI Use): Students are tasked with designing a logo for their proposed digital history business, with the option to utilize AI applications for this purpose. If opting for AI-assisted design, students must provide the full prompt or input used, describe the entire creation process in detail, and critically reflect on their experience with the AI-assisted design. This approach encourages transparency in AI use while inspiring critical thinking about the role of AI in creative processes. See examples of impressive logos that students have created with AI tools.
  5. Formatting and Presentation: Students ensure proper layout and multimedia integration.

Future-Focused Skill Development 

This assignment aligns well with the University of Calgary’s STRIVE model for designing assessments that incorporate generative AI. It exemplifies student-centeredness by encouraging direct interaction with AI tools for logo creation, allowing students to critically evaluate AI-generated content. The multi-component nature of the assignment, combining written analysis with AI-assisted visual design, ensures a comprehensive and valid assessment of students’ skills. In addition, by encouraging AI use for logo creation, the assignment provides equitable opportunities for all students to engage and develop their creativity and critical thinking skills, regardless of their graphic design abilities. As a whole, this assignment promotes multimodal engagement, reflective analysis, and the development of AI literacy skills. 

Student Feedback 

One student from the course, Hanne Gilbert Sandoval (Sofi), reflects on her Business Model, including the use of GenAI, in a 3-minute video she created for the final assessment in the course, the ePortfolio. As Sofi explains in the video, using text-to-image tools enabled her to move faster and more accurately toward communicating her vision and, eventually, to ponder the potential that new technology brings to History.

Dr. Yaari shares: “By incorporating AI into the logo design process, students not only learned about digital history and business concepts but also gained hands-on experience with emerging technologies that are likely to play a significant role in their future work environments.” 

Course: ECO225, Big Data Tools for Economists
Session: Fall 2023 
Instructor: Nazanin Khazra, Assistant Professor, Teaching Stream, Department of Economics 

Assessment Objectives 

The following assignment was part of the course, Big Data Tools for Economists (ECO225). The objective for this assignment was to familiarize students with conducting literature reviews using AI tools, enabling them to analyze and synthesize economic research effectively and to formulate research questions.  

By leveraging AI tools like GPT and Research Rabbit, students streamlined their research process, gaining insights into economic topics while supporting their ability to critically evaluate past studies and their methodologies. 

Assessment Process 

  1. Search and Identify Key Papers: Students begin by searching for key papers related to their research idea using databases like Google Scholar or academic journals. They identify core themes, methods, and findings from these papers.
  2. AI-Assisted Literature Mapping: Using the AI tool Research Rabbit, students upload these papers and generate a visual cluster map that shows the relationships between different research areas. The map helps students visualize where their research fits within the broader academic landscape, highlighting connections across fields. This visualization makes it possible to see how different fields intersect and to locate one’s own contribution within the relevant literature.
  3. Lit Review Table: Finally, students use ChatGPT to summarize the core aspects of each paper, including citation, data, method, and results. They create a table to organize this information, which serves as a foundational tool for their literature review. Students are required to verify all information and ensure the AI-generated summaries are accurate.

Future-Focused Skill Development 

This assessment aligns well with the University of Calgary’s STRIVE model, particularly in how it emphasizes transparency and supports responsibility. The exercise promotes transparency by openly incorporating AI tools like GPT and Research Rabbit into the literature review process, requiring students to specify how they engage with these tools and acknowledge their use. It also fosters responsibility by guiding students to verify AI-generated information and critically evaluate its limitations. This not only supports students’ current research capabilities but also prepares them for the ethical considerations of AI integration in future economic research. By learning to balance the benefits of AI assistance with the need for human verification and critical thinking, students develop crucial skills for maintaining academic integrity.  

Student Feedback 

Professor Khazra shares: “Since Gen AI technologies are still relatively new, many students are not yet familiar with the effective and appropriate use of them. Guiding and supporting students in this area helps them develop valuable skills for using Gen AI in both research and professional settings. This is a skill that should be incorporated into higher education. Students found these exercises helpful during and post class.

One notable outcome of this exercise was that students revisited their literature review multiple times throughout the semester. In many instances, students were bringing their lit review tables to office hours asking about interpretations or comparing their estimates to other papers’ findings. Interpreting one’s estimates in the context of existing literature is a challenging task, even for experienced researchers. They also took these skills to the job market and their internships.”

Course: ECO225, Big Data Tools for Economists
Session: Fall 2023 
Instructor: Nazanin Khazra, Assistant Professor, Teaching Stream, Department of Economics 

Assessment Objectives 

The following assignment was part of the course, Big Data Tools for Economists (ECO225). As with the literature review, this assignment aimed to familiarize students with using AI tools in the research process, in this case to help them formulate research questions grounded in their analysis and synthesis of economic research.  

Students learned to formulate narrow research questions that address gaps in existing literature and improve their skills in interpreting and presenting data visually.

Assessment Process 

  1. Initial Question Generation: Students begin by uploading a sample of their dataset or providing a detailed explanation of the data to ChatGPT. They then prompt ChatGPT to generate ten potential research questions based on the dataset. If they have ideas of their own, they can share them at this stage.
  2. Selection and Refinement: From the list of ten questions generated by ChatGPT, students select five questions they find most interesting and relevant. They are required to critically assess these questions by identifying potential challenges or limitations associated with each one. This step encourages students to think about the feasibility and scope of their research. 
  3. Final Research Question: After refining the list, students choose one research question to pursue. This exercise saves time and encourages deeper engagement, as students can quickly see a wide range of possibilities before narrowing their focus. Students refine this finalized question throughout the semester as they work on their paper and adjust based on their data work.

Future-Focused Skill Development 

As with the accompanying literature review assignment described above, this assessment aligns well with the University of Calgary’s STRIVE model. The exercise especially promotes equity by offering a space for personalized learning. In allowing students to generate and refine research questions based on their own datasets or interests, the assessment creates space for diverse approaches to engagement, reflection, and knowledge creation. This approach recognizes that students have unique ways of interacting with information, developing insights, and demonstrating understanding. By tailoring the research process to individual interests and datasets, students can explore topics through methods that resonate with their personal learning preferences, fostering deeper engagement and more meaningful outcomes. In addition, the exercise allows students to revisit and refine their work throughout the semester, accommodating different learning paces and styles, and ultimately supporting an inclusive learning environment.

Student Feedback 

Professor Khazra shares: “This is a direct quote from a student email in August of 2024: ‘While the programming skills I gained from ECO225 were undoubtedly invaluable [in my internship], I think the most important thing I took away from the course was how to use ChatGPT effectively and efficiently. I think that could [be] a great selling point for the course considering how in demand the usage of gen AI has become (and how bad most are at using it!)’”

Course: KPE360, Advanced Cardiorespiratory Physiology
Session: Fall 2024
Instructor: Robert Bentley, Assistant Professor, Kinesiology & Physical Education, UTSG

Assessment Objectives  

The introductory laboratory aims to familiarize students with the experimental environment and develop data collection skills essential for subsequent full laboratory experiments. In addition, the assessment introduces students to the use and critical evaluation of generative AI tools within scientific contexts. 

Assessment Process 

  1. Experimental Setup: Students set up equipment including PowerLab, blood pressure sensors, and ECG electrodes. 
  2. Data Collection: Students perform a series of trials, including Upright Rest (2 minutes), Squat-to-Stand (2 minutes rest + 1 minute squat + 2 minutes recovery), Stand-to-Lying down (2 minutes rest + 2 minutes lying down) and a Valsalva Maneuver (2 minutes rest + 10 sec Valsalva + 2 minutes recovery).
  3. Data Analysis: Students analyze heart rate and blood pressure data using LabChart software.
  4. Generative AI Component: Students use ChatGPT or Microsoft Copilot to generate a response to the question “In 250 words, explain why I feel light headed when rising from a squat?” They then critically evaluate and correct the AI-generated response based on provided scientific literature. 
  5. Report Writing: Students prepare a report including: Title Page; Methods; Written Results; Tables/Figures; and Generative AI Question response. 

Future-Focused Skill Development 

This laboratory assignment exemplifies key aspects of the University of Calgary’s STRIVE model, particularly in terms of student-centeredness and validity. The student-centered approach is evident in how the laboratory engages students directly with experimental equipment and data analysis software, promoting hands-on learning and critical thinking; students collect their own physiological data, analyze it using LabChart software, interpret the results, and analyze Generative AI output. This active engagement allows students to take ownership of their learning process and develop practical skills essential for junior scholars of kinesiology and physical education. The validity of the assessment is ensured through its multi-component nature and clear alignment with specific learning objectives. The inclusion of a generative AI component further supports the validity by requiring students to critically evaluate AI-generated content against scientific literature, developing their digital literacy and critical thinking skills. This comprehensive approach ensures that the assessment accurately measures a range of skills and knowledge relevant to the course objectives and future professional practice.   

Student Feedback 

Professor Bentley shares: “Overall, students appreciated the incorporation of generative AI, and the resulting development of AI literacy, given its rapidly developing relevance. Further, some students were surprised that the generated AI response was seemingly unrelated to the provided prompt, while other students thought the AI gave a reasonable layman’s explanation but lacked detail. It seems the takeaway by the students is that while generative AI may provide a starting point, critical assessment is required.”

Course: HMB474H1 F (Dental Sciences) 
Session: Fall 2024, Wednesdays, 1:00 PM – 3:00 PM, In Person 
Instructor: Morris F Manolson, Professor and Vice Dean, Research at the Faculty of Dentistry with a cross-appointment in the Faculty of Medicine 

Assessment Objectives 

The midterm assignment for HMB474H1 F (Dental Sciences) aims to develop students’ ability to comprehensively research and describe a clinical problem in dental sciences, while also enhancing their critical thinking skills through the evaluation of AI-generated content. By requiring students to write a literature review and compare it with AI-generated work, the assignment seeks to familiarize them with the use of generative AI tools in academic research and writing. Additionally, it aims to improve students’ understanding of literature review structure and scientific writing, while fostering awareness of the strengths and limitations of AI in producing academic content. This multifaceted approach is designed to prepare students for the evolving landscape of academic research and professional practice in dental sciences. 

Assessment Process 

  • Step 1: Student-written literature review (2,000 words)
    • Develop an outline with specific subtitles based on the provided structure 
    • Write a comprehensive literature review on an assigned clinical problem 
    • Include complete references 
  • Step 2: AI-Generated Content Evaluation (500 words) 
    • Use Microsoft Copilot to generate three outputs:
      1. 1,000-word general literature review
      2. 500-word detailed section on a specific aspect
      3. 500-word future directions section 
    • Critically evaluate the AI-generated content, comparing it to the student’s own work 
  • Step 3: Submit two PDF documents: the student’s literature review, and the critical evaluation and AI-generated content. 

Future-Focused Student Skill Development 

The assignment highlights key aspects of the University of Calgary’s STRIVE model, including student-centeredness and integrity. By using Microsoft Copilot to generate content related to their research topic, students develop flexibility and critical thinking skills around AI-generated content. The assignment promotes academic integrity by openly incorporating AI use, specifying engagement methods, and teaching proper citation of AI-generated content. Students are encouraged to reflect on their own perceptions and decision-making processes, fostering self-awareness and a deeper understanding of AI tools’ strengths and limitations in academic research. This approach not only maintains academic honesty but also prepares students for responsible AI use in their future academic and professional pursuits. 

Student Feedback 

Professor Manolson shares: “We have not received formal evaluations yet; they should be coming soon. I asked the class after the assignment whether they thought it was worthwhile, and the class was split right down the middle! The students who thought it was a waste of time were already using AI very effectively and got nothing new out of the assignment. The other half of the class had not used AI and really appreciated the fact that we were embracing this technology and helping them to use it effectively. ALL the students really appreciated the talk by the Google executive on how to use AI effectively. Great tips from the expert!”

Course: RSM230H 
Session: Fall 2024 
Instructor: Alexandra MacKay, Professor, Teaching Stream 

Assessment Objectives 

This assignment aims to develop students’ analytical skills in finance by examining the relationship between news events and equity prices. It integrates the use of generative AI tools to support the writing process, fostering critical thinking and improving students’ ability to work effectively with AI technologies. Through this assignment, students gain practical experience in financial data analysis, ethical AI usage, and academic writing, while also reflecting on the impact of AI on their learning and skill development in preparation for future careers in finance.  

Assessment Process 

  1. Schedule a meeting with a writing coach from the Rotman Commerce Centre for Professional Skills (RC-CPS).
  2. Research the stock price of DJT (Trump Media & Technology Group Corp) for the specified timeframe.
  3. Develop initial ideas about the relationship between the news events and stock price movements.
  4. Use an LLM to assist in writing the first draft of the essay (400-600 words).
  5. Meet with the RC-CPS writing coach to receive feedback on the AI-assisted draft.
  6. Revise the draft based on the writing coach’s feedback.
  7. Submit the final essay, addressing the market’s reaction to Trump’s social media post and interview, interpreting the financial data, and discussing potential future impacts.
  8. Later in the course, complete a second essay using a different approach: write the first draft independently without AI assistance, then use an LLM to critique, edit, and improve the self-written draft.
  9. Reflect on the two different approaches to using AI in the writing process and their impact on learning and skill development.

Future-Focused Skill Development

This assignment directly aligns with the University of Calgary’s STRIVE model for designing assessments that effectively incorporate generative AI, particularly the model’s approach to transparency and responsibility. It openly incorporates AI use into the learning process and specifies when and how students are to engage AI tools to support their writing. Students are guided to use AI for drafting and critiquing, encouraging them to be accountable for content creation and to recognize the potential for overreliance on AI. Furthermore, this assignment promotes integrity by requiring students to work with writing coaches and revise their AI-assisted drafts, modeling appropriate use of AI and critiquing AI-generated output for accuracy. The comparison between AI-assisted and self-written drafts also develops students’ metacognitive skills through self-reflection, aligning with the validity aspect of the STRIVE model.

Student Feedback 

What follows are anonymized excerpts from the reflections students submitted after the second individual writing assignment. 

Throughout this course, I can definitely say that using AI has been an interesting part of the essay writing process. Personally, I found that writing the second essay made me think about what I wanted to say, and I had to do background research to back up my statements and use AI in a way more similar to websites such as “Grammarly.” In contrast, using AI for the first essay was unique in the aspect that, although I did research to create a detailed prompt, I didn’t really have to worry about structuring the essay and was able to get a decent framework that I was able to add onto. However, I think AI does have its flaws. It has a lot of implicit biases, returns vague responses, and allows the writer to ‘write’ without really thinking about what they are trying to say. With the ever-growing demand and reliance on AI, I believe that AI will become an integral part of essay writing. Whether it is to streamline research, create outlines, check grammar, or even write the text for you, AI simply allows for a more efficient process. Ultimately, I still believe that it is important for students and professionals alike to continue to try their best to think creatively before relying on AI. If people began to use AI for everything with no second thought, there would be no differing viewpoints. AI should not replace humans but should be a tool that promotes our productivity. 

**  

In the first essay, I generated a draft with AI and then revised it based on feedback from my writing assistant. I appreciated the detailed feedback, but I sometimes felt like the comments were focused on ideas or content that didn’t feel entirely my own. This left me feeling that the final essay, while polished, lacked a bit of my personal voice and connection to the topic. Additionally, sometimes AI lacks the ability to do analyses in depth but rather makes shallow observations, which is again undesirable.  

For the second essay, the process was reversed, writing the initial draft myself and then using AI to refine it. This approach felt more authentic, as I could better express my personal perspective from the start. Using AI for feedback and editing allowed me to improve clarity and organization without compromising my individuality. The AI’s ability to highlight areas for improvement, suggest more concise phrasing, or point out gaps in logic was incredibly helpful. However, I still needed to adapt its suggestions to fit my style, ensuring the essay truly reflected my thoughts.  

I can now see AI as a useful collaborator rather than a creator. In the future, I plan to use it for brainstorming, refining structure, and polishing my writing, while keeping the core ideas and tone uniquely my own. This balance allows me to stay true to my voice while benefiting from AI’s strengths. 

** 

I want to say that I really enjoyed the use of AI and LLMs in this class and I think it is a very interesting concept that was very well done. I liked how this class embraced AI and let us learn how to use it instead of having the zero AI policy like other classes. It was also very interesting to see how they compare when they write it and you correct it and vice versa. My experience with getting it to write a draft was mostly positive except for the fact that there was a lot of fact checking needed. A decent amount of the information was incorrect and even though it sounded good, it wasn’t right. However, I think that making your own rough draft and getting AI to correct it is great. It has all of your original ideas and credibility, but corrects it, makes it flow better, uses more in depth sentences and gives great title options. I will definitely continue to use this method in the future since the outcome was very good. For prompts, I quickly realised that the more specific the better. You aren’t being mean to AI so give it whatever instructions needed. AI is a great resource and will undoubtedly be a huge part of writing in the future especially with how it keeps evolving. Lastly, we have to remember that we can’t strictly rely on AI and still need to learn how to do things ourselves. 

** 

In my experience, AI is a helpful, but imperfect tool when writing essays. For the first essay, I used AI to make an essay based on a series of prompts. While AI was able to develop an essay with strong points, I found structural issues. I found AI repeats the same words, when it may not be needed; it also overcomplicates sentences. Ultimately, I had to go over the essay to comb out errors. This shows AI is not the best tool for writing an essay, as you still have to do work to perfect it. For the second essay, I used AI as a corrective tool, feeding it my essay and an outline of what I wanted feedback on. In return, it fed me lots of feedback, with varying levels of clarity. I got feedback on my essay structure, and how to remake my essay into the word count. However, when it came to the actual content of my essay, it was confusing. It told me to add a personal connection to the essay but to take some of the examples I put in. I did end up taking its advice on my word count structure, and I did my best to follow its feedback on my content. In the future, I plan to use AI to aid me in creating an outline or starting point for me to follow. I found that it succeeded best in creating ideas and points, but not in doing proper writing. 

**  

After completing Individual Essays 1 and 2, I developed a greater understanding about the importance of ethical use of AI and LLMs for academic purposes. Prior to taking this course, I was afraid of even going near AI or LLMs because I feared any kind of use would mean committing an academic offense. However, I learned that as long as used ethically, they are useful for enhancing writing. When we were assigned the first individual essay, I liked using Microsoft Copilot to draft the essay first because of its convenience and timeliness. There were no concerns about how long the writing process would be, since all it took was a prompt for Copilot to generate the essay within a few seconds. Although this aspect was helpful, after reviewing the essay I realized the writing lacked a proper essay structure and wasn’t as detailed as I expected. If it were me who wrote the draft first, I would have conveyed my ideas in a more structured manner. The hassle of using AI to write the whole essay was editing its product afterwards, as everything was disorganized and very different from my style of writing. I also noticed the essay lacked detailed content. 

This experience has shaped my opinion on AI’s role in supporting essay writing. I discovered it is more beneficial to brainstorm your own ideas when generating a draft essay, and then consult AI/LLMs to review the work and provide feedback on how to improve it. That approach worked better for me in the second essay, and I believe it is better when AI helps with editing, for example by giving suggestions on how to make the text more concise. I anticipate using AI for support with essay writing in the future with this method in mind. I feel that when you choose to write your essay first, you are able to give your writing a unique voice and its own personality. 

In my experience with LLMs this semester, I found that they are extremely useful on one hand for helping revise and generate ideas, but addictive on the other. For the essays this semester, I found it convenient to draft the first essay with AI, and it wasn’t hard or stressful to read through and edit the AI draft, since it was already fluent. However, when it came to the second essay, it was harder to come up with my own draft than before, because of how easy the LLM had made the first one. Still, using the LLM’s ideas to help me edit my second essay was also helpful and timely, because the computer operates fast and points out the critiques. Using AI to write essays was helpful; however, it wasn’t easy to get precisely what you want. AI commonly generates material that is off topic, so you have to be very specific in the prompt you give it and instruct it to do what you want. In the future, I plan to incorporate AI into my writing; however, I will generate the primary ideas myself and only ask AI to do the “thoughtless” work, such as editing grammar. This way, I won’t be too reliant on AI but will still get the benefit of AI assistance. 

Learning Activity Examples

AI Chatbots for Large Undergraduate Biology Courses - Kenneth Yip, BIO130 & BIO230

Courses: BIO130H1: Molecular and Cell Biology and BIO230H1: From Genes to Organisms
Session: Winter LEC5101, Thursdays 6pm-9pm; Fall LEC0101, Tuesdays 12pm-1pm and Thursdays 1pm-2pm; Fall LEC5101, Tuesdays 6pm-9pm
Instructor: Kenneth Yip, Assistant Professor, Cell and Systems Biology

Learning Activity Objectives

The course-specific generative AI chat tools, named ChatBIO130 and ChatBIO230, aim to support student learning by:

  1. Providing personalized, on-demand assistance with course content and concepts
  2. Offering space for students to engage in self-directed practice that prepares them for summative assessments
  3. Integrating lecture and laboratory material to create a comprehensive learning resource
  4. Improving the overall student curricular experience in large molecular and cellular biology courses

Learning Activity Process

  1. AI Chatbot Interaction: Students engage with ChatBIO130 or ChatBIO230 as an optional resource throughout the course.
  2. Personalized Learning: The chatbots provide tailored explanations and examples based on individual student queries.
  3. Formative Assessment: Students can practice with multiple-choice and short-answer questions generated by the AI, receiving immediate feedback.
  4. Laboratory Integration: The chatbots incorporate laboratory material, connecting theoretical concepts with practical applications.
  5. Guided Usage: Students are provided with comprehensive documentation on appropriate chatbot use, limitations, and alternative resources.
  6. Continuous Improvement: The teaching team regularly updates the chatbots’ knowledge base and capabilities based on student feedback and usage data.

Fostering Inclusive Learning Environments

Overall, the integration of the virtual chatbots supports the creation of an inclusive and effective educational environment, which can be challenging in the context of large courses. First, the chatbots support student-centered learning by inviting students to explore course content at their own pace and focus on areas they find challenging, promoting autonomy and personalized learning. In addition, they support differentiated instruction: the chatbots’ ability to provide personalized responses accommodates diverse learning styles and needs, a cornerstone of inclusive education. Lastly, the chatbots give students a unique opportunity to actively engage with course material in real time, even in large classes where individual interaction with instructors may be limited. Students can ask questions, seek clarification, and explore topics in depth, promoting a more interactive and personalized learning experience despite the challenges of large class sizes.

Student Feedback

The implementation of ChatBIO130 and ChatBIO230 has demonstrated exceptional student engagement and educational impact, with particularly intensive usage during key academic periods such as pre-examination preparation. Because the tools are completely optional, their widespread adoption is especially noteworthy, suggesting that students find genuine value in the resource.

Survey responses highlight three key benefits:

  1. Reduced Academic Stress: A significant majority of students reported decreased anxiety levels when having access to the chat tools
  2. Accessibility: Students particularly value the 24/7 availability of the resource, allowing them to seek help at any time
  3. Learning Support: The ability to ask sequential questions enables students to develop deeper understanding through progressive inquiry

The overwhelming student support for continued development of these tools suggests that they are meeting a crucial need in large-format biology courses, particularly in providing personalized learning support at scale.

Creating a Personal Framework for GenAI Use - Alexandra MacKay, RSM230

Course: RSM230H 
Session: Fall 2024 
Instructor: Alexandra MacKay, Professor, Teaching Stream 

Learning Activity Objectives: 

The objectives for this activity focus on developing students’ ability to engage with generative AI in a thoughtful and ethical manner. The primary goal is for students to create a personalized framework that guides their use of GenAI in both academic and professional settings. This process is designed to enhance students’ self-awareness by prompting them to reflect deeply on their learning goals, core values, and career aspirations. Students are also expected to identify and evaluate the core competencies essential for achieving their academic and career objectives. A critical component of the activity is the requirement for students to assess the potential benefits and risks of using GenAI, particularly in terms of how it may augment their skills or lead to deskilling in certain areas. 

Learning Activity Process: 

  1. Students reflect on their short-term and long-term learning goals, core values, and career aspirations. 
  2. They identify core competencies required to achieve these goals and assess their current skill level in each.
  3. Students analyze how GenAI can augment their competencies or potentially lead to deskilling.
  4. They create a table (see below) outlining their goals, competencies, current skill levels, plans for AI use, and potential impacts.
  5. Students establish a timeline for reviewing and revising their personal framework.
  6. They identify resources for staying informed about ethical AI use and its impact on academic and professional development.
  7. Regular reflection and adjustment of the framework are encouraged to ensure alignment with evolving goals and AI advancements. 

Submit an Assessment or Learning Activity

We are asking U of T instructors how they engage with generative AI tools in their teaching. As this section grows, we will include brief profiles and examples of assessments and offerings.

If you would like to share an assessment that uses generative AI, or an example of how you and your students engage with generative AI in your course, please complete this online form.

There are a growing number of generative AI tools available, and the capabilities of these tools are evolving at a rapid rate. Currently, Microsoft Copilot is the recommended generative AI tool to use at U of T. When a user signs in using University credentials, Microsoft Copilot conforms to U of T’s privacy and security standards (i.e., it does not share any data with Microsoft or any other company). In addition, Contact North AI Tutor Pro and Contact North AI Teacher’s Assistant Pro conform to U of T’s privacy and security standards. Please be aware that any other generative AI tool that has not been vetted for privacy or copyright concerns should be used with caution within a U of T course or organization. If you would like to learn more about the tools available in U of T’s academic toolbox, please visit ARC’s New Tools.

Back to Top