Prompt Engineering 101: Optimize the use of AI in TBL Teaching

:wave: We’re just a few days away from our event, Prompt Engineering 101: Optimize the use of AI in TBL Teaching with @znoel!

:film_projector: Check out this short clip to catch a glimpse of what you can get from this session:

:bulb: Here’s what you can expect to learn:

  1. Find out what Prompt Engineering is: Understand the basics and benefits of creating effective prompts that guide AI to produce the desired output.
  2. Get Hands-On Practice: Gain practical experience in creating prompts that improve Team-Based Learning activities using ChatGPT.
  3. Discuss Real-World Applications: Explore how AI-driven prompt generation techniques can be used in different educational tasks, such as creating questions.

Do you have any other questions about the session? Reply to this thread :point_down:

Feel free to contribute to this Google Sheet on how educators are using AI for teaching. Here is the link: CognaLearn Workshop: AI - Google Sheets


:star_struck: Key takeaways

3 steps for effective prompt engineering (see the sketch after this list):

  1. Role: Define the role, or a persona, to tailor the ‘flavor’ of the output
  2. Input: Create clear instructions
  3. Output: Define the length, audience, style & tone, and format
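
To make the three steps concrete, here is a minimal sketch of how a Role, an Input, and an Output specification could be combined into one request through the OpenAI Python SDK. The model name, persona, and task wording are placeholders of mine, not from the session:

```python
# A minimal sketch: Role, Input, and Output spec combined into one request.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set;
# the model name, persona, and task wording are placeholders, not workshop content.
from openai import OpenAI

role = ("You are an experienced health-professions educator who designs "
        "Team-Based Learning (TBL) activities.")
task = ("Write one multiple-choice question on acid-base physiology "
        "for a readiness assurance test.")
output_spec = ("Keep the stem under 80 words, write for first-year students, "
               "use a neutral academic tone, and return the stem, four options, "
               "and the correct answer as a bulleted list.")

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": role},                      # 1. Role / persona
        {"role": "user", "content": f"{task}\n\n{output_spec}"},  # 2. Input + 3. Output
    ],
)
print(response.choices[0].message.content)
```

The same structure works typed straight into the ChatGPT window; the code only makes the three parts explicit.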

Best practices for optimizing outputs (illustrated in the sketch after this list):

  1. Be specific - consider things like context, tone, and format
  2. Provide examples or illustrations
  3. State what to do rather than what not to do (AI models usually respond better to positive rather than negative language)
  4. Understand limitations in the model you’re using – different models, different limitations
  5. Trial & error
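
As a quick illustration of points 1–3, the sketch below builds a prompt that is specific about context, tone, and format, includes one worked example, and phrases the instruction positively. The sample item is invented for illustration only:

```python
# A sketch of best practices 1-3: specific context/tone/format, one worked
# example, and positively phrased instructions. The sample item is invented.
example_item = (
    "Example of the style I want:\n"
    "Stem: A patient with chronic vomiting presents with muscle weakness. "
    "Which acid-base disturbance is most likely?\n"
    "A) Metabolic alkalosis  B) Metabolic acidosis  "
    "C) Respiratory acidosis  D) Respiratory alkalosis\n"
)

prompt = (
    "Context: a readiness assurance test for a first-year TBL module on renal physiology.\n"
    "Task: write three multiple-choice questions in the same style as the example below.\n"
    "Tone: concise and clinical. Format: numbered list, four options each.\n"
    # Positive phrasing ("make every distractor plausible") rather than
    # "do not write silly options".
    "Make every distractor plausible and clinically relevant.\n\n"
    + example_item
)
print(prompt)
```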

Some tips from our participants (several are combined in the sketch after this list):

  1. Always ask the GPT to provide a detailed explanation for each answer option generated; this is helpful for facilitators because it also provides ideas to prompt discussion
  2. Go from broad to specific when writing prompts; being very explicit generates higher-quality output
  3. Define acronyms like TBL and 4S to make sure that you’re giving the right context to the AI
  4. Use backward design to create the whole TBL module with AI, instead of using it just to create the application part of TBL, as this helps prevent hallucinations (when an AI model generates incorrect information but presents it as if it were fact)
  5. Add the instruction “All options should be plausible and/or comparable in complexity” when generating applications, because if the output has a single obviously best answer, it will not create the discussion that educators aim for in their application questions
  6. Specifying academic levels also helps with generating the desired output
  7. Uploading materials such as the textbooks referenced for creating teaching materials with AI also helps reduce hallucination, because it grounds the output in content specific to the textbook and reduces the time needed to correct or sanitize GPT’s output
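
Several of these tips can be baked straight into the prompt text. Here is one possible way to combine tips 1, 3, 5, and 6; the glossary and all wording are mine, not from the session:

```python
# Sketch of participant tips 1, 3, 5, and 6 folded into a single prompt:
# define acronyms, state the academic level, require plausible options,
# and ask for an explanation of every option. Wording is illustrative only.
glossary = (
    "Definitions: TBL = Team-Based Learning; "
    "4S = Significant problem, Same problem, Specific choice, Simultaneous report.\n"
)

prompt = (
    glossary
    + "Write one TBL application question for second-year nursing students.\n"
    "All options should be plausible and/or comparable in complexity.\n"
    "Provide a detailed explanation for each answer option so the facilitator "
    "can use it to prompt team discussion.\n"
)
print(prompt)
```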

Want to share more tips or discuss prompt engineering further? Comment in the thread below! :hugs:


Eventbrite link does not contain a booking link - please can you confirm joining details for the session?

Many thanks

Laura

Hi @Heblau, thanks for pointing it out; I’ve updated it. You can also click on the link here to register for the event. Let me know if it works for you! :slight_smile:

Hi everyone! As the workshop is ongoing, feel free to contribute to this Google Sheet on how you are using AI for teaching. Here is the link: CognaLearn Workshop: AI - Google Sheets

I wonder if Sandy Cook’s (@SandyCook) article below could be used to help with prompt engineering:

Writing Effective Multiple-Choice Questions for Readiness Assurance Process in Team-based Learning (intedashboard.com)

Basic Rules for Constructing Effective Multiple-Choice Items

Now let’s explore some basic rules that can elevate the quality and impact of your questions:

  1. Focus each item on an important concept, principle, or complex idea, and use real-world examples. Avoid trivial facts.
  2. Pose clear questions in the stem that can be answered without seeing the answer choices, avoiding irrelevant material.
  3. Avoid negatively stated stems unless necessary for specific learning outcomes, such as identifying dangerous practices.
  4. Ensure all alternatives are plausible, mutually exclusive, and of the same category.
  5. Avoid overlap in content, present alternatives in a logical order, and maintain consistent length and clarity.
  6. Exclude options like ‘all of the above’ or ‘none of the above’, as they may allow students to guess correctly based on partial knowledge.
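
One way to put these rules to work when prompting ChatGPT, as suggested above, is to embed them in a system message so every generated item is checked against them. A rough sketch, with the rules paraphrased:

```python
# Sketch: the six item-writing rules embedded as a system message so the model
# applies them to every generated question. Rule wording paraphrased from above.
MCQ_RULES = """You write multiple-choice items and always follow these rules:
1. Focus each item on an important concept and use real-world examples; no trivia.
2. The stem must be answerable without seeing the options; omit irrelevant material.
3. Avoid negatively stated stems unless the learning outcome requires it.
4. All alternatives must be plausible, mutually exclusive, and of the same category.
5. Avoid content overlap, order alternatives logically, keep them similar in length.
6. Never use 'all of the above' or 'none of the above'."""

messages = [
    {"role": "system", "content": MCQ_RULES},
    {"role": "user", "content": "Write two MCQs on glomerular filtration "
                                "for a readiness assurance test."},
]
print(messages)  # pass this list to whichever chat-completion call you use
```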

Shreya

I wasn’t able to get onto Zoom. Is the workshop being recorded?

Hi! Yes, we will send you the recording. Alternatively, you can join now: https://us06web.zoom.us/j/86836681853

Thanks for the acknowledgment :stuck_out_tongue_winking_eye:

Sandy COOK, PhD, Professor Emeritus

Duke-NUS Medical School, Singapore

Trainer/Consultant – Team-Based Learning Collaborative

Ideas from the workshop about prompting AI to make better MCQs for applications (assembled into a reusable sketch after this list):

  • Specify your role
  • Specify the learning objectives
  • Specify the levels of the students
  • Specify the number of choices
  • Specify the length
  • Provide a scenario / case example
  • Suggest the Sandy Cook MCQ rules
  • Consider a Bloom’s Taxonomy level
  • Instruct it so that there is no obviously right or wrong answer
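
Pulling that checklist into a small helper makes it easy to reuse across modules. The function below is a hypothetical sketch; all parameter names and wording are my own, not from the workshop:

```python
# Hypothetical helper that assembles the workshop checklist into one prompt.
# Every parameter name and the default wording are illustrative assumptions.
def build_mcq_prompt(role, objectives, student_level, num_choices, scenario,
                     bloom_level="apply"):
    """Return one prompt string covering the checklist from the workshop."""
    objective_lines = "\n".join(f"- {o}" for o in objectives)
    return (
        f"{role}\n"
        f"Learning objectives:\n{objective_lines}\n"
        f"Students: {student_level}.\n"
        f"Write one application question with {num_choices} options, "
        f"targeting the '{bloom_level}' level of Bloom's taxonomy.\n"
        f"Base it on this scenario: {scenario}\n"
        "Follow standard MCQ-writing rules (plausible, mutually exclusive options). "
        "There should be no single obviously right or wrong answer, so that teams "
        "have something to debate.\n"
    )


print(build_mcq_prompt(
    role="You are a TBL facilitator in an undergraduate pharmacology course.",
    objectives=["Select an appropriate first-line antihypertensive."],
    student_level="third-year pharmacy students",
    num_choices=4,
    scenario="A 58-year-old patient with newly diagnosed hypertension and type 2 diabetes.",
))
```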

So sad I missed the start of the session - it looked amazing. I will watch the recording though. I am really keen to continue this discussion and hear about all the different ways people are using AI / TBL
