Dear TBL Community,
I am writing to open a discussion on a challenge that many of us are facing in our classrooms: the impact of generative AI on the Team Application (4S) phase.
As we know, AI is no longer just a separate tool; it has become the default interface for information. With AI-generated summaries now appearing at the top of every Google search, our students’ first point of contact with any query is often a synthesized answer rather than a set of raw sources.
Since Team Applications are traditionally “open-resource” activities (books, notes, and internet access), this raises a fundamental question about the integrity of the 4S framework.
A common suggestion is to ask students to generate an AI response and then “critique or correct” it. However, I find this approach somewhat limited, and perhaps insufficient for deep learning.
How can we design 4S activities that remain stimulating and challenging when a plausible (though not always accurate) solution is only a prompt away?
Should we find new ways to weave AI into the “Specific Choice” and “Significant Problem” components, or should we consider returning to “analog” environments (paper-based activities without internet access) to prevent uncritical reliance on AI and ensure genuine cognitive effort?
I would love to hear your thoughts on:
- Have you developed specific 4S designs that are “AI-resistant” or, conversely, “AI-enhanced”?
- Are there resources or best practices for maintaining the “challenge” level during Team Applications in this new landscape?
- Do you believe that removing internet access during the 4S phase is a step backward, or a necessary move to protect the TBL process?
I look forward to learning from your experiences and hearing how you are adapting our beloved TBL framework to these changing times.
Best regards,
Marina Di Carro
Associate Professor of Analytical Chemistry
University of Genoa, Italy