Solving the AI Integration Dilemma
From Needs-Based Solutions to Play-Driven Innovation
“Man is only fully human when he plays.”
Schiller, Letters on the Aesthetic Education of Man (1794)
Summary
AI integration will only be synonymous with progress if it helps address real needs and actually improves education
However, AI integration is needed for educators to realize how transformative these new technologies can be
Innovation Design and a play-based approach might therefore be preferable to Design Thinking, as they make AI integration truly revolutionary
AI Integration: Beyond Design Thinking
Whenever a new technology emerges, there is always a risk that it will find its way into the classroom (and schools in general) simply because it is original and trendy. It therefore seems like common sense and good practice to follow a simple principle: tech integration should not be about the technology and what it can do, but about students, teachers, staff, and administrators, and what they need.
The idea that GenAI integration, for instance, will only be synonymous with progress if it solves actual problems would lead us to adopt a Design Thinking approach, where the starting point is to “empathize” with end users, rather than to showcase the prowess of Large Language Models or to “play around” with the latest application.
Yet, while it may seem like a good idea, starting with people’s needs can prove as problematic as integrating this technology merely because it is fascinating and available.
For one thing, when it comes to such a new and fast-developing technology, people do not know what they do not know. Few teachers knew that they needed chatbots like Sherpa to automate viva voces, just as few administrators knew that they needed “zaps” allowing them to think out loud and have a series of agents turn their voice recordings into polished observation feedback emails aligned with their school’s philosophy of teaching and learning — simply because they did not know that this was even possible.
But new technologies do not simply create opportunities that were unimaginable before. In doing so, they also shed light on habits and norms that have become so natural to us that we fail to see their limits and flaws. Starting with people’s needs, we might only end up with GenAI tools that speed up the creation of generic slides and the grading of traditional essays. By solving existing problems, what this type of AI integration really does is enable us to salvage and perpetuate problematic systems and practices. In a nutshell, it treats the symptoms — and lets their root causes worsen.
AI Integration and Innovation Design
It seems we have a double problem, or a contradiction, here, which we can call the “AI Integration Dilemma”: AI integration should be guided by what people really need, and not simply by what this technology makes newly possible — yet it is only by playing around with its latest developments and applications that we can come to realize “what we didn’t know we needed” and, even more importantly, “what we only assumed we needed”. As is often the case, however, two problems can make for one solution, as these issues can cancel each other out.
This is very much in line with Innovation Design, a radical approach according to which:
True innovation does not come from identifying and solving people’s problems, because these problems are rooted in “the way we do things” — entire “worlds” of habitual behaviors, shared beliefs, and familiar structures that need to be dismantled for these problems to disappear into irrelevance.
True innovation does not come from ideating and testing solutions, because these solutions would still be rooted in “the way we think” and the limiting assumptions that frame these worlds.
True innovation comes from intentionally deviating from norms and assumptions, playing around with random opportunities, and emulating the natural process of evolution by taking advantage of the entirely new possibilities they create.
The Case of AI Misuse
AI “cheating” is a good example. When ChatGPT became a household name, the immediate concern in education was that students would misuse this new technology to do their work for them, thus engaging in academic dishonesty and bypassing the intended learning.
Now, imagine two schools. Adopting a traditional Design Thinking approach to AI integration, the first starts by identifying a need (preventing AI misuse) and then develops solutions, such as using AI detectors or returning to AI-proof in-class exams.
This is a good example, because the problem (AI misuse) is rooted in a world dominated by habits (such as assigning take-home essays) and beliefs (for instance, that students must complete the assigned work unaided in order to demonstrate their learning). Trapped within these assumptions, the proposed solutions leave this world intact (when the AI revolution makes it outdated), and indeed attempt to salvage it (when the AI revolution shows it to be inherently flawed), thus wasting an opportunity to be truly innovative and to change the world of education.
The second school, however, espouses Innovation Design. Encouraging its teachers to “play around” with these new GenAI tools, it allows for small experiments — or games, really. The first rule of a game is always to create its own reality and set up a new stage:
“What if I let my students use AI to write their essay and ‘do all of the work’ for them?”, a teacher from this school might wonder. “Does that necessarily mean that they wouldn’t achieve the learning objectives, or that it would be impossible to tell? We always assumed this to be the case — but is it still the case? Was that true by definition — or simply because of the limited technology we had access to at the time? I’ve been having fun with this new app, Sherpa, which interviews students to check their understanding of a piece of writing. Now, wouldn’t that allow me to assess the learning they have gained while writing their essays, regardless of the extent to which they have used AI to do so? Wasn’t that the point of assigning the essay in the first place: not to ‘do the work’, but to learn in the process? Is writing an essay the only or even the best way to develop the targeted understandings and skills? What about seeing an AI do it (Social Learning), managing its work (Meta-Cognitive Prompting), reading its product and explaining it (Protégé Effect) or criticizing it (Dialectical Learning)? Granted, an important goal was to enhance students’ ability to organize their thoughts logically and to communicate them clearly — but wouldn’t an automated oral examination do that, and do so in a way that is more relevant to the new world in which students will have to live?”
Reimagining Education
In this new world, teachers do not assign work to be completed and authenticated, but create playful, performative learning experiences where students can take full advantage of the help provided by AI, all while developing the targeted understandings and skills (or rather new, more meaningful ones, including AI literacy), and being able to practice and demonstrate them in real-life contexts.
Just like many other innovative options, assessing students through 1:1 conversations (or by critiquing a witness, debating an expert, teaching a virtual peer, prompting an assistant, or crafting a dedicated agentic flow) did not cross most educators’ minds (at least as a regular strategy and scalable solution) because, until we played around with new AI toys, we remained prisoners of limited and flawed ways of doing and thinking that are now becoming obsolete.
The AI revolution invites us to reinvent the world of education. The same was true of previous technological disruptions, but advances in artificial intelligence are arguably different. Previous innovations, such as iPad apps, or even the popular web-based tools that became every teacher’s favorite during the pandemic (Padlet, Quizlet, EdPuzzle, and so on), were powerful tools, but clearly defined in their functionalities. AI technologies, however, make it possible for educators to develop their own innovations — be it by designing an assistant to behave a certain way based on a given body of knowledge, or by crafting an information processing flow for whatever input and output they want. In that sense, “GenAI” is not so much a tool, or set of tools, as an energy source, like electricity — the ability to leverage computer-powered cognitive processes, for any purpose, with natural language.
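To make the idea of an educator-built “information processing flow” concrete, here is a minimal sketch of the voice-note-to-feedback-email example mentioned earlier. Every name in it is hypothetical, and `call_llm` is a stand-in for a real GenAI API call — here it simply labels each transform so the chain of steps stays visible:

```python
# Minimal sketch of an "information processing flow": a voice note goes in,
# a polished observation email comes out. Each step wraps one model call.

def call_llm(instruction: str, text: str) -> str:
    """Placeholder for a real model call; labels the transform it would apply."""
    return f"[{instruction}] {text}"

def transcribe(audio_note: str) -> str:
    # Step 1: raw voice note -> transcript.
    return call_llm("transcribe", audio_note)

def draft_feedback(transcript: str, philosophy: str) -> str:
    # Step 2: transcript -> draft aligned with the school's teaching philosophy.
    return call_llm(f"draft feedback consistent with {philosophy}", transcript)

def polish(draft: str) -> str:
    # Step 3: draft -> polished, courteous email.
    return call_llm("polish into an email", draft)

def observation_flow(audio_note: str, philosophy: str) -> str:
    # The "flow" is nothing more than function composition over model calls.
    return polish(draft_feedback(transcribe(audio_note), philosophy))

email = observation_flow(
    "strong questioning, but pacing rushed at the end",
    "inquiry-based learning",
)
print(email)
```

The point of the sketch is the architecture, not the stub: swapping `call_llm` for an actual model endpoint is all it takes to turn this composition of small, named steps into a working “zap”.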
Playing Safe
While play allows us to explore this newfound power and its revolutionary potential in education, it of course remains paramount that these trials be safe — another AI Integration Dilemma, as we need to combine free experimentation with robust guardrails.