GenAI in Education

What does the future look like?

Opinions on AI’s role in education vary widely. Some envision it revolutionizing teaching and research, bringing personalized learning to every student. Others see it as a useful but limited tool for automating repetitive tasks. Marc Natanagara, an experienced educator, encourages educators to embrace AI as a “co-teacher” rather than resisting it. This approach shifts the learning focus toward skills AI can’t replicate: critical thinking, creativity, and emotional intelligence. His message is to rethink how you teach and prepare students for a future where human and AI skills work together, not in competition.

To make this work, Natanagara suggests building lessons around activities such as debates, group projects, hands-on problem-solving, and real-world applications of concepts. For instance, instead of just assigning an essay on climate change, have students collaborate on a project to analyze local environmental data and propose solutions. Use AI tools to help them gather information quickly, then shift the focus to discussion and creative problem-solving in class. This way, AI becomes a support tool, not a replacement for real learning. Although this may seem like more work initially, the lesson-planning section can help standardize the process. This transition leads to students who are actively engaged and excited to learn, transforming dull lectures into dynamic, in-depth learning experiences.

Optional Video: Natanagara, The Future of Education

Some believe that AI is confined to certain fields and cannot be used in engineering. Research by TU Delft shows it has applications in every domain. While LLMs currently perform best on basic bachelor-level questions, that’s just the beginning. Their rapid progress suggests it won’t be long before they can handle more complex problems, making them a tool engineers can’t afford to ignore. Preventing students from working with AI now only sets them back later, when these skills are expected in their careers.

Where do we draw the line?

Research shows that detecting AI-generated content is extremely difficult, and that’s not likely to change anytime soon. AI detectors are unreliable, especially as LLMs improve and students blend AI output with their own work. The bigger issue is false positives: students who haven’t used AI can still get flagged, leading to unfair consequences. Since AI use is already widespread, the question is: how should teachers respond? As Erik Winerö highlights, educators are on a tightrope, balancing the potential benefits of AI against the risks of misuse.

Optional Video: Winerö, Cheating or Learning?

First, it’s important to decide where you draw the line. Harvard’s AI policy offers a good starting point for thinking this through, and TU Delft is currently developing its own university-wide guidelines for AI use in education. These provide a helpful framework for regulating AI in a fair way, but this approach is mostly reactive: it focuses on limiting AI rather than adapting to it.

A more effective strategy is to rethink how assignments are designed. The goal isn’t to ban AI but to make tasks that are harder for AI to replicate. Small changes can go a long way. For example, instead of assigning a generic essay on a broad topic, ask students to reflect on their own experiences. AI struggles with context and can’t replicate personal insights or critical thinking. The more familiar you become with these tools, the more intentional you can be in how you use AI to improve learning and keep students motivated.