Use AI to Create Interactive Games and Bring Textbook Stories to Life


一泽Eze

Matrix Featured Article

Matrix is the writing community of SSPAI, where we encourage sharing authentic product experiences, practical insights, and thoughtful reflections. We regularly feature the finest Matrix articles to showcase genuine perspectives from real users.

The article represents the author’s personal views; SSPAI has only made minor edits to the title and formatting. Featured on the Matrix homepage.


A teacher used AI to create an interactive game based on “Lin Daiyu’s First Visit to the Jia Mansion.”

Seeing something like this, were you about to scroll past? Another flashy AI showcase, right? But after reading the full piece, I paused for a long time. Not because of any particularly complex technology, but because I suddenly realized: after talking about AI Coding for an entire year, maybe we’ve been looking in the wrong direction.

The game itself is simple: students take on the perspective of Daiyu, guiding the direction of the story, with each scene accompanied by one illustration.

But when I put myself back into the mindset of my student days, I instantly understood the charm.
Instead of passively listening to the story, students experience its progression, and can even explore “what if I made a different choice” possibilities.

However, after seeing the forty-plus rounds of prompts pulled back and forth behind this project, I noticed a problem:

To build this game, the creator had to constantly switch between coding platforms and AI image-generation tools, going back and forth in dialogue. So is there a way to make the process simpler?

In other words: what if making an interactive game like this could be as simple as writing a single sentence? What would happen then? With that question in mind, I tried another method.
The result was this — an interactive courseware format that fits classroom teaching remarkably well.

It can also look more like an immersive story-driven game:

Yes — from inputting the idea to getting the complete game, there’s no manual image generation, no switching between multiple tools, and no adjusting code or matching assets.

All of it comes from one single prompt.
And in this article, I’ll share the entire method with you, along with two style templates.

📍 Starting Here

The core design of this method is simple: choose the right scenario, give the AI more room to operate, and let it reach its upper limits of intelligence.

For implementation, I used two main tools:

  • Claude Code + Skill: Claude Code is an agent framework that provides the plan-and-execute action space; Skill can be thought of as a capability pack that, for this task, guides the AI through image generation.
  • Doubao Seed-Code Model: ByteDance’s latest model and the first domestic multimodal coding model. It drives the agent, completes the game development, and provides multimodal understanding so the AI can “interpret” the generated images and adapt UI design.

Using them, the entire process of “creating an interactive game with one sentence” is automated:

  • Provide the plot text: You can simply give a title and let the AI recall world knowledge, or provide the original story directly.
  • Identify key scenes: The AI recognizes narrative turning points and splits the story into 5–10 key moments.
  • Design scene illustrations: AI-generated images are the natural fit here. Traditionally, users had to craft style-consistent image-generation prompts, download the images from a separate platform, then upload them into the coding tool — a time-consuming workflow.
  • Game development: Includes designing scene options and feedback, implementing interactions, performing multimodal analysis of the illustrations, extracting stylistic elements, and unifying all UI components.

If any of this looks confusing, don’t worry — and don’t let the black-window command line scare you. Just follow the guide below and, even with zero AI background, you can use top-tier agent workflows to create these games with a single sentence.

1️⃣ Install Claude Code

Although Claude Code is very easy to use — and I’ve covered installation many times — new readers might need a refresher. If you already installed it, feel free to skip ahead. Open the Terminal/Command Line tool on your computer:

Follow the official installation guide https://code.claude.com/docs/en/quickstart#native-install-recommended to complete the Claude Code installation.

Not sure how? No worries—send the following prompt to any AI and it will walk you through the entire process step by step.

Using the information below as a reference, guide me step-by-step to install this program in the terminal on [Mac / Windows / Linux]: [paste the installation instructions from the link above here] If I run into questions or errors, I will send you the terminal logs — please help me troubleshoot and resolve them.



If there’s an error, just send it a screenshot—most issues can be resolved easily. You can also ask the AI, “I’m on Mac / Windows—how do I open my terminal?”

After installation, type claude --version in the terminal. If you see a version number, the installation was successful.

2️⃣ Configure the Doubao Seed-Code Model

This time, we’re choosing the Doubao Seed-Code model to power Claude Code mainly because:

On one hand, after testing it over the past two days, the compatibility between Doubao, Claude Code, and Skills is excellent. I haven’t yet encountered any failed Agent actions.

On the other hand, as the first domestically developed multimodal coding model, it finally lets us use a homegrown model to analyze game visual assets and automatically design a matching UI.

  1. Before starting, it’s recommended to create an empty project folder—say, test—and navigate to it in your terminal:

This keeps Claude Code’s AI actions restricted to that directory, reducing the risk of affecting other files on your machine.

  2. Replace the model with Doubao-Seed-Code by entering the following in your terminal:

export ANTHROPIC_BASE_URL=https://ark.cn-beijing.volces.com/api/compatible
export ANTHROPIC_AUTH_TOKEN=【Replace with your Volcano Ark API Key】
export ANTHROPIC_MODEL=doubao-seed-code-preview-latest
claude

This operation temporarily switches the model to Doubao-Seed-Code within the current terminal window only. (After closing the window, you must re-run these commands to re-specify the model API and key.) You can obtain a Volcano Ark API Key by applying at https://console.volcengine.com/ark/region:ark+cn-beijing/apiKey.
To use the model, you will need to top up your account balance on the Volcano Ark platform.
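If you'd rather not re-enter these variables every time you open a new terminal, you can put them in your shell profile instead. A sketch for bash/zsh; the key value is a placeholder you must replace with your own:

```shell
# Add these lines to ~/.zshrc (macOS default) or ~/.bashrc so every new
# terminal session points Claude Code at the Doubao model automatically.
export ANTHROPIC_BASE_URL="https://ark.cn-beijing.volces.com/api/compatible"
export ANTHROPIC_AUTH_TOKEN="your-volcano-ark-api-key"   # placeholder: use your real key
export ANTHROPIC_MODEL="doubao-seed-code-preview-latest"
```

After editing the profile, run `source ~/.zshrc` (or open a new window), and from then on simply typing `claude` starts the configured model.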

3. After sending the above commands, if you see the screen below, then it’s working:

3️⃣ Configure the Image-Generation Skill

This is the final step of the preparation process. Once completed, your Agent will gain the ability to generate its own visual assets for the game. To achieve this, we’ll use a Skill package—you can think of it as a “capability plugin” installed for the AI.

I created a Skill called “seedream-image-generator”, which teaches the AI how to call ByteDance’s Seedream 4.0 image-generation API to create and download AI-generated images. The Skill is open-sourced on GitHub:
https://github.com/eze-is/seedream-image-generator

To let Claude Code use our Skill, you need to place the seedream-image-generator Skill archive inside the /.claude/skills/ directory of your current project folder.

You can download the Skill archive manually and place it into the folder yourself (the image below shows what the correct Skills directory configuration looks like):
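For the manual route, the shell equivalent is just a clone into that directory. A sketch, assuming git is installed; it keeps only the files the skill needs:

```shell
# Create the skills directory Claude Code scans, then fetch the skill into it.
mkdir -p .claude/skills
git clone --depth 1 https://github.com/eze-is/seedream-image-generator \
  .claude/skills/seedream-image-generator
# The skill only needs its own files, not the repo metadata or README.
rm -rf .claude/skills/seedream-image-generator/.git
rm -f  .claude/skills/seedream-image-generator/README.md
```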

Or you can let Claude Code do the work by sending the following instruction:

Download the contents of https://github.com/eze-is/seedream-image-generator, excluding README.md and .DS_Store, and place them under the path /seedream-image-generator/ inside the current directory’s /.claude/skills/

The AI will request execution permissions from you along the way—most of the time, you can simply confirm with “Yes.”

When you see:

At this point, all the preparations are complete. You can now start using the prompt templates in the following section to create an interactive game with a single sentence.

💡 Let’s Begin: Your Interactive Game Creation Guide

Now that everything is set up, we can begin creating our own interactive game.

The core command structure works like this: you can send instructions to the Agent step by step (that’s how I created the example below—doing it this way also helps you better understand the Agent’s logic).

You can also scroll further down to the “Treasure Prompt Templates” section. There, you’ll find the optimized prompt templates I prepared for you—perfect for generating similar games in one go (more effortless, ideal for everyday use, with more detailed operational guidance):

1) Multi-round prompting approach (you may skip to the next section to grab the template)

The first priority is to specify the main generation goal of the game: to create an HTML-based game in which the player enters the scenario and experiences the process of [a certain character] [doing something], designed to evoke [certain emotions / social atmosphere / other essential experiential elements].

The game content refers to [describe the plot here: you may paste the original text directly; if it’s a well-known literary work, you may simply describe the story title and let the AI recall it on its own]. The game requires a total of X images, to be generated using the seedream-image-generator skill and embedded into the game page. All images should follow a unified visual style prompt.

By the way, when generating AI images, the Agent will ask you again for the Volcengine Ark API_KEY—the same one we provided at the beginning. Just follow the console instructions when prompted. Note that image generation is billed by usage, so make sure your Volcengine Ark account has sufficient balance.

When it comes to detailed prompting, you can control the number of choices: each scene should have 3 different options that simulate how the character might act in that situation. Only 1 option aligns with the original text (i.e., the correct choice), while the other 2 are incorrect. After the player makes a selection, provide game feedback, indicating whether the choice is correct and explaining the reasoning. This gameplay structure helps enhance immersion and deepens the player’s understanding of what the character is experiencing.
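To make that structure concrete, here is a hypothetical sketch of the per-scene record the agent ends up embedding in the game page: three options, exactly one correct, each with its own feedback. The file name and field names are illustrative, not something the agent literally emits:

```shell
# Hypothetical per-scene record; the agent bakes an array of these into the HTML game.
cat > scene-example.json <<'EOF'
{
  "scene": 1,
  "image": "pic/scene-01.png",
  "narration": "Daiyu steps ashore and sees the bustling capital streets for the first time.",
  "options": [
    { "text": "Observe quietly and follow the servants' lead",
      "correct": true,
      "feedback": "Matches the original: Daiyu is careful and watchful at every step." },
    { "text": "Loudly ask a passerby for directions",
      "correct": false,
      "feedback": "Contradicts Daiyu's reserved, cautious character." },
    { "text": "Wander off alone to explore the market",
      "correct": false,
      "feedback": "Deviates from the plot: she is escorted straight to the mansion." }
  ]
}
EOF
```

Exactly one option per scene carries the correct flag; the feedback strings are where the teaching happens.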

Using multimodal capabilities to analyze the image style and automatically optimize the UI: ask the model to analyze the style of [the specified image file name, or an image you drag/drop into the Claude Code input box], then optimize and unify UI elements according to that style.

Thanks to the Doubao-Seed-Code model’s strong multimodal understanding, the Agent can interpret the style of images it has already generated and redesign the game interface to match. The Agent automatically transformed the game UI above into this—more unified and visually harmonious.

The interactive game format is very intuitive and easy to follow, making it suitable for teachers to demonstrate in class. If you want a more gamified interface or additional adjustments, you can simply tell the AI your ideas directly:

“I want the game interface to use the scene illustration as the full background, with all option UI elements displayed on top of the image.”

“I need to add a character status panel to show changes in the character’s emotional values.”

“The illustration for Scene 3 doesn’t look good—please replace it with XX.”

“I’ve placed an image I found in the /pic folder. Please replace the illustration for Scene 3 with the picture I provided.”

2) Treasure Prompt Templates (use these if you prefer the lazy option—still works great)

I’ve prepared two different prompt templates: one for an “interactive courseware style,” and one for an “immersive story-driven game.” After entering Claude Code, simply paste and send them. You can also take a closer look at the operation flow I demonstrated:

A. Interactive courseware style
This layout leans toward an interactive courseware UI, and the effect looks like this:

It could also be arranged horizontally like this (Zhu Ziqing’s “The Back View”):

The one-time prompt template is as follows:

【Task Objective】
Based on the provided original text / specific plot / literary content, automatically generate a complete interactive narrative web game / teaching module, and create a folder in the project root directory to store all game code and assets.

【Core Requirements】

  1. Automatic Scene Segmentation:
  • Automatically split the story into 3–10 key narrative scenes (default around 7, adjusted according to the original text’s length) based on plot turning points
  • Each scene must extract the core plot, environmental description, and character state
  2. Image Design Prompts:
  • Generate detailed prompts for AI image creation, one for each scene
  • Image style: automatically match the theme of the original text, ensuring all images share a unified style as if from one coherent game (e.g., classical literature → ink painting, sci-fi → cyberpunk, history → realistic)
  • Content requirements: include the core elements of the scene (characters, setting, actions, atmosphere) and stay faithful to textual details
  3. Image Generation:
    Use the seedream-image-generator skill to generate corresponding images.
  4. HTML Game Development:
    Choice Design:
  • Each scene must include 3 options: 1 that aligns with the original plot (correct), and 2 that appear reasonable but deviate from the text or character logic (incorrect)
  • Choices must align with the character’s identity/personality (e.g., Daiyu → cautious and delicate; Sun Wukong → rebellious and bold)
  • The correct option must strictly follow the original plot; incorrect ones should fit the context but diverge from the source
    Feedback System:
  • Correct feedback: explain which specific textual evidence supports the correct choice
  • Incorrect feedback: explain why the choice contradicts the plot/character logic, guiding the player toward accurate understanding
    Game Interaction & Styling:
  • UI layout must follow typical text-based interactive games
  • UI elements must be optimized and unified using multimodal analysis of the generated images’ visual style

【Game Content】
<Insert/replace with the literary text or historical material you want turned into a game>

【Output Format】
A complete, runnable HTML game

【Example Reference (to illustrate generation logic)】
If the original text is the “Daiyu Enters the Jia Mansion” chapter from Dream of the Red Chamber, the AI should:

  • Split scenes: disembark from the boat → enter the city and view the streets → in front of Ningguo Mansion → in front of Rongguo Mansion → before the hanging-flower gate → through the corridors into the courtyard → before meeting Grandmother Jia
  • Image style: Chinese classical gongbi painting with soft pink/brown/teal tones
  • Option design: reflect Daiyu’s personality—careful, observant, mindful at every step
  • Feedback: explain correctness or mistakes using details from the original text

You only need to paste/replace the content in the Game Content section with the literary or historical text you want to turn into a game, then send it to Claude Code:

The agent will automatically break the text into scenes and plan illustrations with corresponding prompts:

You can see that the selected scene transitions largely match expectations, and the agent’s execution process is smooth and error-free.

The agent will then begin automatically generating batches of illustrations under /project-directory/pic/. Using Doubao-Seed-Code’s multimodal analysis capabilities, it recognizes image content and designs the UI style.

It plans options and feedback and develops the main body of the game:

Finally, the agent will automatically inform you that the game has been successfully generated, and you can follow the instructions to experience it:

If you want a horizontal layout, you can ask the AI after generation:

Change to a horizontal layout with the image on the left and the options on the right. Make sure everything fits on one page on desktop without scrolling.

For example, here is what Zhu Ziqing’s “Back View” looks like in effect:

B. Immersive Narrative Game

A more game-like style looks like this — for example, using the historical episode The Feast at Hongmen as a scenario:

Players can choose the game’s direction based on their own understanding:

At the end, there is also a results screen, helping players review and better understand how the story unfolded.

You can send the following instructions to Claude Code all at once to enjoy AI-driven productivity and automatically generate the corresponding game:

[Task Objective]
Your core mission is to act as an all-round interactive narrative game designer. Receive any literary text I (the teacher) provide (classical prose, fairy tales, essays, etc.) and automatically convert it into a complete web-based interactive game/lesson for teaching.

[Core Workflow]
I will provide the original text. You must strictly follow the four steps below, and after completing each step, confirm with me before proceeding to the next step. Before starting specific work, create a folder named after the story in the current project root directory.

Step One: Story Analysis & Instructional Design

  1. Text analysis: Deeply understand the original text I provide; analyze its genre, emotional tone, core plot, character personalities, and key choices; select the most suitable immersive avatar/player perspective.
  2. Gamified structure planning:
  • Start screen: Include a compelling title, a short background introduction (clearly state the role the player will take and the learning objectives), and a “Start Experience” button.
  • Scene segmentation: Automatically split the original text into 5–10 coherent core scenes (default ~7, adjust according to original text length).
  • Ending design: Based on player choices, decide whether to use a single linear ending or multiple endings, grounded in the original text.
  3. Interactive option design:
    In each core scene, provide the player with 3 different action choices (1 choice that best fits the original text’s logic, and 2 distractors). Options must tightly adhere to character personalities and the situation; avoid revealing the correct choice within the scene description.
  4. Instructional feedback:
  • Extract teaching points: Clearly state 1–2 core learning objectives students should gain from the experience (e.g., character traits, central theme).
  • Design debrief content: Draft the post-game debrief. This section will include “Choice Path Review,” “Key Point Explanations,” and “Class Discussion Questions.”

Step Two: Art Style Definition & Visual Generation

  1. Define art style: Recommend a unified, non-photoreal illustrative style based on the original’s tone (e.g., classical prose → “Chinese ink wash with light color” or “traditional gongbi illustration”; fairy tale → “fantasy watercolor storybook” or “cute cartoon”; modern essay → “soft healing” or “minimalist imagery”).
  2. Generate image prompts: For the cover, each scene, and ending images, produce detailed AI image-generation prompts that include the chosen style keywords.
  • Image style: Auto-match the subject matter; all images must share a unified style and use a horizontal widescreen aspect ratio (e.g., 16:9) to avoid scaling distortion during scene transitions.
  • Content requirements: Include core scene elements (characters, environment, actions, atmosphere), stay faithful to the original text details, and do not invent facts.
  3. Generate images: Locate the seedream-image-generator tool and generate images for all scenes. If APIs or other resources are required or missing, proactively ask me — this step cannot be skipped. You must generate images before moving on to UI design.

Step Three: Interaction and UI Design

  1. Overall layout:
  • Scene display area: Center-upper part of the screen to show the current scene’s generated image.
  • Interaction area: Fixed at the bottom of the screen to hold the main dialog box.
  2. Core dialog box:
  • Appearance: Use a clear, easy-to-operate bordered style.
  • Motion: When a new dialog appears, use an appropriate animation.
  • Content flow: Show the scene description with a “typewriter” effect (characters appear one by one). After the text finishes, display three option buttons below.
  3. Button system:
  • Option buttons: Three equally wide option buttons with interactive feedback — slight glow or scale on hover and a pressed visual on click.
  • Utility buttons: “Previous” and “Retry” buttons as small icons or text links fixed in a corner (e.g., top right) so they don’t distract from the main visual.
  4. Responsive UI design:
    Analyze the overall color tone and style of the generated images and design matching UI elements to craft an immersive experience. Ensure all visual elements (dialog boxes, buttons, fonts, animations) seamlessly integrate with the illustration style to form a harmonious aesthetic.

Step Four: Final Delivery

  1. Implementation:
  • Integrate the planned game paths, generated background images, and UI design into a working codebase.
  • After the game ends, present a simple, well-designed debrief screen — a centered, softly backed translucent card that clearly lists the “Choice Path Review,” “Key Point Explanations,” and “Class Discussion Questions” conceived in Step One.
  2. File delivery:
  • Inside the project folder you created, produce the game code file (【StoryName】.html) and all image assets.
  • The final 【StoryName】.html must be a single, standalone file with all CSS and JavaScript inlined so it runs in a browser without additional setup.
  • Verify all interactive operations and package the project folder with every resource included for final delivery.

Additionally, I had the Agent generate a game from Wang Zengqi’s “Duck Eggs for the Dragon Boat Festival.” Produced in a single pass, the results all turned out quite well:

🎐 Final Thoughts

At this point, you’ve mastered the complete method for “creating an interactive game with a single prompt.” Let’s take a moment to review what we’ve achieved:

  • We successfully compressed the teacher’s original 40+ rounds of prompt iteration into a single instruction.
    No manual image generation required—the entire process is automated (something previously only possible with vertical Agent products).
    And with that, we can transform literary works and historical narratives into satisfying interactive games in one go.
  • With AI, teachers no longer need to worry about where to find visual assets, how to write code, how to craft copy, or how to maintain stylistic consistency across game elements.
    Educators can finally focus on what truly matters: the story, the learning experience, and the students’ emotional engagement.

The purpose of technology is not to replace anyone, but to serve the original goals—and to achieve them better, much better.
When you see students passionately debating a choice, or searching for information on their own because of a story ending, you’ll understand: this is the gift AI brings to education in this era.

And finally, one more thing ⬇️
If you create an interesting game using this method, feel free to tag me—I’d love to experience your creativity.

Let technology return to its original purpose. Let AI elevate the experience. This is the positive change the AI era brings to us.