What Happens When You Hand AI a Scenario

Throughout this project, scenario design was where I talked with AI the most. System architecture, character settings, conflict patterns, time structure: AI had a hand in every step.

I want to leave an honest review: what AI does well, what it cannot do, and how to use it effectively.

What AI Does Well

Organizing structure. Here its strength is overwhelming. Say "I want to design an emotion system," and it immediately proposes a systematic variable structure with clean layer separation. Information organization that would take a person days is done in minutes. The skeleton of the three-layer emotion model came out within an hour of conversation with AI.

Verifying consistency between variables. It answers logically to questions like "If trust goes up in this event, what should happen to tension?" When variables multiply, humans start missing things; AI does not.
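For illustration, the kind of cross-variable rule AI can sanity-check might look like this. This is a hypothetical sketch: the variable names (trust, tension) follow the post, but the coupling rule and every number are invented, not the project's actual code.

```python
def apply_event(state, trust_delta):
    """Raise trust after an event, then adjust tension by a coupled rule.

    Hypothetical example of the "if trust goes up, what happens to
    tension?" question the post describes asking AI to verify.
    """
    state["trust"] = min(100, max(0, state["trust"] + trust_delta))
    # Invented coupling: rising trust relaxes tension, but never below a floor.
    state["tension"] = max(10, state["tension"] - trust_delta // 2)
    return state

print(apply_event({"trust": 50, "tension": 40}, trust_delta=10))
# {'trust': 60, 'tension': 35}
```

Once rules like this multiply across a dozen variables, checking that every event respects every coupling is exactly the bookkeeping humans start to miss.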

Presenting patterns. For requests like "Classify the types of conflict" or "Organize the patterns of ending structures," it rapidly lists a broad range of patterns. You do not have to start from scratch. Pick two or three from the 10 patterns AI provides and dig deep from there.

Code generation. It quickly produces Ren'Py event functions, emotion-change logic, and conditional branching code. Extremely useful as a draft. When translating the three-layer emotion model into code in Part 9, the AI built the basic structure in 30 minutes, including the interaction logic among the disposition, momentary-emotion, and relationship layers. Of course, fine-tuning the numbers had to be done by hand, but the speed at which it established the skeleton was overwhelming.
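As an illustration only, the layer separation described above might be sketched like this. The class name, fields, weights, and decay rate are my own guesses, not the actual Part 9 code.

```python
class EmotionState:
    """Hypothetical sketch of a three-layer emotion model.

    The layer names follow the post; everything else is invented
    for illustration.
    """

    def __init__(self):
        self.disposition = {"warmth": 0.6}   # slow baseline traits, rarely change
        self.momentary = {"joy": 0.0}        # spikes on events, decays each turn
        self.relationship = {"trust": 0.3}   # accumulates slowly across events

    def on_event(self, joy_delta, trust_delta):
        # The disposition layer scales how strongly a moment registers.
        self.momentary["joy"] += joy_delta * self.disposition["warmth"]
        # The relationship layer absorbs only a fraction of each moment.
        self.relationship["trust"] += trust_delta * 0.1

    def decay(self):
        # Momentary emotion fades; disposition and relationship persist.
        self.momentary["joy"] *= 0.8
```

In Ren'Py, a structure like this would live in a `python:` block, with the multipliers hand-tuned afterward, as the post describes.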

What AI Cannot Do

Judging emotional weight. "Which of these two choices hurts more?" AI analyzes logically, but it cannot judge by feel. Determining which carries more weight in a given context — "Speaking honestly" versus "Remaining silent" — is a judgment that comes from experience.

Avoiding cliches. Ask AI to "write me a scenario," and there is a high probability of familiar patterns emerging. A rooftop confession, a coincidental reunion, sharing an umbrella on a rainy day. Individual elements are not bad, but the combination is predictable. AI tends to suggest "what appears most frequently" before "what is uncommon."

Nuanced dialogue. Dialogue written by AI is grammatically correct and makes sense. But lines at the level of "this single sentence pierces your heart" do not come easily. The power of dialogue comes from negative space and nuance, and AI has a tendency to fill in negative space.

When I actually asked AI for the final line of a breakup scene, it produced: "I truly liked you. But I guess our timing was never right." Grammatically perfect and semantically accurate. But in real life, people do not speak like this. Words break off, they dodge the point, they say something irrelevant and regret it later. A simple "Yeah. Take care" can hurt more than "I truly liked you." AI still cannot capture this difference.

Explaining "why this is sad." It can explain why something is a structurally sad pattern. But it cannot explain "why my heart aches at this specific scene." It handles the general theory of emotion well, but it is weak on the specificity of emotion. For example, ask AI "Why are relationships with mismatched timing sad?" and it will accurately explain "The sense of loss created by the gap between expectation and reality." But ask "Why does it hurt when I think about the scene where I turned around without saying 'Let's walk together' in middle school?" and it repeats similar generalities. The unique pain of that particular scene — the weight of feet turning away, the sensation of air swallowed instead of words — is not AI's territory.

Effective Methods

Here are the methods I discovered through this project for working on scenarios with AI.

Do not request a finished product. Request ingredients. "Write me a scenario" is far worse than "Give me 10 possible conflict patterns for this situation." AI is stronger at providing ingredients than finished works. Pick two or three from the 10 and combine them yourself.

Carve with questions. "Create this character" is less effective than "When does this character lie?" or "When is this character most irrational?" Questions are the chisel that carves the character.

Append "why?" to the AI's answers. When AI suggests "this would be good," you must append "Why?" When you make it explain the reason, the AI's answer deepens, and simultaneously that reasoning helps your design.

When cliches emerge, ask for the opposite. If AI says "A rooftop confession would be good," ask "If it happens somewhere other than the rooftop, how does the feeling change?" When you push AI to explore other options, unexpected answers emerge. When I actually asked this, the result was "A situation where they accidentally run into each other while cleaning out an empty classroom." Far more everyday than a rooftop confession, and therefore more realistic. This "asking for the opposite" technique was also used in the conflict design of Part 14. "What methods are there besides villain-driven conflict?" That question drew out the core idea of collisions between good intentions.

AI does the draft, humans do the feel. You need to catch the parts in AI's draft that are logically correct but emotionally wrong. This division of roles is the most efficient approach.

This entire series is, in fact, a demonstration of this principle. The skeleton of the three-layer emotion model in Part 9 was proposed by AI, and the context system structure in Part 11 was also drafted by AI. But judging "why 'Are you okay?' sounds different on a rainy night" came from my experience. The collision-of-good-intentions conflict in Part 14 is the same. AI lists the patterns, and the human distinguishes "this one would truly hurt" from "this one is structurally correct but emotionally empty."

What I Learned from Conversations with AI

In this project, AI was closer to a conversation partner than a collaborator. Its role was to organize my thoughts, point out missing parts, and broaden possibilities. It does not create the scenario itself, but it aids the process of creating a scenario.

What was particularly striking was the experience of my thoughts becoming clearer through conversation with AI. Things that blur together when I think alone become concrete when I try to explain them to AI. To say "This character is this kind of person" to AI, I first have to know precisely myself.

This is similar to rubber duck debugging. The phenomenon where the answer reveals itself as you explain the problem. The advantage AI has over a rubber duck is that it actually returns meaningful questions.

To give one example, the core concept of "dissonance between action and inner self" in Part 13's character design emerged because I complained to the AI "I don't know why this character is compelling." The AI asked "Are this character's actions and emotions aligned?" and that question set the direction. AI did not give the answer — the question moved my thinking. This experience reveals the essence of AI collaboration. AI is a catalyst for thought. Not the reaction itself, but the role that triggers the reaction.

In the next part, we address the problem of "cheesiness." Building narrative through realistic relationships rather than fantasy settings, through "the accumulation of choices" rather than "special events."


Next: Stripping Away Cheesiness — Reality-Based Scenarios
