Retrospective — From Tarot to Saju, the Evolution of AI Collaboration
Two projects are done. 20 installments for Tarot Master, 20 for Four Pillars of Destiny (Saju). Across 40 total dev diary entries, my approach to AI collaboration kept evolving. It started with "wait, this actually works?" and ended with "how did I ever do this without it?" In this final installment, I look back at the journey.
Tarot — Discovering Speed
What I discovered in the Tarot project was speed. How fast you can ship code when collaborating with AI. Manually entering data for 78 tarot cards would have taken days — AI generated it in hours. Reading logic, UI components, state management — features that traditionally take 2-3 weeks were running in days.
AI collaboration in the Tarot project was a "turbo booster." A tool that made things I already knew how to do dramatically faster. It felt like an extension of my existing development workflow, just with the speed dial cranked up.
The key discovery at this stage was that AI is overwhelmingly efficient at repetitive tasks: mass data generation like the 78 card entries, batches of components that share a pattern, library setup, boilerplate code.
Saju — Discovering Depth
What I discovered in the Saju project was depth. With AI, you can reach deep into domains that would be daunting to approach alone.
Four Pillars study was a domain of a completely different order from Tarot: thousands of years of history, diverse schools of thought, complex systems of relationships. Tackling it solo, the domain learning alone would have taken months. With AI, I could map the domain's terrain in ten minutes, then learn each element's deeper rules as I implemented it.
AI collaboration in the Saju project went beyond "turbo booster" to "exploration partner." A companion for navigating unfamiliar territory together. Not just speed — the achievable depth itself changed.
Same tool, different lessons — because the projects differed in nature. Tarot was AI accelerating what I already knew (web development). Saju was AI guiding me through what I didn't know (traditional East Asian metaphysics). AI collaboration's true value surfaces more powerfully in the latter scenario.
5 Things AI Did Well
Across both projects, here are the five areas where AI particularly excelled.
First, domain structuring. Rapidly mapping the vast Four Pillars domain, organizing relationships between elements. "Heavenly Stems (Cheongan) work like this, Five Elements connect this way, Ten Gods (Sipsin) emerge from these relationships." AI's efficiency in this domain mapping work was overwhelming.
Second, mass table generation. The 60 Sexagenary Cycle table, Ten Gods relationship tables, the Naeum (納音) Five Elements table, the 12 Life Stages table, Tojeong Bigyeol's 144 hexagram interpretations — AI's speed at generating large volumes of structured data was beyond human capability. Post-generation verification is always necessary, but the sheer draft generation speed moved the entire project timeline forward.
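The Sexagenary Cycle table, at least, also admits a purely mechanical sanity check: the classical cycle pairs stem i mod 10 with branch i mod 12, and lcm(10, 12) = 60 guarantees all 60 pairs are distinct. A minimal sketch, using standard Korean romanizations rather than the project's actual data format:

```typescript
// 10 Heavenly Stems (Cheongan) and 12 Earthly Branches (Jiji), romanized.
const STEMS = ["Gap", "Eul", "Byeong", "Jeong", "Mu", "Gi", "Gyeong", "Sin", "Im", "Gye"];
const BRANCHES = ["Ja", "Chuk", "In", "Myo", "Jin", "Sa", "O", "Mi", "Sin", "Yu", "Sul", "Hae"];

// The classical rule: entry i pairs stem (i mod 10) with branch (i mod 12).
const sexagenaryCycle: string[] = Array.from({ length: 60 }, (_, i) =>
  STEMS[i % 10] + BRANCHES[i % 12]
);

// Cheap invariant checks of the kind worth running on any AI-generated table:
// exactly 60 entries, all distinct, starting from GapJa (甲子).
console.assert(sexagenaryCycle.length === 60);
console.assert(new Set(sexagenaryCycle).size === 60);
console.assert(sexagenaryCycle[0] === "GapJa");
```

Even when the table itself comes from AI, invariants like these are trivially cheap insurance against the subtle errors mentioned below.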
Third, library comparison and technology selection. When choosing korean-lunar-calendar, AI quickly compared multiple lunar conversion libraries' pros and cons. For PWA setup, PDF generation (jspdf+html2canvas), state management (Zustand), and other technology stack decisions, AI's role in summarizing trade-offs for each option was consistently useful.
Fourth, prompt design. We collaboratively designed the prompts that turn Saju analysis results into natural language. AI's structural proposals for meta-level prompt design — "given this Saju data, what structure should the interpretation text follow?" — were adopted multiple times.
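As an illustration of what such meta-level prompt structure can look like, here is a hedged sketch; the field names and section order are invented for this example, not taken from the project's actual prompts:

```typescript
// Hypothetical structured summary of a Saju analysis result.
interface SajuSummary {
  dayMaster: string;        // e.g. "Gye (Yin Water)"
  dominantElement: string;  // e.g. "Wood"
  tenGods: string[];        // e.g. ["Jeonggwan", "Pyeonjae"]
}

// Turn structured data into a prompt that fixes the interpretation's outline,
// so the generated text follows a predictable structure.
function buildInterpretationPrompt(data: SajuSummary): string {
  return [
    `Day Master: ${data.dayMaster}`,
    `Dominant element: ${data.dominantElement}`,
    `Ten Gods present: ${data.tenGods.join(", ")}`,
    "",
    "Write an interpretation with this structure:",
    "1. Overall temperament (from the Day Master)",
    "2. Elemental balance and what it implies",
    "3. Relationships and career (from the Ten Gods)",
    "Keep the tone warm and avoid deterministic claims.",
  ].join("\n");
}
```

The design choice worth noting is that the prompt pins down the output's skeleton while leaving the wording to the model, which is exactly the kind of structural proposal discussed above.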
Fifth, code translation. React component structure transformation, TypeScript type definitions, refactoring existing code — AI was accurate and fast at "converting something that exists into a different form." It showed particular strength in translation-type tasks: design document to code, code to test.
4 Things Humans Had to Do
Conversely, there were areas where AI couldn't be trusted and human judgment was essential.
First, school-of-thought selection. Whether to apply the Late Night Hour (Yajasi) rule, whether to use True Solar Time. These aren't technical decisions — they're domain expert judgments. AI explains both sides when asked, but the final call on where to set the default is a human decision. Our choice to "let the user decide" was itself a human judgment.
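To make "let the user decide" concrete, a hypothetical sketch of how both rules might be surfaced as user-facing settings; all names and initial values here are illustrative, not the project's actual code:

```typescript
// Hypothetical settings shape: both school-of-thought rules are explicit,
// user-visible options rather than hard-coded assumptions.
interface SchoolSettings {
  applyYajasi: boolean;      // apply the Late Night Hour (Yajasi) rule to late-night births?
  useTrueSolarTime: boolean; // correct the birth hour by the birthplace's longitude?
}

// Placeholder initial values, not a recommendation; the human judgment here
// is precisely that no default should be silently imposed on the user.
const initialSettings: SchoolSettings = {
  applyYajasi: false,
  useTrueSolarTime: false,
};

// Merge whatever the user has chosen over the initial values.
function withUserChoice(saved: Partial<SchoolSettings>): SchoolSettings {
  return { ...initialSettings, ...saved };
}
```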
Second, final data verification. AI-generated tables for the 60 Sexagenary Cycle, Ten Gods relationships, etc. were mostly accurate but not 100%. Subtle errors crept in, and catching them required domain-knowledgeable humans reviewing carefully. AI occasionally mixed rules from different schools of thought, generating incorrect hybrid data.
Third, UX micro-adjustments. Where to place the school settings badges, how to order Five Elements quiz questions, what information to show first on the results screen. These UX judgments require understanding user emotions and behavior — a sensibility where humans still outperform AI.
Fourth, ethical decisions. Whether to include a "no scientific basis" disclaimer on MBTI-Day Master mapping, how much negativity to allow in daily fortune messages like "be careful today," how to handle negative interpretations for living public figures in celebrity Saju analyses. AI can propose answers, but final ethical decisions belong to humans.
6 AI Collaboration Methodologies — Synthesized from Two Projects
Here are the six AI collaboration methodologies distilled from 40 dev diary entries.
Number 1: Map the domain first. At project start, ask AI for the domain's complete structure. Element inventory, relationship map, complexity assessment. This becomes the compass for every subsequent decision.
Number 2: Write design documents before writing code. Especially in complex domains, design documents serve as persistent storage for AI context. The clearer the document, the higher AI's implementation quality.
Number 3: Ask AI "why." "Write this code" is less valuable than "What approaches exist for this problem? What are the trade-offs of each?" AI's value lies more in presenting options and analyzing trade-offs than in raw code generation.
Number 4: Always verify AI's output. Especially domain data. AI-generated tables, rules, and text are "excellent drafts," not "final products." Publishing without verification lets subtle errors erode user trust.
Number 5: Invest in modularization. When collaborating with AI, a modularized codebase creates the efficiency of "you only need to understand this module's context." Instead of making AI comprehend the entire codebase, load just the relevant module's design document.
Number 6: Treat AI as a collaborator, not a tool. Not "write this" but "how should we approach this problem?" Provide AI with sufficient context, ask for opinions, critically examine proposals. This is the core of AI collaboration.
"AI Is Not a Coding Tool — It's a Project Collaborator"
There's one message I wanted to convey throughout this series: the shift from viewing AI as a "coding tool" to viewing it as a "project collaborator."
As a tool, AI's value is limited to "writing code faster." As a collaborator, you can engage AI across nearly every project phase — domain research, design discussions, code writing, code review, test design, documentation.
This perspective shift becomes more important as project complexity increases. In a simple project like Tarot, using AI as a tool sufficed. In a complex project like Saju, failing to treat AI as a collaborator means utilizing less than half its potential.
Concretely, treating AI as a collaborator means four things. Providing sufficient context: loading design documents, explaining the project's background. Asking for opinions: "Is there an alternative to this approach?" Evaluating critically: not accepting every AI suggestion, but asking "Is this really the best option?" And recognizing where AI falls short: ethical judgment, UX intuition, and final decision-making remain human responsibilities.
The Biggest Thing AI Changed in 28 Years of Development
I've been developing since 1998. Twenty-eight years. Across that long span, I've witnessed countless technological shifts. The rise of the web, the mobile explosion, the cloud's emergence, microservices architecture. But what AI collaboration has changed is qualitatively different from any previous technological shift.
Previous shifts changed "how you build." From server-side rendering to client-side, from monolith to microservices. The method of building changed.
AI collaboration changed "what you can build." Four Pillars of Destiny? Even a 28-year veteran developer would have hesitated to tackle this solo. The domain is too vast and complex. With AI, I could take the leap. And I actually built it.
This is the biggest thing AI changed: expanding an individual developer's reach. Being able to attempt projects that seemed impossible alone, complete them, and learn from them. Not a change in technology — a change in what's possible.
Closing — Wrapping Up the Series
Twenty installments for Tarot Master, twenty for Saju. Throughout 40 dev diary entries, the thought I returned to most was "Is this actually helping the people who read it?"
Writing a dev diary is partly about recording, but also about sharing. The trial and error, the patterns discovered, the lessons learned while attempting this new approach called AI-collaborative development — I hope these have been at least somewhat useful to others considering similar challenges.
One thing these two projects solidified into conviction: AI collaboration isn't a temporary trend — it's a new paradigm in development. We're still in the early stages, and both tools and methodologies continue to evolve. But the direction is clear. AI doesn't replace the developer's role — it expands the range of what a developer can reach.
I don't know yet what my next project will be. But one thing is certain: it will be with AI, and I'll learn something new along the way.
To everyone who read all 40 installments to the end — my sincere thanks.