The Magic of Gungtong Bogam — Prompt Engineering and Reference Data
"Just Write It Well" Doesn't Work
Once the rule engine was complete, structured analysis data started flowing out as JSON. Five Elements ratios, Ten Gods (Sipsin) relationships, the Favorable Element (Yongsin), combinations and clashes. Accurate data. Now we needed to hand it to AI and transform it into "human-readable interpretation." At first, I thought it would be simple: pass the data and say "interpret this well."
The result was disappointing. "Wood is strong. Your personality is upright and straightforward." Not technically wrong, but generic enough to apply to almost anyone. It was miles away from the depth you'd expect from a professional Saju app.
From Analysis Data to Prompt
The first improvement was structuring the prompt. I organized the information passed to AI systematically.
The core data included in the prompt: all four pillars with their Heavenly Stems (Cheongan) and Earthly Branches (Jiji), the identity of the Day Master, the Ten Gods relationship at each position, hidden Five Elements and Ten Gods within the Hidden Stems (Jijanggan), Five Elements ratios in percentages, the determined Favorable Element and its reasoning, major combination and clash relationships, the 12 Life Stages arrangement, and gender.
The principle was singular: never ask AI to calculate. If the Five Elements ratio is Wood 30%, Fire 20%, Earth 15%, Metal 25%, Water 10%, that's passed as a pre-computed result. AI is only asked to explain what this ratio means for this person's life.
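The "never ask AI to calculate" principle can be sketched in code. This is a minimal illustration, not the app's actual schema — the field names and prompt wording are assumptions:

```typescript
// Minimal sketch of the principle "never ask AI to calculate": the rule
// engine emits finished numbers, and the prompt only asks for
// interpretation. Names here are illustrative, not the app's real schema.

interface ElementRatios {
  Wood: number;
  Fire: number;
  Earth: number;
  Metal: number;
  Water: number;
}

function buildUserPrompt(ratios: ElementRatios, favorableElement: string): string {
  // The ratios arrive pre-computed; the model is never asked to derive them.
  return [
    `Five Elements ratio (pre-computed): ${JSON.stringify(ratios)}`,
    `Favorable Element (pre-computed): ${favorableElement}`,
    `Explain what this balance means for this person's life.`,
    `Do not recalculate or adjust any of the numbers above.`,
  ].join("\n");
}

const prompt = buildUserPrompt(
  { Wood: 30, Fire: 20, Earth: 15, Metal: 25, Water: 10 },
  "Metal",
);
```

The model's job shrinks to pure explanation, which is exactly the division of labor the hybrid architecture is built on.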
With structured data in hand, interpretation accuracy improved. Given concrete data like "2 Indirect Wealth, 1 Direct Wealth, Favorable Element is Metal," AI started producing contextual interpretations such as "You have entrepreneurial instincts, but maintaining a stable income alongside ventures would be wise." But the depth was still lacking.
System Prompt: A 40-Year Veteran Expert
The second improvement was the system prompt — clearly defining AI's role.
"You are a master of Saju with 40 years of experience. You are deeply versed in traditional Four Pillars theory yet can explain concepts in language that modern people easily understand."
This role definition alone made a visible difference in tone and expertise. "Wood is strong" became "Your Day Master, Gap-Wood, is like a tree rooting itself in spring." AI naturally began using proper terminology while adding explanatory notes in parentheses.
Beyond the role, the system prompt included critical constraints.
First: "Interpret based only on the provided analysis data." This prevents the AI from inventing information not present in the data (hallucination). If the rule engine calculated 2 Indirect Wealth stars, the AI must not say "With 3 Indirect Wealth stars, you have overwhelming financial energy."
Second: "Avoid fatalistic statements; frame everything as tendencies and possibilities." This was an ethical decision. "You will never have financial luck in your life" is a dangerous sentence. Some people make major life decisions based on Saju readings. "Your Indirect Wealth is on the lighter side, so stable asset management may serve you better than speculative investments" was the tone we wanted.
Third: "Do not infer or create content not present in the analysis data." For example, if the data contains no Major Fate Cycle (Daewoon) information, the AI must not say "A major life change is coming in your 40s." This constraint is a critical safeguard for the hybrid architecture's reliability.
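Putting the role and the three constraints together, the system prompt might be assembled like this. The wording is paraphrased from the descriptions above, not the production text:

```typescript
// One way to assemble the system prompt described above: role definition
// first, then the three hard constraints. The exact wording is a
// paraphrase, not the app's production prompt.
const SYSTEM_PROMPT = [
  "You are a master of Saju with 40 years of experience.",
  "You are deeply versed in traditional Four Pillars theory yet can",
  "explain concepts in language that modern people easily understand.",
  "",
  "Constraints:",
  "1. Interpret based only on the provided analysis data.",
  "2. Avoid fatalistic statements; frame everything as tendencies and possibilities.",
  "3. Do not infer or create content not present in the analysis data.",
].join("\n");
```

Keeping the constraints inside the system prompt, rather than repeating them per request, means every category call inherits the same guardrails.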
What Gungtong Bogam Changed
At this point, interpretations were "decent." But compared to professional Saju apps, something was still missing — the depth to pinpoint what makes a particular chart unique.
The turning point came when we added Gungtong Bogam data as a reference in the prompt.
Gungtong Bogam is one of the classical texts of Four Pillars study. It provides detailed descriptions for all 120 combinations of 10 Day Masters and 12 Month Branches — specifying which elements are needed for each Day Master born in each season, and what characteristics they tend to exhibit. It's the core reference for the Seasonal Regulation method.
Before adding this data, interpretations read like this: "Wood is strong, so your personality is upright and active. You get along well with others and have leadership qualities." Textbook material that could apply to anyone.
After adding Gungtong Bogam data, interpretations transformed: "A Gap-Wood Day Master born in winter is like a tree standing on frozen ground. The roots run deep, but energy is suppressed — the warmth of Fire is urgently needed. Passionate, warm-hearted people or vibrant environments will breathe life into you."
Can you feel the difference? The same "Gap-Wood" yields completely different interpretations depending on whether the person was born in summer or winter. This is the core value of Gungtong Bogam, and by referencing this data, AI could generate interpretations that reflect the critical context of "season meets Day Master."
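Mechanically, a classical reference like this is just a keyed lookup: 10 Day Masters × 12 Month Branches gives 120 entries, and the matching entry is attached to the prompt. The sketch below is illustrative — the entry text and key format are assumptions, and the real dataset covers all 120 combinations:

```typescript
// Sketch of injecting a classical-text reference into the prompt.
// The entry shown is illustrative; the full table holds all 120
// Day Master x Month Branch combinations.
type RefTable = Record<string, string>;

const gungtongBogam: RefTable = {
  // key: `${dayMaster}|${monthBranch}` -- Ja is a winter branch
  "Gap-Wood|Ja":
    "A tree on frozen ground: roots run deep, energy is suppressed; the warmth of Fire is urgently needed.",
  // ...119 more entries in the full dataset
};

function withReference(dayMaster: string, monthBranch: string, basePrompt: string): string {
  const ref = gungtongBogam[`${dayMaster}|${monthBranch}`];
  // Only attach a reference when one exists for this combination.
  return ref ? `${basePrompt}\n\nReference (Gungtong Bogam): ${ref}` : basePrompt;
}
```

Because the lookup is deterministic, the model always sees the one passage relevant to this chart rather than the whole classical text, which also keeps token counts down.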
"Interpret this well" as an instruction pales in comparison to "Interpret this using this reference data." Through this process, I viscerally understood that the essence of prompt engineering isn't the sophistication of instructions — it's the quality of reference data.
Seven Categories, Separate Prompts
With interpretation quality solved, the next challenge was structure.
Initially, I requested a comprehensive interpretation in a single prompt. The result was too long. Nobody reads a 2,000-character interpretation in one sitting. And some users want to know about personality but couldn't care less about health.
We defined seven categories: Innate Personality, Career Aptitude, Financial Fortune, Health, Interpersonal Relationships, Major Fate Cycle Timing, and Comprehensive Interpretation. Each category is called via an independent prompt.
Users see the Comprehensive Interpretation first. If they want to dig deeper into a specific area, they tap a tab, and that category's detailed interpretation is generated on-demand. Generating all categories at once would be expensive and slow. There's no reason to pre-generate categories the user may never look at.
This on-demand approach improved UX and cut costs at the same time. Users get the information they want quickly, and the service avoids unnecessary API calls.
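The seven-category split boils down to each category carrying its own prompt template instead of one giant combined prompt. A minimal sketch, with paraphrased template text:

```typescript
// Sketch of the seven-category split: each category gets its own focused
// instruction, prepended to the same analysis data. Template wording is
// paraphrased, not the production prompts.
const CATEGORY_PROMPTS: Record<string, string> = {
  comprehensive: "Give an overall reading that ties the whole chart together.",
  personality: "Focus only on innate personality traits.",
  career: "Focus only on career aptitude.",
  wealth: "Focus only on financial fortune.",
  health: "Focus only on health tendencies.",
  relationships: "Focus only on interpersonal relationships.",
  "fate-cycle": "Focus only on Major Fate Cycle timing.",
};

function promptFor(category: string, analysisJson: string): string {
  const instruction = CATEGORY_PROMPTS[category];
  if (instruction === undefined) throw new Error(`unknown category: ${category}`);
  return `${instruction}\n\nAnalysis data:\n${analysisJson}`;
}
```

A tab tap resolves to one `promptFor` call, so only the category the user actually opens ever reaches the API.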
Cost Optimization: Model Routing and Caching
API costs are a real concern in production. Especially for a service like ours where prompts are long — analysis data plus Gungtong Bogam reference plus system prompt — token costs accumulate quickly.
We applied three optimization strategies.
First, model routing. For the Comprehensive Interpretation, which demands depth, we use a high-performance model (Sonnet-tier). For individual category deep-dives, we use a lightweight model (Haiku-tier). In practice, categories like "Personality Analysis" or "Financial Fortune" produced satisfactory quality with lightweight models when sufficient reference data was provided. The Comprehensive Interpretation clearly benefited from the higher-tier model due to the cross-analysis required.
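The routing rule itself is a one-liner: only the comprehensive reading needs the higher tier. The tier names below are placeholders, not real model IDs:

```typescript
// Sketch of the model-routing rule described above: the cross-cutting
// comprehensive reading goes to the high-performance model, single-category
// deep dives to the lightweight one. Tier names are placeholders.
type ModelTier = "high" | "light";

function pickModelTier(category: string): ModelTier {
  // Only the comprehensive reading requires cross-analysis depth.
  return category === "comprehensive" ? "high" : "light";
}
```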
Second, local caching. We cache interpretation results for the same birth data (date, time, gender) in the browser's local storage. If a user looks up the same chart again, we serve the cached result without an API call. Since many users repeatedly check their own chart, this strategy had substantial impact.
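The cache layer can be sketched as follows. The key encodes the birth data so identical charts hit the cache; storage is injected so the same logic works against `window.localStorage` in the browser and against a plain object in tests. Names are illustrative:

```typescript
// Sketch of the local cache described above. In the browser the injected
// store would be window.localStorage; any object with the same two
// methods works. Key format and names are illustrative.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function cacheKey(date: string, time: string, gender: string, category: string): string {
  return `saju:${date}:${time}:${gender}:${category}`;
}

async function cachedInterpretation(
  store: KVStore,
  key: string,
  fetchFresh: () => Promise<string>,
): Promise<string> {
  const hit = store.getItem(key);
  if (hit !== null) return hit; // cache hit: no API call
  const fresh = await fetchFresh();
  store.setItem(key, fresh); // persist for the next lookup
  return fresh;
}
```

Since birth data never changes for a given person, this cache needs no expiry logic — an unusually comfortable property for a caching layer.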
Third, leveraging Groq API via Cloudflare Workers. Groq runs LLaMA-based models at extremely fast inference speeds and offers a free tier. Routing through Groq instead of calling Claude API directly allowed us to dramatically reduce costs while achieving snappy streaming response times.
Combining all three strategies, we were able to bring the cost per comprehensive Saju interpretation down to a level where running a free service didn't create financial pressure.
Streaming: Perceived Speed
Just as important as cost is latency. AI interpretation takes several seconds at minimum. Staring at a loading spinner for five seconds and watching the interpretation appear sentence by sentence are completely different experiences.
We used Groq API's streaming capability to display interpretation text token-by-token in real time. Technically, this uses Server-Sent Events (SSE). Cloudflare Workers receive the streaming response from Groq and relay it directly to the client.
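On the client side, the core of consuming such a stream is parsing the SSE events into text tokens. Groq's chat completions are OpenAI-compatible, so each event line looks like `data: {"choices":[{"delta":{"content":"…"}}]}` with a final `data: [DONE]`. A minimal sketch of that extraction step (the Worker itself can simply pipe the upstream body through unchanged):

```typescript
// Sketch of extracting text tokens from one buffered chunk of an
// OpenAI-compatible SSE stream (the format Groq's chat completions use).
function extractTokens(sseChunk: string): string[] {
  const tokens: string[] = [];
  for (const line of sseChunk.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice(6).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    try {
      const delta = JSON.parse(payload)?.choices?.[0]?.delta?.content;
      if (typeof delta === "string") tokens.push(delta);
    } catch {
      // Ignore JSON split across network chunks; a real client buffers it.
    }
  }
  return tokens;
}
```

Each extracted token is appended to the on-screen text as it arrives, which is what produces the sentence-by-sentence effect.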
From the user's perspective, text starts appearing almost immediately after pressing the button. The Time to First Token (TTFT) is very short, so users can begin reading before the full interpretation is complete.
This streaming approach fundamentally reframed the concept of "waiting." Users aren't waiting — they're watching the interpretation unfold, as if a human expert were reading their chart in real time. A technical implementation that transformed the emotional quality of the user experience.
What This Process Taught Me
The biggest lesson about prompt engineering was this: the most effective way to improve AI output quality isn't instructing it to "write better" — it's providing better reference data.
Gungtong Bogam data proved it. Same system prompt, same analysis data, same model — but the difference between having Gungtong Bogam as a reference and not having it was dramatic. No amount of polishing the instruction text could match the quality improvement that one reference dataset brought.
This principle extends beyond the Saju project to AI usage in general. When asking for translation, "Translate naturally" is less effective than "Translate using this domain glossary." When asking for code, "Write clean code" is less effective than "Write code following this design document."
Coming Up Next
The analysis engine and AI interpretation layer are complete. Now it's time to show it all to the user. In the next installment, we'll cover how the Five Elements color system became the app's visual language, the visual design of the Chart Table, and the balancing act between Eastern aesthetics and modern design.