Building an LLM Robot with My Son — EP 4. Choosing the Right Local LLM for Robot Control
We needed to pick a model. Connecting a local LLM to the robot means committing to a specific open-source model. If we were using a cloud API, this decision would be trivial: just call GPT-4o or Claude. But our architecture runs a local LLM server on the home LAN, so we had to test and decide ourselves.

I set three evaluation criteria.

Tool use: to send structured commands like "forward" or "stop," the model needs to reliably call JSON functions. If it sometimes returns proper JSON and sometimes writes prose explanations, parsing fails. Consistency matters more than peak performance.

Korean language: my son gives instructions in Korean, and I want to read debug output in Korean. A model that drifts into English mid-response is simply harder to use.

Vision: we don't need it now, but we'll need camera frame input later. If the model has a vision variant in the same...
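To make the tool-use criterion concrete, here is a minimal sketch of the kind of strict parser we have in mind: the model's reply must be clean JSON with a known action, or the command is rejected outright. The schema (`action`, `duration_s`) and the action names are hypothetical illustrations, not our actual protocol.

```python
import json

# Hypothetical command vocabulary for the robot.
VALID_ACTIONS = {"forward", "backward", "left", "right", "stop"}

def parse_command(raw: str):
    """Parse a model reply into a command dict, or return None if it fails."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # The model drifted into prose or wrapped the JSON in text:
        # this is exactly the inconsistency that breaks the control loop.
        return None
    action = data.get("action")
    if action not in VALID_ACTIONS:
        return None
    return {"action": action, "duration_s": float(data.get("duration_s", 0.5))}

# A consistent model passes:
cmd = parse_command('{"action": "forward", "duration_s": 1.0}')
# A model that adds a prose preamble fails, even though the JSON is inside:
bad = parse_command('Sure! Here is the command: {"action": "forward"}')
```

A parser this strict is the point: a model that only passes it 90% of the time scores worse for us than a weaker model that passes it every time.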