- ๐งญ Summary
- ๐ Timeline
- ๐ Before vs After (Terminal Evidence)
- โ ๏ธ Error Snapshot
- ๐ง Fix Implementation
- โ Verification
- ๐ Lessons Learned
- ๐ Next Steps
Context: Restoring the kor2Unity Korean learning stack so the model returns culturally accurate responses instead of generic GAN essays. All work performed inside the
minigpt4
conda environment on the kor2fix repo.
๐งญ Summary
- Confirmed Ollama (Mistral) was unreachable while the self-hosted FastAPI fallback still answered requests.
- Replaced the backend with the โImmediate Korean Knowledgeโ variant generated by
fix_korean_immediate.py
and synced the TUI client. - Rebuilt and restarted the Docker backend, then validated conversational outputs through curls, the TUI, and automated demo tests.
๐ Timeline
Time (KST) | Step | Notes |
---|---|---|
10:02 |
kt startup |
Auto-activated minigpt4 , banner showed Ollama connection failures and fallback to legacy API. |
10:04 | Conversation probe | Prompted ์๋
; transformer still produced off-topic GAN essay. |
10:10 | Script review | Grepped kor2unity_tui.py to verify fallback order and payload shape. |
10:15 | Hotfix generation | Ran python fix_korean_immediate.py ; copied patched backend and TUI into place. |
10:18 | Docker restart |
docker-compose restart backend then cold restart to ensure clean boot. |
10:23 | Regression test | Curl with context: korean_mode still produced nonsenseโtriggered full rebuild. |
10:28 | Rebuild & deploy |
docker-compose build backend && docker-compose up -d backend . |
10:33 | Validation | Curl responses now delivered structured pronunciation guides; TUI timeout smoke test and demo_korean_success.py all green. |
Docker registry after the fix: rebuilt kor2unity-backend:dev
(52 minutes old) alongside Ollama 0.1.48, Rasa services, and supporting containers.
nigpt4 ๎ฐ โฆ/uc/en/kor2fix ๎ฐ docker logs kor2fix-backend-1 --tail 10
INFO: 172.82.0.1:57400 - "GET /health HTTP/1.1" 200 OK
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [1]
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8201 (Press CTRL+C to quit)
INFO: 172.82.0.1:37058 - "POST /api/llm/chat HTTP/1.1" 200 OK
(e) minigpt4 ๎ฐ โฆ/uc/en/kor2fix ๎ฐ cd /mnt/d/repos/aiegoo/uconGPT/eng2Fix/kor2fix && head -5 backend/api/main.py
"""
Immediate Korean Knowledge Backend - Instant accurate Korean learning responses
"""
from fastapi import FastAPI, HTTPException
๐ Before vs After (Terminal Evidence)
Before โ Failing sessions
๐ Auto-activating conda environment: minigpt4
๐ฎ Korean TUI...
============================================================
๐ฐ๐ท kor2Unity - Korean Language Learning Assistant
๐ง Self-hosted Llama 2 7B-HF + MiniGPT-4
๐ฅ Environment: minigpt4 conda
============================================================
๐ Checking available AI services...
โ ๏ธ Ollama API not available: HTTPConnectionPool(host='localhost', port=11434)...
โ
Legacy FastAPI available
โ
Connected to self-hosted AI!
๐ฌ You: ์๋
๐ค Thinking... (using Ollama Mistral)
๐ญ Response (11.7s):
----------------------------------------
Dear Admin,
Today's article is about the latest developments in the field of AI...
----------------------------------------
Issue: the client still believed it was on the Ollama path, so the generic English essay came back even though only the legacy FastAPI endpoint remained.
Restarting the container and retrying the curl produced equally garbled โtranslationsโ:
(kor2fix) $ docker-compose restart backend
[+] Restarting 1/1
โ Container kor2fix-backend-1 Started
(kor2fix) $ sleep 3 && curl ... 'How do you say hello in Korean?'
In Korean, the most common way to say "hello" is "gaejungae sari-un" (๋์ฃผ์ธ๋ง)...
(kor2fix) $ sleep 5 && curl ... 'How do you say hello in Korean?'
In Korean, the word "hi" or "hai" is called "hello." Here's how to say it in Korean:
- "๋ฐ๊ฐ์ผ๋ก" (bangguro) - "Hello"
- "์๋
" (anbulgwa) - "Morning"
...
Diagnosis: a simple restart wasnโt enough; the container was still serving the stale model bundle.
Corrective steps (in order)
(kor2fix) $ python scripts/fix_korean_immediate.py
โ
TUI model display fixed!
โ
Immediate Korean knowledge backend created!
(kor2fix) $ cp backend/api/main_knowledge.py backend/api/main.py
(kor2fix) $ cp scripts/kor2unity_tui_fixed.py scripts/kor2unity_tui.py
(kor2fix) $ docker-compose restart backend
When the restart alone still produced garbled Hangul, I rebuilt the image:
(kor2fix) $ docker-compose build backend
(kor2fix) $ docker-compose up -d backend
When the backend still reported a placeholder echo (because Ollama wasnโt reachable), I patched the service hostname and repeated the build:
(kor2fix) $ python fix_ollama_url.py
โ
Fixed Ollama URL to use Docker container hostname
(kor2fix) $ docker-compose build backend && docker-compose up -d backend
[+] Building ... kor2unity-backend:dev Built
[+] Running ... kor2fix-backend-1 Started
After โ Clean Korean responses
(kor2fix) $ curl -s -X POST http://localhost:8201/api/llm/chat \
-H "Content-Type: application/json" \
-d '{"message": "How do you say hello in Korean?", "context": "korean_mode"}'
{"response":"๐ฐ๐ท Korean Greetings - Complete Guide!...","model":"Korean Knowledge Base v5.0"}
(kor2fix) $ curl -s -X POST http://localhost:8201/api/llm/chat \
-H "Content-Type: application/json" \
-d '{"message": "์๋
", "context": "korean_mode"}' | jq -r '.response'
๐ฐ๐ท Nice! You said "์๋
" (Hi/Bye!)
...
(kor2fix) $ echo '/korean' | timeout 15s kt
TUI test completed successfully
(kor2fix) $ python scripts/demo_korean_success.py
๐ KOREAN LEARNING AI - FINAL SUCCESS! ๐ฐ๐ท
... all regression tests passed ...
Result: backend now surfaces the curated Korean knowledge base, pronunciation guides, and cultural context with sub-second latency.
The demo script also reiterates the learner guide that now ships with the platform:
๐ Learning Commands:
"How do you say hello in Korean?"
"Teach me Korean thank you"
"What does ์๋
ํ์ธ์ mean?"
๐ฐ๐ท Korean Practice:
์๋
# Casual greeting
์๋
ํ์ธ์ # Formal greeting
๊ฐ์ฌํฉ๋๋ค # Thank you
์ถ์ ์๋ณด๋์ด? # Chuseok question
โจ Features:
โข Instant accurate responses
โข Pronunciation guides (romanization)
โข Cultural context explanations
โข Formal vs casual usage
โข Real Korean conversation practice
โ ๏ธ Error Snapshot
โ ๏ธ Ollama API not available: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded...
Early model behavior
Dear Admin,
Today's article is about the latest developments in the field of AI...
The fallback still advertised โOllama Mistralโ because the client switched APIs without updating api_type
, which is why the response tone never changed.
๐ง Fix Implementation
Backend swap
cd /mnt/d/repos/aiegoo/uconGPT/eng2Fix/kor2fix
python scripts/fix_korean_immediate.py
cp backend/api/main_knowledge.py backend/api/main.py
cp scripts/kor2unity_tui_fixed.py scripts/kor2unity_tui.py
Container cycle
docker-compose restart backend
sleep 5
curl -s -X POST http://localhost:8201/api/llm/chat \
-H "Content-Type: application/json" \
-d '{"message": "How do you say hello in Korean?", "context": "korean_mode"}' | jq -r '.response'
When the first restart still produced garbled Korean, I rebuilt the image to ensure the new FastAPI module shipped with the container:
docker-compose stop backend
sleep 2
docker-compose up -d backend
sleep 5
curl -s -X POST http://localhost:8201/api/llm/chat \
-H "Content-Type: application/json" \
-d '{"message": "How do you say hello in Korean?", "context": "korean_mode"}' | jq -r '.response'
Final rebuild
docker-compose build backend
docker-compose up -d backend
โ Verification
Health check responses
curl http://localhost:8201/ | jq
{
"message": "Welcome to kor2Unity Korean Learning API!",
"version": "2.0.0-python313",
"status": "running"
}
Conversational probes
curl -s -X POST http://localhost:8201/api/llm/chat \
-H "Content-Type: application/json" \
-d '{"message": "์๋
", "context": "korean_mode"}' | jq -r '.response'
๐ฐ๐ท Nice! You said โ์๋ โ โฆ
TUI smoke test
echo '/korean' | timeout 15s kt
Automated regression
python scripts/demo_korean_success.py
Test | Expectation | Result |
---|---|---|
Greeting lesson | Formal ์๋
ํ์ธ์ guide |
โ Passed |
Casual greeting | Explains ์๋
usage |
โ Passed |
Thanks lesson | Highlights ๊ฐ์ฌํฉ๋๋ค
|
โ Passed |
Chuseok context | Cultural explanation | โ Passed |
๐ Lessons Learned
- Always rebuild the Docker image when replacing backend modulesโhot copies can leave stale layers.
- Keep the TUI
api_type
in sync with the selected endpoint to avoid misleading telemetry. - Automated demos provide quick confidence that pronunciation guides and cultural context remain intact.
๐ Next Steps
- Re-enable Ollama once the local daemon is reachable to recover multimodal support.
- Add a health endpoint in the TUI banner so users see which model is active before chatting.