TL;DR
South Korea's LG AI Research and KETI will pilot an EXAONE 4.5-powered platform that automates safety report triage at a scale of 10 million cases per year.
South Korea's public safety bureaucracy runs on keyword sorting, a brittle system that chokes on typos and vague citizen descriptions. That is set to change. UPI reports that LG AI Research and the state-backed Korea Electronics Technology Institute (KETI) have partnered to deploy an artificial intelligence platform capable of processing more than 10 million safety-related civil complaints annually, automating the full pipeline from intake to final response without human routing.
The system is powered by EXAONE 4.5, LG's proprietary large language model. A pilot service is expected before the end of 2026.
The bottleneck problem
Korea's existing reporting infrastructure relies on keyword matching to sort incoming complaints. The failure mode is predictable: citizen reports arrive with misspellings, shorthand, and ambiguous descriptions, and keyword systems degrade precisely when accuracy matters most. The new platform replaces this with image analysis and semantic classification, routing critical safety issues in real time.
The 10-million-report capacity figure is not an aspirational benchmark. It reflects the actual annual volume of safety-related civil complaints South Korean citizens file, making this a full replacement for the current infrastructure rather than a supplement.
KETI chief Shin Hee-dong framed the collaboration in terms of administrative efficiency: reduced operational costs alongside stronger citizen safety outcomes. LG AI Research head Lim Woo-hyung emphasized speed and quality of safety administration as directly linked to quality of life. Both framings reflect the same pressure governments worldwide face: higher service volumes without proportional budget increases.
What EXAONE 4.5 handles
EXAONE 4.5 performs three distinct functions: intake parsing to interpret poorly worded or misspelled reports, image classification to analyze attached photos and identify hazard types, and real-time routing to the appropriate agency. Together, these replace a pipeline that previously required human staff at multiple handoff points.
Practitioners sometimes call this kind of work "boring AI," meaning a narrowly scoped system solving a defined problem against a measurable target. That is not a criticism. Boring AI is often the most durable precisely because it can be evaluated against a clear criterion. The 10-million-report figure gives both the project team and external reviewers a concrete benchmark once the pilot goes live.
The broader picture
LG AI Research was founded in 2020 as the AI coordination unit for LG Group. EXAONE has been positioned as an enterprise-grade model with particular strength in Korean-language tasks, competing in a space where international labs have limited domain-specific depth for non-English contexts. A high-volume, government-facing deployment gives LG a live operational benchmark that leaderboard comparisons tracked on llm-stats.com cannot replicate.
The regulatory environment adds urgency. As the European Union's AI Act establishes documentation and audit requirements for high-risk AI systems, South Korean regulators are watching government-adjacent deployments with comparable scrutiny. A safety reporting platform that misroutes a critical complaint carries direct liability, which makes confidence thresholds and fallback protocols significant design decisions rather than technical footnotes.
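A confidence-threshold fallback of the kind described is simple to state in code. The sketch below is a minimal illustration of the pattern only; the threshold value and the escalation behavior are assumptions, not disclosed LG/KETI parameters.

```python
def route_with_fallback(agency: str, confidence: float,
                        threshold: float = 0.85) -> str:
    """Auto-route only when the classifier is confident enough;
    below the threshold, escalate to a human reviewer instead.
    The 0.85 threshold is an illustrative assumption."""
    if confidence >= threshold:
        return f"auto-routed to {agency}"
    return "queued for human review"

print(route_with_fallback("fire department", 0.97))
print(route_with_fallback("fire department", 0.41))
```

Where that threshold sits, and whether the resulting escalation rates are published, is exactly the auditability question regulators are starting to ask of high-risk systems.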
Domain-specific artificial intelligence deployments are finding faster institutional uptake than general-purpose tools across the industry. SiliconAngle reported this week that Anthropic's Claude Security, a targeted code vulnerability scanner, moved from preview to public beta after adoption by hundreds of organizations. The pattern is consistent: when AI solves a narrow, high-volume problem with clear performance criteria, procurement cycles shorten. LG and KETI are betting EXAONE can replicate that dynamic in public safety classification.
CNET has tracked how OpenAI, Anthropic, and DeepSeek are competing for institutional markets with increasingly capable models. LG's approach is differentiated: rather than competing on general capability benchmarks, EXAONE is being tested against throughput and real-world error rate in a live civil service context. LG Corp. shares gained 0.81% on the Seoul bourse the day of the announcement, consistent with market recognition of a credible contract rather than speculative upside.
If LG and KETI publish structured results from the pilot phase, this deployment could become a reference case for government AI rollouts globally. The central question for practitioners evaluating similar systems: what are the confidence thresholds before a report triggers a human fallback, and will those error rates be made public?
FAQ
What is EXAONE 4.5?
EXAONE 4.5 is LG AI Research's proprietary large language model, used here to parse, classify, and route Korean-language government safety complaints at scale.
How does the system handle image analysis?
It analyzes photos attached to citizen complaints to identify the type of safety hazard and route the report to the correct agency without human review.
When does the pilot launch?
LG AI Research has confirmed a pilot service for later in 2026 but has not disclosed a specific date.
What does this replace?
South Korea's current system uses keyword matching to sort reports, failing on misspellings and vague descriptions. The new platform applies semantic classification to handle the cases the old system misses.
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.
Connect on LinkedIn