
South Korea Launches Landmark Laws To Regulate AI

An anonymous reader quotes a report from the Korea Herald: South Korea will begin enforcing its Artificial Intelligence Act on Thursday, becoming the first country to formally establish safety requirements for high-performance, or so-called frontier, AI systems -- a move that sets the country apart in the global regulatory landscape. According to the Ministry of Science and ICT, the new law is designed primarily to foster growth in the domestic AI sector, while also introducing baseline safeguards to address potential risks posed by increasingly powerful AI technologies. Officials described the inclusion of legal safety obligations for frontier AI as a world-first legislative step.

The act lays the groundwork for a national-level AI policy framework. It establishes a central decision-making body -- the Presidential Council on National Artificial Intelligence Strategy -- and creates a legal foundation for an AI Safety Institute that will oversee safety and trust-related assessments. The law also outlines wide-ranging support measures, including research and development, data infrastructure, talent training, startup assistance, and help with overseas expansion.

To reduce the initial burden on businesses, the government plans to implement a grace period of at least one year. During this time, it will not carry out fact-finding investigations or impose administrative sanctions; instead, the focus will be on consultations and education. A dedicated AI Act support desk will help companies determine whether their systems fall within the law's scope and how to respond. Officials noted that the grace period may be extended depending on how international standards and market conditions evolve. The law applies to only three areas: obligations for high-impact AI, safety obligations for high-performance AI, and transparency requirements for generative AI.

Enforcement under the Korean law is intentionally light. It does not impose criminal penalties. Instead, it prioritizes corrective orders for noncompliance, with fines -- capped at 30 million won ($20,300) -- issued only if those orders are ignored. This, the government says, reflects a compliance-oriented approach rather than a punitive one. Transparency obligations for generative AI largely align with those in the EU, but Korea applies them more narrowly. Content that could be mistaken for real, such as deepfake images, video or audio, must clearly disclose its AI-generated origin. For other types of AI-generated content, invisible labeling via metadata is allowed. Personal or noncommercial use of generative AI is excluded from regulation.
"This is not about boasting that we are the first in the world," said Kim Kyeong-man, deputy minister of the office of artificial intelligence policy at the ICT ministry. "We're approaching this from the most basic level of global consensus."

Korea's approach differs from the EU by defining "high-performance AI" using technical thresholds like cumulative training compute, rather than regulating based on how AI is used. As a result, Korea believes no current models meet the bar for regulation, while the EU is phasing in broader, use-based AI rules over several years.


Comments:
  • "Korea's approach differs from the EU by defining "high-performance AI" using technical thresholds like cumulative training compute, rather than regulating based on how AI is used. As a result, Korea believes no current models meet the bar for regulation"

    Call me a cynic, but I'm not sure I believe that people running even-bigger-than-the-ones-that-cost-not-necessarily-single-digit-billions-per-year models will be unduly perturbed by the possibility of maybe getting a $20k fine at some point in the coming years.
    • by allo ( 1728082 )

      The EU has a defined FLOP threshold for when a model needs to be regulated.

      "GPAI models present systemic risks when the cumulative amount of compute used for its training is greater than 10^25 floating point operations (FLOPs). Providers must notify the Commission if their model meets this criterion within 2 weeks."
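      For a sense of scale, here is a rough sketch of how a model's cumulative training compute can be estimated against the EU's 10^25 FLOP threshold, using the common ~6·N·D approximation for dense-transformer training (N = parameter count, D = training tokens). The model sizes below are illustrative assumptions, not figures from the article or from any provider's disclosure.

      ```python
      # Rough check against the EU AI Act's 10^25 FLOP systemic-risk threshold.
      # Training compute for dense transformers is commonly approximated as
      # 6 * N * D, where N is parameter count and D is tokens seen in training.

      EU_THRESHOLD_FLOPS = 1e25

      def training_flops(params: float, tokens: float) -> float:
          """Approximate cumulative training compute as 6 * N * D."""
          return 6.0 * params * tokens

      # Hypothetical model configurations (assumptions for illustration only).
      examples = {
          "7B params, 2T tokens":    training_flops(7e9, 2e12),     # ~8.4e22
          "70B params, 15T tokens":  training_flops(70e9, 15e12),   # ~6.3e24
          "400B params, 15T tokens": training_flops(400e9, 15e12),  # ~3.6e25
      }

      for name, flops in examples.items():
          flag = "above" if flops > EU_THRESHOLD_FLOPS else "below"
          print(f"{name}: {flops:.1e} FLOPs -> {flag} threshold")
      ```

      Under this approximation, only very large models trained on very large corpora clear the bar, which is consistent with Korea's claim that no current domestic models meet its compute-based trigger.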

  • There doesn't seem to be anything in the article about locally run models, probably because lawmakers there realize such things cannot be regulated. However, I'm sure more idiotic countries (maybe the UK?) would try to regulate local use too: restricting Stable Diffusion models, or trying to force open source tools to submit all prompts to a government API before rendering. You might say that sounds dumb or crazy, but the UK is imprisoning people for years over Facebook posts.

    As RAM and storage prices continue to rise, it might get easier. Having a powerful local computer could become out of reach for everyone.

    • I could see the US trying to regulate locally run models. Mostly because the broligarchy wouldn't much care for people running anything AI related that doesn't give them a direct hoover line for all the data, and our government is currently operating as part of the broligarchy.

  • AI has already been weaponized worldwide, led by US/Israeli efforts. Flimsy internal laws will have minimal effects. I foresee a point where countries will physically cut off international internet access because of this weaponization. We have already seen what unchecked power is doing around the world. AI just makes it so much easier for some (of the same) countries to terrorize others.

