Governments around the world are trying to regulate artificial intelligence even as the technology evolves at high speed.
New laws and guidelines are arriving, but many are narrow, delayed, or still being tested.
The result is a patchwork system in which public officials are racing to build oversight, expertise, and enforcement capacity.
Artificial intelligence is moving into search, education, health care, finance, public services, and national security faster than most governments can write rules for it. Policymakers have become more active, but regulation still often trails the technology by months or years. That gap is now shaping how countries balance innovation, safety, competition, and public trust.
Governments are no longer ignoring AI. In many places, they are passing laws, issuing guidance, and creating new oversight bodies. But the pace of innovation has made the job unusually hard.

New AI models and tools are released in rapid cycles. Companies can update systems quickly, expand them across borders, and embed them into existing services before regulators fully understand how they work in practice. That makes traditional rulemaking feel slow: public consultations, parliamentary debates, technical standards, and court challenges all take time.
## A growing response, but not a settled one
The European Union remains the clearest example of a broad legal approach. Its AI Act entered into force in 2024 and is being phased in over several years. Some bans on prohibited uses started to apply in early 2025. Rules for general-purpose AI and wider transparency duties are being introduced in stages, with more obligations due in 2026. At the same time, European officials have had to publish extra guidance and compliance tools to help businesses and regulators interpret the law.
That pattern is visible elsewhere too. Instead of one stable global model, governments are building different systems at different speeds. Some focus on consumer protection, some on national competitiveness, and others on civil rights, online harms, or public-sector use.
In the United States, there is still no single, comprehensive federal AI law. That has pushed states to act on their own. Some measures target deepfakes, political ads, child safety, transparency, or discrimination in high-risk decisions such as hiring, housing, lending, education, and health care. Colorado adopted one of the broadest state frameworks, though its rollout has been delayed. California and New York have also moved on major AI-related rules, including transparency and frontier-model safety measures.
This state-led activity shows momentum, but it also creates a patchwork. Companies operating across the country may face different definitions, duties, and reporting rules in each jurisdiction. Federal policymakers, meanwhile, still face disagreement over how strongly Washington should regulate AI and whether national rules should override state laws.
## Governments lack in-house technical capacity
A second problem is capacity. Many governments do not have enough AI experts inside public institutions. Even when leaders agree that oversight is needed, agencies may struggle to recruit specialists, inspect systems, test claims, or enforce standards.
This matters because AI regulation is not only about passing laws. It also depends on technical audits, procurement rules, incident reporting, cybersecurity, data governance, and sector-specific enforcement. A health regulator, school authority, labor department, and election office may all confront AI in different ways.

That gap is especially clear in lower-capacity states. UNESCO’s work with national readiness assessments has shown that many countries are still developing the basic policy, infrastructure, and human expertise needed for ethical and effective AI governance. In practice, this means some governments are trying to regulate tools that they are also only beginning to understand and use themselves.
## Global coordination is still limited
AI is a cross-border technology, but governance remains fragmented. The United Nations has stepped up its work by establishing an independent international scientific panel on AI, with a first report expected in 2026. The aim is to narrow the knowledge gap between fast-moving technical change and slower policy processes.
Still, countries do not agree on what global AI governance should look like. Some favor stronger international coordination on safety and standards. Others worry that heavy rules could slow domestic innovation or hand too much power to international bodies.
This tension has become one of the defining features of AI policy. Governments want the economic gains of rapid deployment, but they also face pressure to prevent fraud, bias, unsafe automation, privacy harms, and misuse of synthetic media. Those goals often point in different directions.
## The next phase will be about enforcement
The biggest test may come next. Many governments have moved beyond broad ethical principles and into the harder phase of implementation. That means defining high-risk uses, checking compliance, handling complaints, and updating rules as systems change.
The challenge is not only that AI is advancing quickly. It is that public institutions usually move carefully by design. Laws are supposed to be debated. Regulators are supposed to gather evidence. Courts are supposed to review contested rules. In AI, that normal caution now collides with release cycles measured in weeks.
For now, governments are catching up in pieces. New laws are appearing. International forums are expanding. Public agencies are building teams and guidance. But AI innovation is still moving faster than most oversight systems, leaving governments in a constant race to make rules that are relevant before the technology changes again.
## AI Perspective
This story is less about whether governments care about AI and more about whether institutions can adapt fast enough to govern it well. The pressure will likely grow as AI spreads into more everyday decisions and public services. The countries that combine technical expertise with clear, flexible rules may be better placed to protect the public without falling behind.