## Summary
A growing pattern is emerging across schools, workplaces and software development: people can complete tasks with digital tools without fully understanding how the result was produced.
Generative AI has made this shift faster and more visible.
Recent surveys and studies point to both benefits and risks: higher productivity, but also declining trust in AI outputs and concern about weakened critical thinking.
The central question is no longer whether people can use these tools, but whether they can judge them well.
For years, technology has helped people do things they did not fully understand. Drivers follow GPS routes without reading maps. Office workers use spreadsheets without knowing the formulas behind every cell. Students search for answers before working through a problem by hand.
Generative AI has pushed this habit into a new phase. It can write, summarize, code, translate, design and explain. In many cases, a person can get a useful result with only a short prompt, which makes functional use easier to achieve than deep understanding.
The change is not only technical. It is cultural.
In many daily tasks, success is now measured by whether something works. The user may not know the method, the source of the answer, or the limits of the system. If the email is clear, the code runs, or the image looks right, the job may be treated as finished.
This has clear advantages. People can move faster. Small teams can do work that once required more staff. A student can ask for an explanation in plain language. A worker can turn a rough note into a polished memo. A programmer can test an idea before writing every line from scratch.
But the same shift can weaken the link between output and understanding. A person may know how to request a result, but not how to check whether it is sound.
## Evidence from work and education
Recent research on knowledge workers shows why this matters. A 2025 study by researchers from Microsoft and Carnegie Mellon surveyed 319 professionals and collected 936 examples of generative AI use at work. The study found that higher confidence in AI was linked with less critical thinking effort. Higher confidence in one’s own ability was linked with more critical thinking.
That finding points to a wider pattern. AI does not simply remove thinking. It changes where thinking happens. Instead of drafting from scratch, people may spend more time verifying, editing and deciding whether an answer fits the task.
Education faces a similar challenge. International education research in 2026 found that generative AI use had moved quickly into mainstream learning. More than one-third of people across OECD countries used generative AI tools in 2025, and use was especially high among students aged 16 and older.
That does not mean students are no longer learning. Many use AI for explanations, outlines, practice questions and feedback. The concern is that some may skip the harder steps that build durable knowledge, such as recalling information, solving a problem independently, or explaining the reasoning in their own words.
Public concern is also visible. A 2025 U.S. survey found that nearly three-quarters of adults considered AI literacy extremely or very important for the future. A separate survey of U.S. teens found that among those expecting AI to have a negative effect on society, the most cited reason was overreliance on the technology and a loss of critical thinking and creativity.

## A case study: software development

Software development offers one of the clearest examples of the new divide between use and understanding.
AI coding assistants can help developers produce code faster. They can suggest functions, explain errors and draft tests. But surveys show that use and trust are moving in different directions.
A 2025 developer survey with more than 49,000 respondents in 177 countries found that 84% of developers used or planned to use AI tools in their development process. At the same time, 46% said they did not trust the accuracy of AI tool output, up from 31% the year before.
This does not show rejection of AI. It shows a more practical relationship with it. Developers are using the tools, but many still want human review, especially when work involves security, ethics, deployment or complex systems.
The lesson is simple. Functional output is not the same as reliable output. Code that runs may still be insecure. A summary that reads well may leave out a key point. A chart may look correct while using weak data.
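A minimal Python sketch can make that gap concrete. The function names and the toy `users` table here are hypothetical, invented for illustration, and the unsafe version is the kind of shortcut a quick draft, human or AI-generated, can plausibly contain: both functions run and return the right rows for ordinary input, but only one survives hostile input.

```python
import sqlite3

# Hypothetical example: both functions "work" for normal input, but the
# first builds SQL by string interpolation, so it is open to injection.
def find_user_unsafe(conn, username):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the database driver escapes the input.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.executemany("INSERT INTO users (username) VALUES (?)",
                 [("alice",), ("bob",)])

print(find_user_unsafe(conn, "alice"))        # [(1, 'alice')] -- looks fine
print(find_user_safe(conn, "alice"))          # [(1, 'alice')] -- same result
print(find_user_unsafe(conn, "' OR '1'='1"))  # [(1, 'alice'), (2, 'bob')] -- leaks every row
print(find_user_safe(conn, "' OR '1'='1"))    # [] -- the literal string matches no user
```

The unsafe version passes a casual test, which is exactly the trap: a reviewer who only checks that the code runs would ship the vulnerability.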
## The new skill is judgment
The debate is often framed as humans versus machines. The evidence points to a different issue. The most important skill may be knowing when a tool is useful and when it is not enough.
This requires basic understanding, even when full expertise is not possible. A person does not need to know every detail of a large AI model to use it. But they do need to know that it can make errors, reflect bias, miss context and produce confident answers without full reliability.
Schools and workplaces are starting to treat AI literacy as part of general literacy. That means teaching people how to question outputs, compare sources, protect private information, and explain their own reasoning.
Functional use is likely to keep growing. The challenge is to make sure it does not replace the habits that allow people to learn, evaluate and take responsibility for the results they accept.
## AI Perspective
The rise of functional use is not automatically a loss. Tools can free people from routine work and make knowledge easier to reach. The risk comes when speed replaces checking, and when users stop asking why an answer is true.