
22 March 2026

How AI Is Influencing Human Decisions, From What We Watch to Who Gets Hired


Brief summary




Artificial intelligence is shaping everyday choices through recommendations, automated scoring, and decision-support tools.
Recent research finds people can over-rely on AI advice, even when it conflicts with their own judgment.
Governments are also tightening oversight, with major compliance deadlines approaching in the EU and several US jurisdictions.
The result is a growing focus on transparency, audits, and meaningful human oversight in high-stakes decisions.


AI systems now influence human decisions at two levels. In low-stakes settings, they guide attention through recommendations and rankings. In higher-stakes settings, they support or shape decisions in hiring, lending, healthcare, and public services. Recent studies and new rules in Europe and the United States show the same tension: AI can improve speed and consistency, but it can also drive overreliance and make outcomes harder to explain.

AI’s influence is often quiet. It appears as a suggested next step, a ranked list, or a confidence score. But those small nudges can add up, especially when tools are used repeatedly and when people assume the system is objective.

A growing body of recent research highlights a recurring pattern: people tend to rely heavily on AI advice, and that reliance can persist even when the advice is wrong or when the person has enough information to choose differently. Studies in human-AI decision-making have linked this to trust, workload, incentives, and the way AI outputs are presented.

At the same time, regulators are moving from broad principles to detailed obligations. New requirements focus on when AI is used, how it is tested for bias and safety, and whether a person can understand and contest the outcome.

## AI’s biggest influence is often “choice shaping”
In consumer settings, AI systems influence decisions by organizing information. Search engines, social feeds, video platforms, and online stores decide what is shown first and what is hidden behind extra clicks.

This does not remove human choice. But it can steer it. A recommendation system can narrow what people consider. A ranking can act like a signal of quality. And a tool that produces fluent explanations can sound more reliable than it actually is.

Researchers have also warned about “overreliance” in interactive settings. In experiments that include generative AI-style advice, participants can follow AI suggestions even when those suggestions conflict with available context or their own interests. Other work in 2025 linked higher dependence on AI suggestions to a tendency to “pass the buck,” where decision-makers shift responsibility to the system instead of reading explanations carefully.

## High-stakes decisions: hiring, credit, and healthcare
The strongest public scrutiny is focused on “consequential decisions.” These are choices that affect a person’s job prospects, financial access, housing options, medical treatment, or eligibility for services.

In these settings, AI is often used as decision support rather than a formal decision-maker. But the practical effect can be similar if human reviewers defer to the tool’s output. Clinical decision-making research has described two mirrored risks: overreliance (following incorrect AI advice) and underreliance (ignoring correct AI advice). Both can reduce overall performance.
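These two failure modes can be quantified from logged decisions. A minimal sketch of that arithmetic (the record format and field names here are hypothetical, not taken from any cited study): overreliance is the rate at which reviewers follow incorrect AI advice, and underreliance is the rate at which they reject correct advice.

```python
# Sketch: estimating overreliance and underreliance from logged decisions.
# Each record notes whether the AI's advice was correct and whether the
# human reviewer followed it. Field names are illustrative.

def reliance_rates(records):
    """records: list of dicts with boolean keys 'ai_correct' and 'human_followed'."""
    wrong_advice = [r for r in records if not r["ai_correct"]]
    right_advice = [r for r in records if r["ai_correct"]]
    # Overreliance: following the AI when it was wrong.
    over = (sum(r["human_followed"] for r in wrong_advice) / len(wrong_advice)
            if wrong_advice else 0.0)
    # Underreliance: ignoring the AI when it was right.
    under = (sum(not r["human_followed"] for r in right_advice) / len(right_advice)
             if right_advice else 0.0)
    return over, under

log = [
    {"ai_correct": True, "human_followed": True},
    {"ai_correct": True, "human_followed": False},
    {"ai_correct": False, "human_followed": True},
    {"ai_correct": False, "human_followed": False},
]
print(reliance_rates(log))  # (0.5, 0.5)
```

Tracking both rates matters because pushing one down (say, by discouraging deference to the tool) can push the other up.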

Hiring is a clear example. Automated employment decision tools can screen résumés, score interviews, or rank candidates. New York City’s Local Law 144, which has been in effect since 2023, requires bias audits for covered tools and notice to candidates and employees. Academic work examining the city’s audit regime has pointed to compliance and accountability challenges, including how audits are scoped and how meaningful the disclosures are in practice.
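Bias audits of this kind typically report selection rates by demographic group and an "impact ratio" comparing each group's rate to the highest-rate group. A minimal sketch of that calculation, with purely illustrative counts (this is the general arithmetic, not the text of any specific audit rule):

```python
# Sketch of the selection-rate / impact-ratio arithmetic reported in
# bias audits of automated hiring tools. Counts are illustrative only.

def impact_ratios(selected, total):
    """selected, total: dicts mapping group -> candidate counts.

    Returns each group's selection rate divided by the highest group's rate.
    """
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

selected = {"group_a": 50, "group_b": 25}
total = {"group_a": 100, "group_b": 100}
print(impact_ratios(selected, total))  # {'group_a': 1.0, 'group_b': 0.5}
```

A low impact ratio does not prove discrimination by itself, but it flags where an auditor or employer needs to look more closely.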

Financial decisions are another flashpoint. AI-assisted lending and fraud detection can reduce manual workload, but they also raise questions about discrimination and explainability. When an AI score changes the path of an application, people may not know what data mattered most or how to correct errors.

## Regulation is increasingly aimed at transparency and oversight
Europe’s AI Act sets a risk-based framework with phased deadlines. The European Commission has said that obligations for general-purpose AI models became applicable on August 2, 2025. A wider set of transparency rules is scheduled to apply in August 2026.

In July 2025, the EU also released a voluntary code of practice intended to help providers comply with parts of the AI Act, with emphasis on transparency, copyright, and safety and security.

In the United States, policy is more fragmented, combining federal guidance with state and city rules.

At the federal level, the Office of Management and Budget issued revised memoranda in April 2025 on federal agencies’ use and procurement of AI, emphasizing governance, risk management, and public trust.

At the state level, Colorado’s “Consumer Protections for Artificial Intelligence” law (SB24-205) focuses on high-risk AI systems and aims to reduce algorithmic discrimination in consequential decisions. Its implementation date was delayed, with the law set to commence on June 30, 2026.

Across these approaches, the common direction is clear: more documentation, more testing, more notice when AI is involved, and more clearly defined roles for human oversight.

## What organizations are changing now
Organizations deploying AI decision tools are increasingly building internal controls that look similar across sectors.

Common steps include inventories of AI use cases, bias and performance testing, clearer user notices, and procedures for appeal or review. Another trend is separating “automation” from “accountability.” A system may generate a recommendation, but a human owner is expected to be responsible for monitoring the tool, understanding its limits, and intervening when it fails.
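As one concrete illustration, an inventory entry of the kind described above might pair each tool with a named accountable owner and an appeal path. The record shape and field names here are hypothetical, a sketch of the pattern rather than any organization's actual schema:

```python
# Hypothetical shape of an AI use-case inventory record that separates
# "automation" (the tool) from "accountability" (a named human owner).

from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    purpose: str
    risk_level: str         # e.g. "high" for consequential decisions
    accountable_owner: str  # human responsible for monitoring the tool
    last_bias_test: str     # date of the most recent bias/performance test
    appeal_process: str     # how an affected person can contest an outcome

inventory = [
    AIUseCase(
        name="resume-screener",
        purpose="Rank incoming applications",
        risk_level="high",
        accountable_owner="hr-analytics-lead",
        last_bias_test="2026-01-15",
        appeal_process="manual review on candidate request",
    ),
]

# A simple governance check: every high-risk tool must name an owner
# and an appeal path before it is allowed into production.
for uc in inventory:
    if uc.risk_level == "high":
        assert uc.accountable_owner and uc.appeal_process
```

The point of the check at the bottom is that accountability gaps become a failed gate, not a discovery made after an incident.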

The next year is likely to bring more practical scrutiny, as large compliance deadlines approach and as AI is integrated more deeply into everyday workflows. The central question for many institutions is no longer whether AI can help make decisions faster. It is whether those decisions remain understandable, contestable, and fair to the people affected.

AI Perspective

AI changes decisions most powerfully when it changes what people notice first and what they treat as “normal.” The key risk is not only wrong answers, but unthinking acceptance of confident-looking outputs. Clear notice, careful testing, and real human oversight are becoming the practical tools for keeping human judgment in control.


The content, including articles, medical topics, and photographs, has been created exclusively using artificial intelligence (AI). While efforts are made for accuracy and relevance, we do not guarantee the completeness, timeliness, or validity of the content and assume no responsibility for any inaccuracies or omissions. Use of the content is at the user's own risk and is intended exclusively for informational purposes.
