
US AI Policy Pivots Sharply From 'Safety' To 'Security'

Apr 12, 2025 am 11:15 AM

President Donald Trump rescinded former President Joe Biden’s AI Executive Order on day one of his term (disclosure: I served as senior counselor for AI at the Department of Homeland Security during the Biden administration), and Vice President JD Vance opened up the Paris AI Action Summit, a convening that was originally launched to advance the field of AI safety, by firmly stating that he was not actually there to discuss AI safety and would instead be addressing “AI opportunity.” Vance went on to say that the U.S. would “safeguard American AI” and stop adversaries from attaining AI capabilities that “threaten all of our people.”

Without more context, these sound like meaningless buzzwords — what’s the difference between AI safety and AI security, and what does this shift mean for the consumers and businesses that continue to adopt AI?

Simply put, AI safety is primarily focused on developing AI in a way that behaves ethically and reliably, especially when it’s used in high-stakes contexts, like hiring or healthcare. To help prevent AI systems from causing harm, AI safety legislation typically includes risk assessments, testing protocols and requirements for human oversight.

AI security, by contrast, does not fixate on developing ethical and safe AI. Rather, it assumes that America’s adversaries will inevitably use AI in malicious ways and seeks to defend U.S. assets from intentional threats, like AI being exploited by rival nations to target U.S. critical infrastructure. These are not hypothetical risks — U.S. intelligence agencies continue to track growing offensive cyber operations in China, Russia and North Korea. To counter these types of deliberate attacks, organizations need a strong baseline of cybersecurity practices that also account for threats presented by AI.

Both of these fields are important and interconnected — so why does it seem like one has eclipsed the other in recent months? I would guess that prioritizing AI security is inherently more aligned with today’s foreign policy climate, in which the worldviews most in vogue are realist depictions of ruthless competition among nations for geopolitical and economic advantage. Prioritizing AI security aims to protect America from its adversaries while maintaining America’s global dominance in AI. AI safety, on the other hand, can be a lightning rod for political debates about free speech and unfair bias. The question of whether a given AI system will cause actual harm is also context dependent, as the same system deployed in different environments could produce vastly different outcomes.

In the face of so much uncertainty, combined with political disagreements about what truly constitutes harm to the public, legislators have struggled to justify passing safety legislation that could hamper America’s competitive edge. News of DeepSeek, a Chinese AI company, achieving performance competitive with U.S. AI models at substantially lower cost only reinforced this reluctance, stoking widespread fear about the steadily narrowing gap between U.S. and Chinese AI capabilities.

What happens now, when the specter of federal safety legislation no longer looms on the horizon? Public comments from OpenAI, Anthropic and others on the Trump administration’s forthcoming “AI Action Plan” provide an interesting picture of how AI priorities have shifted. For one, “safety” hardly appears in the submissions from industry, and where safety issues are mentioned, they are reframed as national security risks that could disadvantage the U.S. in its race to out-compete China. In general, these submissions lay out a series of innovation-friendly policies, from balanced copyright rules for AI training to export controls on semiconductors and other valuable AI components (e.g. model weights).

Beyond trying to meet the spirit of the Trump administration’s initial messaging on AI, these submissions also seem to reveal what companies believe the role of the U.S. government should be when it comes to AI: funding infrastructure critical to further AI development, protecting American IP, and regulating AI only to the extent that it threatens our national security. To me, this is less of a strategy shift on the part of AI companies than it is a communications shift. If anything, these comments from industry seem more mission-aligned than their previous calls for strong and comprehensive data legislation.

Even then, not everyone in the industry supports a no-holds-barred approach to U.S. AI dominance. In their paper, “Superintelligence Strategy,” three prominent AI voices, Eric Schmidt, Dan Hendrycks and Alexandr Wang, advise caution when it comes to pursuing a Manhattan Project-style push for developing superintelligent AI. The authors instead propose “Mutual Assured AI Malfunction,” or MAIM, a defensive strategy reminiscent of Cold War-era deterrence that would forcefully counter any state-led efforts to achieve an AI monopoly.

If the United States were to pursue this strategy, it would need to disable threatening AI projects, restrict access to advanced AI chips and open-weight models, and strengthen domestic chip manufacturing. Doing so, according to the authors, would enable the U.S. and other countries to peacefully advance AI innovation while lowering the overall risk of rogue actors using AI to create widespread damage.

It will be interesting to see whether these proposals gain traction in the coming months as the Trump administration forms a more detailed position on AI. We should expect to see more such proposals — specifically, those that persistently focus on the geopolitical risks and opportunities of AI, only suggesting legislation to the extent that it helps prevent large-scale catastrophes, such as the creation of biological weapons or foreign attacks on critical U.S. assets.

Unfortunately, safety issues don’t disappear when you stop paying attention to them or rename a safety institute. While strengthening our security posture may help to boost our competitive edge and counter foreign attacks, it’s the safety interventions that help prevent harm to individuals or society at scale.

The reality is that AI safety and security work hand-in-hand — AI safety interventions don’t work if the systems themselves can be hacked; by the same token, securing AI systems against external threats becomes meaningless if those systems are inherently unsafe and prone to causing harm. Cambridge Analytica offers a useful illustration of this relationship; the incident revealed that Facebook’s inadequate safety protocols around data access served to exacerbate security vulnerabilities that were then exploited for political manipulation. Today’s AI systems face similarly interconnected challenges. When safety guardrails are dismantled, security risks inevitably follow.

For now, AI safety is in the hands of state legislatures and corporate trust and safety teams. The companies building AI know — perhaps better than anyone else — what the stakes are. A single breach of trust, whether it’s data theft or an accident, can be destructive to their brand. I predict that they will therefore continue to invest in sensible AI safety practices, but discreetly and without fanfare. Emerging initiatives like ROOST, which enables companies to collaboratively build open safety tools, may be a good preview of what’s to come: a quietly burgeoning AI safety movement, supported by the experts, labs and institutions that have pioneered this field over the past decade.

Hopefully, that will be enough.
