国产av日韩一区二区三区精品,成人性爱视频在线观看,国产,欧美,日韩,一区,www.成色av久久成人,2222eeee成人天堂

Table of Contents
AI Shadow Breeding Ground
Go-Around Guardrails
What are opaque software dependencies?
The Road To Platformization
From Out Of The Shadows
Home Technology peripherals AI Shining A Light On Shadow AI

Shining A Light On Shadow AI

Jul 08, 2025 am 11:17 AM

Shining A Light On Shadow AI

But mostly, unsurprisingly, shadow AI (like most forms of shadow technology and bring your own device activity) is viewed as a negative, an infringement and a risk.

AI Shadow Breeding Ground

The issue today is that AI remains essentially in its infancy, still embryonic and only beginning to experience its first wave of implementation. Many users' exposure to AI is limited to amusing image creations generated by ChatGPT and other tools (think about human plastic toy blister packs last week, cats on diving boards this week, and something even more bizarre next week for sure), meaning widespread enterprise adoption of AI tools has yet to become standard practice. Although that time seems imminent, the current state of AI development means some usage is slipping under the radar.

The unauthorized use of AI tools by developers is becoming a serious problem as application development continues to accelerate rapidly. Scott McKinnon, CSO for UK&I at Palo Alto Networks, says this means building modern, cloud-native applications isn't just about writing code anymore—it's realizing we’re now operating in a “continuous beta mode” due to the pressure to deploy new enterprise software services quickly.

“The resulting effect is that developers face intense pressure to deliver fast and reduce time to market. Given this, it’s understandable why many developers are turning to AI tools to boost efficiency and meet these demanding expectations,” said McKinnon. “Our research shows enterprise generative AI traffic surged by over 890% in 2024—and with organizations increasingly using these apps—some can be considered high-risk. Meanwhile, data loss prevention incidents linked to generative AI have more than doubled, clearly signaling governance failures.

Go-Around Guardrails

When all these realities are combined, it’s easy to see why software developers might be tempted to bypass the organization’s AI guardrail policies and controls. In practice, they may plug into open-source large language models outside approved platforms, generate code using AI without oversight, or skip data governance policies to speed up deployment. The result could expose intellectual property through compliance breaches that also compromise system security.

“It all comes down to one thing: if developers are to balance speed with security, they must adopt a new operational model. This model should embed clear, enforceable AI governance and oversight directly into the continuous delivery pipeline rather than tacking them on afterward,” explained McKinnon. “When developers use AI tools outside sanctioned channels, one major concern is supply chain integrity. Pulling in untested or unvetted AI components introduces opaque dependencies often hiding vulnerabilities.”

What are opaque software dependencies?

It’s a term that sounds ominous enough already, and opaque software dependencies truly are problematic. Software dependencies are essential parts of smaller

data services—software libraries handling database connections, frameworks managing user interfaces, or modules forming part of external third-party applications. Useful software dependencies are transparent and easy to inspect; opaque ones, while functional, are murky when it comes to revealing their origins and internal components. Technically speaking, opaque software application dependencies mean developers cannot "assign" them (and establish a connection to them) via a public application programming interface.

According to McKinnon, another significant threat is prompt injection attacks, where malicious actors manipulate AI inputs to force unintended and dangerous behaviors. These types of vulnerabilities are hard to detect and can erode trust and safety in AI-powered applications. When unchecked, such practices create new attack surfaces and increase the overall risk of cyber incidents. Organizations need to get ahead of this by securing AI development environments, rigorously vetting tools, and ensuring developers have the support needed to work effectively.

The Road To Platformization

"To effectively tackle the risks from unsanctioned AI use, organizations need to move beyond fragmented tools and processes toward a unified platform approach. This means consolidating AI governance, system controls, and developer workflows into a single integrated system offering real-time visibility. Without this, organizations struggle to keep pace with the speed and scale of modern development environments, leaving gaps adversaries can exploit,” said McKinnon.

His vision of platformization (and broader platform engineering) enables organizations to apply consistent policies across all AI usage, identify risky behavior early, and provide developers with secure, approved AI capabilities within existing workflows.

“This reduces friction for software developers, enabling faster work without compromising security or compliance. Instead of juggling multiple disconnected tools, organizations gain a centralized view of AI activity, making monitoring, auditing, and responding to threats easier. Ultimately, a platform approach is about balance—delivering safeguards and controls to minimize risk while preserving the agility and innovation developers require,” concluded Palo Alto Networks’ McKinnon.

At its worst, shadow AI can lead to so-called model poisoning (also known as data poisoning), a scenario described by application and API reliability company Cloudflare as when an attacker manipulates the outputs of an AI or machine learning model by altering its training data. The goal of an AI model poisoner is to make the AI produce biased or harmful results once it starts processing inference calculations that ultimately provide us with AI-driven insights.

According to Mitchell Johnson, chief product officer at software supply chain management specialist Sonatype, “Shadow AI includes any AI application or tool used outside an organization’s IT or governance frameworks. Think shadow IT but with far greater potential (and risk). It’s the digital equivalent of prospectors staking claims during a gold rush, cutting through red tape to strike gold in efficiency and innovation. Examples include employees using ChatGPT to draft proposals, leveraging new AI-powered code assistants, building machine learning models on personal accounts, or automating repetitive tasks with unofficial scripts.”

Johnson notes that it appears increasingly now due to the rise in remote working, where teams operate outside traditional oversight and firms lack comprehensive AI governance, creating policy gaps that allow room for improvisation.

From Out Of The Shadows

There is clearly a network system health concern tied to shadow AI; after all, it’s the first issue raised by tech industry commentators warning about any form of shadow IT. There are wider implications too, including certain IT teams gaining what might seem like an unfair advantage, or some developer teams introducing rogue AI that causes bias and hallucinations.

To borrow a meteorological truism, shadows are usually good news only during a heatwave… which typically means there’s a lot of humidity present, with storms potentially on the way.

The above is the detailed content of Shining A Light On Shadow AI. For more information, please follow other related articles on the PHP Chinese website!

Statement of this Website
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn

Hot AI Tools

Undress AI Tool

Undress AI Tool

Undress images for free

Undresser.AI Undress

Undresser.AI Undress

AI-powered app for creating realistic nude photos

AI Clothes Remover

AI Clothes Remover

Online AI tool for removing clothes from photos.

Clothoff.io

Clothoff.io

AI clothes remover

Video Face Swap

Video Face Swap

Swap faces in any video effortlessly with our completely free AI face swap tool!

Hot Tools

Notepad++7.3.1

Notepad++7.3.1

Easy-to-use and free code editor

SublimeText3 Chinese version

SublimeText3 Chinese version

Chinese version, very easy to use

Zend Studio 13.0.1

Zend Studio 13.0.1

Powerful PHP integrated development environment

Dreamweaver CS6

Dreamweaver CS6

Visual web development tools

SublimeText3 Mac version

SublimeText3 Mac version

God-level code editing software (SublimeText3)

Top 7 NotebookLM Alternatives Top 7 NotebookLM Alternatives Jun 17, 2025 pm 04:32 PM

Google’s NotebookLM is a smart AI note-taking tool powered by Gemini 2.5, which excels at summarizing documents. However, it still has limitations in tool use, like source caps, cloud dependence, and the recent “Discover” feature

Hollywood Sues AI Firm For Copying Characters With No License Hollywood Sues AI Firm For Copying Characters With No License Jun 14, 2025 am 11:16 AM

But what’s at stake here isn’t just retroactive damages or royalty reimbursements. According to Yelena Ambartsumian, an AI governance and IP lawyer and founder of Ambart Law PLLC, the real concern is forward-looking.“I think Disney and Universal’s ma

What Does AI Fluency Look Like In Your Company? What Does AI Fluency Look Like In Your Company? Jun 14, 2025 am 11:24 AM

Using AI is not the same as using it well. Many founders have discovered this through experience. What begins as a time-saving experiment often ends up creating more work. Teams end up spending hours revising AI-generated content or verifying outputs

From Adoption To Advantage: 10 Trends Shaping Enterprise LLMs In 2025 From Adoption To Advantage: 10 Trends Shaping Enterprise LLMs In 2025 Jun 20, 2025 am 11:13 AM

Here are ten compelling trends reshaping the enterprise AI landscape.Rising Financial Commitment to LLMsOrganizations are significantly increasing their investments in LLMs, with 72% expecting their spending to rise this year. Currently, nearly 40% a

The Prototype: Space Company Voyager's Stock Soars On IPO The Prototype: Space Company Voyager's Stock Soars On IPO Jun 14, 2025 am 11:14 AM

Space company Voyager Technologies raised close to $383 million during its IPO on Wednesday, with shares offered at $31. The firm provides a range of space-related services to both government and commercial clients, including activities aboard the In

Boston Dynamics And Unitree Are Innovating Four-Legged Robots Rapidly Boston Dynamics And Unitree Are Innovating Four-Legged Robots Rapidly Jun 14, 2025 am 11:21 AM

I have, of course, been closely following Boston Dynamics, which is located nearby. However, on the global stage, another robotics company is rising as a formidable presence. Their four-legged robots are already being deployed in the real world, and

What Is 'Physical AI'? Inside The Push To Make AI Understand The Real World What Is 'Physical AI'? Inside The Push To Make AI Understand The Real World Jun 14, 2025 am 11:23 AM

Add to this reality the fact that AI largely remains a black box and engineers still struggle to explain why models behave unpredictably or how to fix them, and you might start to grasp the major challenge facing the industry today.But that’s where a

Nvidia Wants To Build A Planet-Scale AI Factory With DGX Cloud Lepton Nvidia Wants To Build A Planet-Scale AI Factory With DGX Cloud Lepton Jun 14, 2025 am 11:17 AM

Nvidia has rebranded Lepton AI as DGX Cloud Lepton and reintroduced it in June 2025. As stated by Nvidia, the service offers a unified AI platform and compute marketplace that links developers to tens of thousands of GPUs from a global network of clo

See all articles