AsiaTokenFund
NVIDIA OpenShell Brings Security Sandbox to Autonomous AI Agents

Terrill Dicki
Mar 23, 2026 15:45

NVIDIA’s new open-source OpenShell runtime creates isolated sandboxes for AI agents, partnering with Cisco, CrowdStrike, and Microsoft on enterprise security.





NVIDIA has released OpenShell, an open-source runtime designed to lock down autonomous AI agents through kernel-level isolation and policy enforcement. The Apache 2.0-licensed tool addresses a growing problem: AI agents that can read files, execute code, and modify systems also represent significant security liabilities.

The core innovation here is separating what an agent wants to do from what it’s allowed to do. OpenShell sits between the AI and the operating system, using Linux Landlock LSM to create sandboxed environments where agents operate under strict constraints they cannot override—even if compromised.
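The article doesn't show OpenShell's actual API, but the mediation idea can be sketched in miniature: every file-system request from the agent passes through a policy check against an allow-list the agent cannot modify. (In the real system, Landlock enforces this in the kernel; the `AgentSandbox` class and its method names below are purely illustrative.)

```python
from pathlib import Path

class AgentSandbox:
    """Toy model of the policy layer: the agent's file reads are
    checked against allowed roots fixed at sandbox creation. In
    OpenShell the equivalent constraint is kernel-enforced via
    Landlock, so a compromised agent cannot bypass it."""

    def __init__(self, allowed_read_roots):
        # Resolve roots up front so the agent can't escape the
        # allow-list with symlinks or ".." path tricks.
        self._roots = [Path(p).resolve() for p in allowed_read_roots]

    def read_file(self, path):
        target = Path(path).resolve()
        if not any(target.is_relative_to(root) for root in self._roots):
            raise PermissionError(f"policy denies read of {target}")
        return target.read_text()

# Agent may only read inside its workspace, nowhere else.
sandbox = AgentSandbox(allowed_read_roots=["/tmp/agent-workspace"])
```

The key property mirrored here is that the constraint lives outside the agent's own code path: the agent asks, the runtime decides.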

How It Actually Works

Think of it like browser tabs for AI agents. Each agent runs in its own isolated session with controlled resources and verified permissions. Security policies are defined in YAML or JSON files at the system level, governing access down to specific binaries, network endpoints, and file paths.
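OpenShell's actual policy schema isn't documented in this article, but a YAML file governing binaries, network endpoints, and file paths might plausibly look like the following sketch (all field names here are assumptions, not the real format):

```yaml
# Hypothetical policy sketch -- illustrative only, not OpenShell's
# documented schema.
agent: code-review-bot
allow:
  binaries:
    - /usr/bin/git
    - /usr/bin/python3
  network:
    - api.internal.example.com:443
  files:
    read:  [/srv/repos]
    write: [/tmp/agent-scratch]
deny_all_else: true
```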

The runtime also intercepts model API calls, letting organizations route inference traffic to private backends without touching the agent’s code. This handles both security and cost control in one layer.
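A minimal sketch of that interception idea, under assumed names (the routing table and `reroute` helper below are hypothetical, not OpenShell's API): the runtime sees the agent's outbound inference request and swaps the public endpoint for a private backend before it leaves the sandbox, leaving the agent's code untouched.

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical routing table: public inference hosts -> private backend.
ROUTES = {
    "api.openai.com": "llm-gateway.internal.example.com",
    "api.anthropic.com": "llm-gateway.internal.example.com",
}

def reroute(url: str) -> str:
    """Rewrite an agent's inference request to a private backend,
    keeping path and query intact so the agent is unaware."""
    parts = urlsplit(url)
    backend = ROUTES.get(parts.hostname)
    if backend is None:
        return url  # not an inference call; pass through unchanged
    return urlunsplit((parts.scheme, backend, parts.path,
                       parts.query, parts.fragment))
```

Because the rewrite happens in the runtime layer, the same hook can also meter or cap inference spend, which is presumably what "security and cost control in one layer" refers to.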

What makes OpenShell practical for enterprise adoption is that it's agent-agnostic: it works with Claude Code, OpenAI's Codex, and Cursor out of the box, with no SDK rewrites required.

The Partner Ecosystem

NVIDIA isn’t going solo on this. The company has lined up Cisco, CrowdStrike, Google Cloud, Microsoft Security, and TrendAI to align runtime policy management across enterprise stacks. That’s a serious coalition for what’s essentially infrastructure-level AI governance.

Alongside OpenShell, NVIDIA released NemoClaw—a reference stack for building personal AI assistants that bundles OpenShell with Nemotron models. It runs on everything from GeForce RTX laptops to DGX Station supercomputers, giving developers a template for self-evolving agents with customizable security guardrails.

Why This Matters Now

Autonomous agents represent a genuine inflection point in enterprise AI risk. These systems don’t just generate text—they execute workflows, write code, and continuously improve their own capabilities. Traditional prompt-based safety measures fall apart when agents can potentially override them.

OpenShell’s approach of enforcing constraints at the infrastructure layer rather than the application layer addresses this directly. The agent literally cannot leak credentials or access restricted files because the sandbox prevents it, regardless of what the model tries to do.

Both OpenShell and NemoClaw remain in early preview. Developers can access ready-to-use environments on NVIDIA Brev or grab the code from GitHub. For enterprises scaling autonomous AI deployments, this represents the first serious attempt at standardized security controls—though real-world testing will determine whether the sandbox holds up under adversarial conditions.
