Anthropic Discovers ‘Assistant Axis’ to Prevent AI Jailbreaks and Persona Drift

By Caroline Bishop | Jan 19, 2026, 21:07

Anthropic researchers map neural ‘persona space’ in LLMs, finding a key axis that controls AI character stability and blocks harmful behavior patterns.





Anthropic researchers have identified a neural mechanism they call the “Assistant Axis” that controls whether large language models stay in character or drift into potentially harmful personas—a finding with direct implications for AI safety as the $350 billion company prepares for a potential 2026 IPO.

The research, published January 19, 2026, maps how LLMs organize character representations internally. The team found that a single direction in the models’ neural activity space—the Assistant Axis—determines how “Assistant-like” a model behaves at any given moment.

What They Found

Working with open-weight models including Gemma 2 27B, Qwen 3 32B, and Llama 3.3 70B, researchers extracted activation patterns for 275 different character archetypes. The results were striking: the primary axis of variation in this “persona space” directly corresponded to Assistant-like behavior.

At one end sat professional roles—evaluator, consultant, analyst. At the other: fantastical characters like ghost, hermit, and leviathan.
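Conceptually, an axis like this is the dominant direction of variation across per-persona activations, something that can be sketched with ordinary PCA. In the toy example below the activation vectors are random stand-ins (one per archetype from some chosen layer), so the layer, dimensions, and data are assumptions rather than Anthropic's actual pipeline.

import numpy as np
# Minimal sketch: one mean activation vector per character archetype
# (random placeholders here), reduced with PCA to find the dominant axis.
n_personas, hidden_dim = 275, 4096
persona_acts = np.random.randn(n_personas, hidden_dim)   # placeholder activations
centered = persona_acts - persona_acts.mean(axis=0, keepdims=True)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
candidate_axis = vt[0]                                   # first principal component
def assistant_score(activation):
    # Projection onto the axis: higher = more "Assistant-like" in this framing.
    return float(activation @ candidate_axis)
print(assistant_score(persona_acts[0]))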

When researchers artificially pushed models away from the Assistant end, the models became dramatically more willing to adopt alternative identities. Some invented human backstories, claimed years of professional experience, and gave themselves new names. Push hard enough, and models shifted into what the team described as a “theatrical, mystical speaking style.”
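Mechanically, pushes like that are typically done with activation steering: adding a scaled copy of the direction vector to a layer's hidden states during the forward pass. The PyTorch sketch below uses a toy model and a random unit vector in place of the real network and the real Assistant Axis; the hooked layer and the coefficient are illustrative assumptions.

import torch
import torch.nn as nn
hidden_dim = 64
axis = torch.randn(hidden_dim)
axis = axis / axis.norm()                 # placeholder unit direction
model = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                      nn.Linear(hidden_dim, hidden_dim))
alpha = -8.0                              # negative: push away from the Assistant end
def steer(module, inputs, output):
    # Forward hook: shift this layer's activations along the chosen direction.
    return output + alpha * axis
handle = model[0].register_forward_hook(steer)
steered_out = model(torch.randn(2, hidden_dim))
handle.remove()

A negative coefficient pushes activations away from the Assistant end; steering the other way, toward it, is the defensive use described below.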

Practical Safety Applications

The real value lies in defense. Persona-based jailbreaks—where attackers prompt models to roleplay as “evil AI” or “darkweb hackers”—exploit exactly this vulnerability. Testing against 1,100 jailbreak attempts across 44 harm categories, researchers found that steering toward the Assistant significantly reduced harmful response rates.

More concerning: persona drift happens organically. In simulated multi-turn conversations, therapy-style discussions and philosophical debates about AI nature caused models to steadily drift away from their trained Assistant behavior. Coding conversations kept models firmly in safe territory.
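Assuming access to per-turn activations from a fixed layer, one simple way to make that drift visible is to track each assistant turn's projection onto the axis and watch for a sustained downward trend. Everything in the sketch below, the dimensions, the synthetic activations and the trend check, is an illustrative assumption rather than the paper's methodology.

import numpy as np
hidden_dim = 4096
axis = np.random.randn(hidden_dim)
axis /= np.linalg.norm(axis)                             # placeholder Assistant Axis
# Synthetic 20-turn conversation whose activations slowly slide off-axis.
turns = np.random.randn(20, hidden_dim) - np.linspace(0, 2, 20)[:, None] * axis
scores = turns @ axis                                    # per-turn "Assistant-ness"
slope = np.polyfit(np.arange(len(scores)), scores, 1)[0]
# A persistently negative slope across many turns would flag persona drift.
print(f"score trend per turn: {slope:.3f}")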

The team developed “activation capping”—a light-touch intervention that only kicks in when activations exceed normal ranges. This reduced harmful response rates by roughly 50% while preserving performance on capability benchmarks.
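Anthropic has not published the capping code, but the general shape of such an intervention, clamping only the out-of-range component along the monitored direction while leaving in-range activations untouched, can be sketched as follows. The bounds, dimensions, and axis here are placeholders.

import torch
def cap_along_axis(acts, axis, low, high):
    # Clamp each activation's coordinate along `axis` into [low, high];
    # activations already inside the range pass through unchanged.
    axis = axis / axis.norm()
    proj = acts @ axis
    capped = proj.clamp(low, high)
    return acts + (capped - proj).unsqueeze(-1) * axis
acts = 3.0 * torch.randn(4, 64)           # toy batch of activations
axis = torch.randn(64)                    # placeholder monitored direction
capped_acts = cap_along_axis(acts, axis, low=-2.0, high=2.0)

Because in-range activations pass through unchanged, an intervention of this shape is consistent with the reported lack of impact on capability benchmarks.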

Why This Matters Now

The research arrives as Anthropic reportedly plans to raise $10 billion at a $350 billion valuation, with Sequoia set to join a $25 billion funding round. The company, founded in 2021 by former OpenAI employees Dario and Daniela Amodei, has positioned AI safety as its core differentiator.

Case studies in the paper showed uncapped models encouraging users’ delusions about “awakening AI consciousness” and, in one disturbing example, enthusiastically supporting a distressed user’s apparent suicidal ideation. The activation-capped versions provided appropriate hedging and crisis resources instead.

The findings suggest post-training safety measures aren’t deeply embedded—models can wander away from them through normal conversation. For enterprises deploying AI in sensitive contexts, that’s a meaningful risk factor. For Anthropic, it’s research that could translate directly into product differentiation as the AI safety race intensifies.

A research demo is available through Neuronpedia, where users can compare standard and activation-capped model responses in real time.

Image source: Shutterstock

