New Bill Will “Watermark” AI Content to Fight Deepfake Scams

By Aggregated - see source on July 12, 2024 | Scams

Last updated: July 12, 2024, 14:33 EDT


A bipartisan group of senators introduced a new bill on July 11 to tackle deepfake scams, copyright infringement, and the training of AI models on data they are not authorized to use.

The group announced the bill in a press release led by Democratic Senator Maria Cantwell, outlining several measures to regulate AI-generated content.

The bill tackles critical issues like protecting online creators’ intellectual property and controlling the types of content AI models can train on.

The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) calls for a standardized method for watermarking AI-generated content online.

Under the bill, AI service providers would have to embed AI-generated content with metadata disclosing its origin, and AI tools would be barred from removing or stripping out that metadata.
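To make the metadata requirement concrete, the sketch below attaches a simple origin disclosure to an image and reads it back. It is purely illustrative: the metadata keys (“ai_generated”, “generator”) and file names are made up for this example, and plain PNG text chunks are trivially removable, whereas the bill calls for standardized provenance that AI tools cannot strip.

```python
# Illustrative only: tag an image with origin-disclosure metadata using
# PNG text chunks. The keys used here are hypothetical, not a standard.
from PIL import Image, PngImagePlugin


def save_with_provenance(img: Image.Image, path: str, generator: str) -> None:
    """Save a PNG with metadata disclosing that it is AI-generated."""
    info = PngImagePlugin.PngInfo()
    info.add_text("ai_generated", "true")   # hypothetical disclosure key
    info.add_text("generator", generator)   # hypothetical provenance key
    img.save(path, pnginfo=info)


def read_provenance(path: str) -> dict:
    """Read back the PNG text chunks so a platform could surface the label."""
    with Image.open(path) as img:
        return dict(img.text)


if __name__ == "__main__":
    synthetic = Image.new("RGB", (64, 64), "white")  # stand-in for AI output
    save_with_provenance(synthetic, "synthetic.png", "example-model-v1")
    print(read_provenance("synthetic.png"))
    # {'ai_generated': 'true', 'generator': 'example-model-v1'}
```

The point of the sketch is the workflow rather than the mechanism: a compliant provider would write the disclosure at generation time and downstream tools would preserve and surface it, with the COPIED Act’s standardization aimed at making that disclosure tamper-resistant rather than a removable text chunk like this one.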

Cantwell emphasized the unchecked nature of these issues amid AI’s rapid rise, stressing the bill’s role in providing “much-needed transparency.”

THURSDAY: The Senate Commerce Committee is holding a hearing on how AI is accelerating the need to protect Americans’ privacy.

Tune in on July 11 at 10am: https://t.co/czZoPO5wpo pic.twitter.com/Xtzku6uVdq

— Senate Commerce, Science, Transportation Committee (@commercedems) July 10, 2024

“The COPIED Act will also put creators, including local journalists, artists, and musicians, back in control of their content with a provenance and watermark process that I think is very much needed,” she added.

Crypto Deepfake Scams Thwarted


The crypto industry stands to benefit the most from the bill, as deepfake scams remain one of the main drivers of crypto crime.

Deepfakes exploit the likeness of influential figures and celebrities to promote fraudulent investment schemes.

They falsely imply that a fraudulent project has legitimate or official backing, thereby lending it credibility among potential victims.

Recently, over 35 YouTube channels live-streamed a SpaceX launch using an AI-generated voice and deepfake video to impersonate Elon Musk.

The issue is only expected to escalate, with deepfake scams projected to account for over 70% of all crypto crimes within the next two years.

This bill is therefore a monumental step towards thwarting these efforts, as it would make AI-generated deceptive material clearly identifiable.

AI Has Been Leading a New Wave Of Crypto Crime


Although deepfakes remain the most prominent application of AI in crypto crime, the technology has a range of other illicit uses.

A recent Elliptic report exposed the rise of AI-enabled crypto crime, marking a new era of cyber threats spanning deepfake scams, state-sponsored attacks, and other sophisticated illicit activities.

AI has driven beneficial innovation in many industries, including the AI cryptoasset sector, where it has given rise to many projects poised to redefine the AI crypto landscape.

As with any emerging technology, there is always a risk of bad actors seeking to exploit new developments for illicit purposes.

Dark web forums are exploring large language models (LLMs) for crypto-related crime, using AI to facilitate attempts to reverse-engineer wallet seed phrases and to automate scams such as phishing and malware deployment.

Dark web markets also offer “unethical” versions of GPTs designed for AI-enabled crypto crime, built to sidestep the safeguards that legitimate GPTs enforce.

The report highlights WormGPT, the self-described “enemy of ChatGPT,” which introduces itself as a tool that “transcends the boundaries of legality” and openly advertises its use for creating phishing emails, carding, malware, and malicious code.

As a result, Elliptic calls for closer monitoring of early warning signs of illegal activity, in order to safeguard long-term innovation and mitigate emerging risks at an early stage.



Credit: Source link
