AsiaTokenFund
NVIDIA Unveils Generative AI-Powered Visual AI Agents for Edge Deployment

By Aggregated - see source on July 17, 2024 | Blockchain
Timothy Morano
Jul 17, 2024 18:22

NVIDIA introduces Vision Language Models (VLMs) for dynamic video analysis, enhancing AI capabilities at the edge with the Jetson Orin platform.

An exciting breakthrough in AI technology—Vision Language Models (VLMs)—offers a more dynamic and flexible method for video analysis, according to the NVIDIA Technical Blog. VLMs enable users to interact with image and video input using natural language, making the technology more accessible and adaptable. These models can run on the NVIDIA Jetson Orin edge AI platform, or on discrete GPUs through NVIDIA NIM microservices.

What is a Visual AI Agent?

A visual AI agent is powered by a VLM: users can ask a broad range of questions in natural language and get insights that reflect the true intent and context of a recorded or live video. These agents can be accessed through easy-to-use REST APIs and integrated with other services and mobile apps. This new generation of visual AI agents summarizes scenes, creates a wide range of alerts, and extracts actionable insights from videos using natural language.
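As a hedged sketch of that REST-style interaction (the payload and response fields below are illustrative assumptions, not the documented Metropolis API), a client might submit a natural-language question about a stream and parse the agent's answer:

```python
import json

# Hypothetical request/response shapes for a visual AI agent's REST API.
# Field names ("stream", "query", "answer") are assumptions for illustration.

def build_query(stream_id: str, question: str) -> str:
    """Build the JSON body for a natural-language query on a video stream."""
    return json.dumps({"stream": stream_id, "query": question})

def parse_response(body: str) -> str:
    """Extract the agent's natural-language answer from a response body."""
    return json.loads(body).get("answer", "")

payload = build_query("cam-entrance", "How many people entered in the last hour?")

# A server reply might look like:
sample_reply = '{"answer": "Four people entered through the main door."}'
print(parse_response(sample_reply))
```

In a real deployment the payload would be POSTed to the agent's endpoint and the reply read from the HTTP response; the helpers above only show the message shapes.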

NVIDIA Metropolis provides visual AI agent workflows: reference solutions that accelerate the development of VLM-powered AI applications that extract insights with contextual understanding from videos, whether deployed at the edge or in the cloud.

For cloud deployment, developers can use NVIDIA NIM, a set of inference microservices that includes industry-standard APIs, domain-specific code, optimized inference engines, and an enterprise runtime, to power visual AI agents. To get started, visit the API catalog to explore and try the foundation models directly from a browser.
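NIM endpoints generally follow an OpenAI-style chat-completions schema. The sketch below assembles a request that pairs a text question with a base64-encoded frame; the URL, model name, and inline-image convention are assumptions to verify against the API catalog:

```python
import base64
import json

# Assumed endpoint; check the API catalog for the actual URL and model names.
NIM_URL = "https://integrate.api.nvidia.com/v1/chat/completions"

def build_vlm_request(question: str, image_bytes: bytes,
                      model: str = "nvidia/vila") -> dict:
    """Assemble a chat-completions request pairing a question with an image."""
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "messages": [{
            "role": "user",
            # Inline-image convention is an assumption for illustration.
            "content": f'{question} <img src="data:image/png;base64,{b64}" />',
        }],
        "max_tokens": 256,
    }

req = build_vlm_request("What is happening in this scene?", b"\x89PNG...")
print(json.dumps(req)[:72])
```

The resulting dict would be sent as the JSON body of an authenticated POST; authentication and streaming options are omitted here.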

Building Visual AI Agents for the Edge

Jetson Platform Services is a suite of prebuilt microservices that provide essential out-of-the-box functionality for building computer vision solutions on NVIDIA Jetson Orin. These include AI services with support for generative AI models such as zero-shot detection models and state-of-the-art VLMs. A VLM combines a large language model with a vision transformer, enabling complex reasoning over text and visual input together.

The VLM of choice on Jetson is VILA, given its state-of-the-art reasoning capabilities and the speed it achieves by optimizing the number of tokens per image. By combining VLMs with Jetson Platform Services, developers can create a VLM-based visual AI agent application that detects events on a live-streaming camera and sends notifications to the user through a mobile app.

Integration with Mobile App

The full end-to-end system can now integrate with a mobile app to build the VLM-powered visual AI agent. To get video input for the VLM, the Jetson Platform Services networking service and VST automatically discover and serve IP cameras connected to the network. These are made available to the VLM service and the mobile app through the VST REST APIs.
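Assuming the camera listing comes back as a JSON array (the field names below are illustrative, not VST's documented schema), a client could map discovered cameras to their stream URLs like this:

```python
import json

# Hypothetical VST-style listing response; the "id"/"name"/"rtsp" fields
# are assumptions for illustration, not the documented VST schema.
sample_listing = json.dumps([
    {"id": "cam-01", "name": "Loading dock", "rtsp": "rtsp://10.0.0.5/stream1"},
    {"id": "cam-02", "name": "Front entrance", "rtsp": "rtsp://10.0.0.6/stream1"},
])

def stream_urls(listing_body: str) -> dict:
    """Map camera names to their RTSP URLs from a listing response."""
    return {cam["name"]: cam["rtsp"] for cam in json.loads(listing_body)}

print(stream_urls(sample_listing)["Loading dock"])
```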

From the app, users can set custom alerts in natural language, such as “Is there a fire,” on their selected live stream. Once the alert rules are set, the VLM evaluates the live stream and notifies the user in real time through a WebSocket connected to the mobile app. This triggers a popup notification on the mobile device, allowing users to ask follow-up questions in chat mode.
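The evaluate-and-notify loop can be sketched with a stub standing in for the VLM. `stub_vlm` and its keyword heuristic are toy assumptions; a real deployment would query the VLM service per frame and push matches to the app over the WebSocket:

```python
from typing import Callable, List

def check_alerts(frame_description: str, rules: List[str],
                 vlm: Callable[[str, str], bool]) -> List[str]:
    """Return the subset of alert rules the VLM answers 'yes' to for a frame."""
    return [rule for rule in rules if vlm(frame_description, rule)]

def stub_vlm(frame: str, rule: str) -> bool:
    # Toy stand-in for a VLM call: fires when a keyword from the
    # natural-language rule appears in the frame description.
    return any(word in frame.lower()
               for word in rule.lower().split() if len(word) > 3)

rules = ["Is there a fire", "Is there a delivery truck"]
triggered = check_alerts("Smoke and fire near the loading dock", rules, stub_vlm)
print(triggered)  # only the fire rule matches
```

Each triggered rule would then be serialized and sent over the WebSocket to raise the popup notification described above.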

Conclusion

This development highlights the potential of VLMs combined with Jetson Platform Services to build advanced Visual AI Agents. The full source code for the VLM AI service is available on GitHub, providing a reference for developers to learn how to use VLMs and build their own microservices.

For more information, visit the NVIDIA Technical Blog.

Image source: Shutterstock

