AsiaTokenFund
Together AI Upgrades Fine-Tuning Platform With Vision and Reasoning Support

By Aggregated (see source) | March 18, 2026 | Blockchain


Joerg Hiller
Mar 18, 2026 18:27

Together AI adds tool calling, reasoning traces, and vision-language fine-tuning to its platform, with 6x throughput gains for 100B+ parameter models.

Together AI rolled out a major expansion to its fine-tuning service on March 18, adding native support for tool calling, reasoning traces, and vision-language models—capabilities that address persistent pain points for teams building production AI systems.

The update arrives as the company reportedly negotiates a funding round that would value it at $7.5 billion, more than doubling its $3.3 billion valuation from its February 2025 Series B.

What’s Actually New

The platform now handles three categories of fine-tuning that previously required fragmented workarounds:

Tool calling gets end-to-end support using OpenAI-compatible schemas. The system validates that every tool call in training data matches declared functions before training begins—a safeguard against the hallucinated parameters and schema mismatches that plague agentic workflows.
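As an illustration of that pre-training safeguard, a minimal validator might check each tool call against its declared schema before any example enters the training run. The tool name, schema, and error messages below are invented for the example, not Together AI’s actual implementation:

```python
# Illustrative sketch (not Together AI's validator): check that every tool
# call in a training example references a declared function and supplies
# only parameters that appear in its OpenAI-style JSON schema.
declared_tools = {
    "get_weather": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def validate_tool_call(name: str, arguments: dict) -> list[str]:
    """Return a list of problems; an empty list means the call passes."""
    schema = declared_tools.get(name)
    if schema is None:
        return [f"undeclared function: {name}"]
    errors = []
    props = schema.get("properties", {})
    for arg in arguments:                      # hallucinated parameters
        if arg not in props:
            errors.append(f"{name}: unknown parameter '{arg}'")
    for req in schema.get("required", []):     # schema mismatches
        if req not in arguments:
            errors.append(f"{name}: missing required parameter '{req}'")
    return errors
```

Rejecting bad examples up front, rather than discovering schema drift at inference time, is the point of validating before training begins.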

Reasoning fine-tuning allows teams to train models on domain-specific thinking traces using a dedicated reasoning_content field. This matters because reasoning formats vary wildly across model families, making consistent training difficult without standardization.
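A training record with a separated reasoning trace might look like the following JSONL sketch; the article confirms the `reasoning_content` field, but the surrounding message layout is an assumption:

```python
import json

# Hypothetical training example: the thinking trace lives in a dedicated
# reasoning_content field, kept separate from the final answer in content.
example = {
    "messages": [
        {"role": "user", "content": "Is 2027 a leap year?"},
        {
            "role": "assistant",
            "reasoning_content": "2027 is not divisible by 4, "
                                 "so it cannot be a leap year.",
            "content": "No, 2027 is not a leap year.",
        },
    ]
}
line = json.dumps(example)  # one JSONL record per training example
```

Keeping the trace in its own field is what allows one dataset format to serve model families whose native reasoning markup differs.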

Vision-language fine-tuning supports hybrid datasets mixing image-text and text-only examples. By default, the vision encoder stays frozen while language layers update, though teams can enable joint training when visual pattern recognition needs improvement.
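The default freeze policy can be sketched as a simple partition of parameter names; the prefixes here are made up, and real training frameworks express the same idea through per-parameter gradient flags:

```python
# Sketch of the freeze policy described above: vision-encoder weights stay
# fixed by default while language-model weights train; joint training flips
# a single switch. Parameter names are illustrative only.
def split_trainable(param_names, train_vision=False):
    """Partition parameter names into (frozen, trainable) lists."""
    frozen, trainable = [], []
    for name in param_names:
        if name.startswith("vision_encoder.") and not train_vision:
            frozen.append(name)
        else:
            trainable.append(name)
    return frozen, trainable
```

Freezing the encoder keeps visual representations stable and cheap to train around; enabling `train_vision` trades that stability for better visual pattern recognition.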

Infrastructure Upgrades

Beyond new capabilities, Together AI claims significant performance gains from optimizing its training stack for mixture-of-experts architectures. The company integrated SonicMoE kernels that overlap memory operations with computation, plus custom CUDA kernels for loss computation.

Results vary by model size: smaller models see roughly 2x throughput improvements, while larger architectures like Kimi-K2 hit 6x gains. The platform now handles datasets up to 100GB and models exceeding 100 billion parameters.

New models available for fine-tuning include Qwen 3.5 variants (up to 397B parameters), Kimi K2 and K2.5, and GLM-4.6 and 4.7.

Practical Additions

The update includes cost estimation before job execution and live progress tracking with dynamic completion estimates—features that sound basic but prevent the budget surprises that make experimentation risky.
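An estimate of this kind reduces to arithmetic over planned training tokens; the per-token price below is a placeholder, not Together AI’s pricing:

```python
# Back-of-envelope fine-tuning cost estimator of the kind the update
# describes. The pricing model (flat rate per million trained tokens)
# and the numbers are illustrative assumptions.
def estimate_cost(dataset_tokens, epochs, price_per_million_tokens):
    """Estimated job cost in dollars: tokens seen across all epochs
    times the per-million-token rate."""
    total_tokens = dataset_tokens * epochs
    return total_tokens / 1_000_000 * price_per_million_tokens

# e.g. a 50M-token dataset for 3 epochs at a hypothetical $2.00/M tokens
cost = estimate_cost(50_000_000, 3, 2.00)
```

Surfacing this number before the job runs is what turns a budget surprise into a go/no-go decision.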

XY.AI Labs, cited by Together AI as a customer example, reported moving from weekly to daily iteration cycles while cutting costs by 2-3x and improving accuracy from 77% to 87% using the platform’s fine-tuning and deployment APIs.

Market Context

The timing aligns with a surge in AI infrastructure spending. Startup funding in the AI sector hit $220 billion in the first two months of 2026, per recent reports, with much of that capital flowing toward training and inference infrastructure.

Together AI positions itself as an alternative to building in-house AI infrastructure, offering access to over 200 open-source models through its platform. The company’s pitch—removing infrastructure complexity so teams can focus on product development—now extends to increasingly sophisticated post-training workflows that were previously the domain of well-resourced research labs.

Image source: Shutterstock


Credit: Source link

© 2026 asiatokenfund.com - All Rights Reserved!
