Technology · Gourab Patra · 12 Mar 2026

Mar 12: embedUR systems, a Silicon Valley–based embedded software and Edge AI engineering company, announced a significant expansion of Arm® ecosystem support within ModelNova™ Fusion Studio, its desktop application for end-to-end Edge AI development. The announcement, made at Embedded World 2026, introduces Fusion Studio's deep integration with Arm's most widely adopted Edge AI technologies.
Built for scalable edge AI development
Edge AI development spans a diverse landscape of silicon platforms, software frameworks, and toolchains. Developers building on these platforms have historically faced a fragmented workflow — assembling separate model conversion utilities, platform-specific compilers, quantization scripts, and deployment tools before they can bring an AI workload to production.
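To make the fragmentation concrete, consider just one of those standalone steps: a post-training quantization script that maps float weights to int8 before a platform compiler can consume them. The sketch below illustrates the general affine-quantization technique in plain Python; it is not Fusion Studio or Arm tooling, and all function names here are hypothetical.

```python
# Minimal sketch of int8 affine (asymmetric) post-training quantization,
# the kind of standalone step developers have traditionally scripted
# alongside separate conversion and compiler tools. Pure Python for
# clarity; production toolchains operate on whole tensors.

def quantize_int8(values):
    """Map float values to int8 with a per-tensor scale and zero point."""
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)    # quantized range must cover 0.0
    scale = (hi - lo) / 255.0 or 1.0       # guard against constant input
    zero_point = round(-128 - lo / scale)  # int8 value that represents 0.0
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.5, -0.2, 0.0, 0.7, 2.1]
q, scale, zp = quantize_int8(weights)
approx = dequantize(q, scale, zp)   # each value within ~scale of the original
```

Each recovered value lands within roughly one quantization step (`scale`) of the original, and 0.0 round-trips exactly — the property NPU runtimes rely on for zero-padding. Multiplying such a script by every target platform is exactly the overhead an integrated pipeline is meant to remove.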
Fusion Studio is designed to address this directly. The platform integrates support for Arm Ethos™ NPU architectures, toolchains, and frameworks throughout the development lifecycle — from model training and optimization through firmware build and on-device deployment.
Arm-native capabilities in Fusion Studio
The latest release of Fusion Studio introduces several capabilities purpose-built for the Arm Edge AI ecosystem.
Expanding on Fusion Studio
The new Arm-native capabilities build on a platform that has been available in beta since 2025. Fusion Studio currently offers a library of over 160 pre-trained and optimized AI models spanning computer vision, audio, and text generation, with 21 industry-specific starter packs designed to accelerate proof-of-concept development. The platform supports deployment to Raspberry Pi 4 and 5 with integrated camera support and real-time inference, and runs on PC (CPU/CUDA) and Apple Silicon macOS host machines.
Recent beta releases introduced LLM-powered AI Assistance that provides contextual guidance across model selection, dataset evaluation, training configuration, platform optimization, and troubleshooting — along with training checkpoint and continuation support for interrupted workflows.
Looking ahead
The ModelNova team from embedUR systems continues to invest in expanding Fusion Studio's capabilities across the Arm ecosystem. Areas of active development include broader virtual platform support for additional Arm reference designs, GPU server offload to shorten model training times, expanded audio capture and annotation workflows, and the ability to evaluate models directly on host PC and Apple hardware before deploying to target devices. These capabilities are expected to become available through upcoming releases in 2026.
See it at Embedded World 2026
embedUR will demonstrate the full Arm-integrated Fusion Studio workflow at Embedded World 2026, taking place March 10–12 at the Nuremberg Messe in Germany. Visit the ModelNova booth at Hall 4, Booth 4-600 to see ExecuTorch model deployment on Arm Ethos-U NPUs, end-to-end development workflows on Ensemble Series hardware, and the complete in-tool MLOps pipeline. Additionally, a Fusion Studio–built model will be running at the Arm booth (Hall 4, Booth 4-504), demonstrating the production-readiness of models developed and deployed through the platform.
Executive commentary
“The Arm compute platform is where Edge AI is being built — across industries, across silicon vendors,” said John Marconi, VP of Technology and Architecture at embedUR systems. “We designed Fusion Studio to meet developers where they are, with native support for the tools and frameworks the Arm ecosystem already relies on. This release reflects that commitment.”
“To unlock the full potential of edge AI, developers need streamlined workflows and production-ready tools,” said John Thompson, Senior Director of Edge AI Software at Arm. “We are laser-focused on enabling developer-friendly solutions across the Arm ecosystem to simplify software development and deployment. By advancing more integrated tooling on the Arm compute platform, embedUR is reducing complexity, accelerating innovation, and supporting teams to move from prototype to production more efficiently.”