Transforming Windows Apps: NVIDIA and Microsoft Lead the Way with Generative AI
In a groundbreaking partnership, NVIDIA and Microsoft are revolutionizing the landscape of computing by leveraging generative AI technology. This innovative approach, fueled by large language model (LLM) applications such as ChatGPT, image generators like Stable Diffusion and Adobe Firefly, and game rendering techniques such as NVIDIA DLSS 3 Frame Generation, is ushering in a new era of productivity, content creation, and gaming.
At the recent Microsoft Build developer conference, NVIDIA and Microsoft unveiled a range of advancements for Windows 11 PCs and workstations powered by NVIDIA RTX GPUs, designed to meet the evolving demands of generative AI. More than 400 Windows apps and games already integrate AI technology, leveraging the accelerated processing of Tensor Cores, specialized processors embedded within RTX GPUs. The latest announcements include comprehensive tools for AI development on Windows PCs, frameworks for optimizing and deploying AI models, and driver enhancements for improved performance and energy efficiency. These advancements empower developers to create the next generation of Windows applications with generative AI at their core.
Historically, AI development has primarily taken place on Linux, forcing developers either to dual-boot their systems or to rely on multiple PCs in order to work on their AI projects while still accessing the rich Windows ecosystem. Microsoft has made significant strides in bridging this gap with Windows Subsystem for Linux (WSL), a capability that enables running Linux directly within the Windows OS. In collaboration with NVIDIA, Microsoft has now integrated GPU acceleration and support for the entire NVIDIA AI software stack into WSL. Consequently, developers can use Windows PCs for all their local AI development needs, including GPU-accelerated deep learning frameworks running on WSL.
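A quick way to confirm that the GPU is visible inside a WSL shell is to query the driver with nvidia-smi. Below is a minimal sketch that runs the query and parses its CSV output; the `--query-gpu` and `--format=csv` flags are standard nvidia-smi options, while the sample output line (and the GPU it names) is purely illustrative.

```python
import csv
import io
import shutil
import subprocess

def parse_gpu_query(csv_text: str) -> list[dict]:
    """Parse `nvidia-smi --query-gpu=... --format=csv` output into dicts."""
    reader = csv.reader(io.StringIO(csv_text))
    rows = [[cell.strip() for cell in row] for row in reader if row]
    header, data = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in data]

def query_gpus() -> list[dict]:
    """Run nvidia-smi (inside WSL or any Linux shell) if it is on PATH."""
    if shutil.which("nvidia-smi") is None:
        return []  # no NVIDIA driver visible in this environment
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_query(out)

# Illustrative output for a hypothetical 48GB workstation GPU:
sample = "name, memory.total [MiB]\nNVIDIA RTX 6000 Ada Generation, 49140 MiB\n"
print(parse_gpu_query(sample))
```

If the query returns an empty list inside WSL, the usual culprit is an outdated Windows-side NVIDIA driver rather than anything installed in the Linux distribution itself.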
The availability of NVIDIA RTX GPUs with up to 48GB of GPU memory in desktop workstations further empowers developers, who can now work on Windows with models that were previously exclusive to server environments. The larger memory capacity not only improves performance and quality for local fine-tuning of AI models but also lets designers customize models to suit their individual style or content. Moreover, because the same NVIDIA AI software stack runs on NVIDIA data center GPUs, developers can seamlessly move their models to Microsoft Azure for large-scale training runs.
To address the need for efficient inference performance, especially in laptops, NVIDIA is introducing a new feature called Max-Q low-power inferencing for AI workloads on RTX GPUs. This feature optimizes Tensor Core performance while minimizing GPU power consumption, leading to extended battery life and a cooler, quieter user experience. The GPU dynamically scales up for maximum AI performance when the workload demands it.
Leading software developers, including Adobe, DxO, ON1, and Topaz, are among those who have already integrated NVIDIA AI technology into their Windows applications, optimizing them for RTX Tensor Cores.
NVIDIA and Microsoft are committed to providing developers with resources to explore and experiment with top generative AI models on Windows PCs. An Olive-optimized version of the Dolly 2.0 large language model is available on Hugging Face, while a PC-optimized version of the NVIDIA NeMo large language model for conversational AI will be released soon. Developers can also access comprehensive guidance on optimizing their applications end-to-end for GPU acceleration through NVIDIA's developer site for accelerating applications with AI.
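Olive exports models to the ONNX format, which ONNX Runtime then executes through an "execution provider" (EP) such as DirectML on Windows GPUs. The sketch below shows one way an application might pick the best available EP; the provider names are real ONNX Runtime identifiers, but the preference order is an assumption for illustration, not an official recommendation.

```python
# Preference order (an assumption): DirectML (Windows GPU path), then
# CUDA, then the CPU fallback that is always present.
PREFERRED = ["DmlExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]

def pick_provider(available: list[str], preferred: list[str] = PREFERRED) -> str:
    """Return the first preferred execution provider that is available."""
    for ep in preferred:
        if ep in available:
            return ep
    raise RuntimeError("no supported execution provider found")

# In a real session `available` would come from
# onnxruntime.get_available_providers(); here it is stubbed:
print(pick_provider(["CPUExecutionProvider", "DmlExecutionProvider"]))
# → DmlExecutionProvider
```

The chosen provider would then be passed to `onnxruntime.InferenceSession(model_path, providers=[...])` when loading the exported model.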
The synergistic combination of Microsoft's Windows platform and NVIDIA's dynamic AI hardware and software stack enables developers to effortlessly develop and deploy generative AI on Windows 11, ushering in a new era of innovation and possibilities.