Categories
Misc

New Learning Pathway: Deploy AI Models with NVIDIA NIM on GKE

Get hands-on with Google Kubernetes Engine (GKE) and NVIDIA NIM when you join the new Google Cloud and NVIDIA community.

Source

Categories
Misc

SmolLM3: smol, multilingual, long-context reasoner

Categories
Misc

Three Mighty Alerts Supporting Hugging Face’s Production Infrastructure

Categories
Misc

Efficient MultiModal Data Pipeline

Categories
Misc

Asking an Encyclopedia-Sized Question: How To Make the World Smarter with Multi-Million Token Real-Time Inference

Modern AI applications increasingly rely on models that combine huge parameter counts with multi-million-token context windows. Whether it is AI agents following months of conversation, legal assistants reasoning through gigabytes of case law as big as an entire encyclopedia set, or coding copilots navigating sprawling repositories, preserving long-range context is essential for relevance and…

Helix Parallelism, introduced in this blog, enables up to a 32x increase in the number of concurrent users at a given latency, compared to the best known prior parallelism methods for real-time decoding with ultra-long context.
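For a sense of why multi-million-token decoding is hard, the short sketch below estimates the per-user KV-cache footprint with the standard 2 × layers × KV heads × head dim × bytes × tokens formula; the model dimensions are hypothetical placeholders, not figures from the article.

```python
# Back-of-envelope KV-cache sizing for long-context decoding. The model shape
# below is a hypothetical placeholder, not a figure from the article.
def kv_cache_gib(tokens, layers, kv_heads, head_dim, bytes_per_elem=2):
    # the factor of 2 accounts for storing both the key and the value per token
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens / 2**30

# Example: a 64-layer model with 8 KV heads of dimension 128, FP16 cache
for tokens in (128_000, 1_000_000, 4_000_000):
    print(f"{tokens:>9,} tokens -> {kv_cache_gib(tokens, 64, 8, 128):7.1f} GiB per user")
```

Even this modest configuration reaches hundreds of gigabytes per user around the million-token mark, which is the kind of pressure that motivates new parallelism schemes for real-time decoding.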

Source

Categories
Misc

NVIDIA cuQuantum Adds Dynamic Gradients, DMRG, and Simulation Speedup 

NVIDIA cuQuantum is an SDK of optimized libraries and tools that accelerate quantum computing emulations at both the circuit and device level by orders of magnitude. With NVIDIA Tensor Core GPUs, developers can speed up quantum computer simulations based on quantum dynamics, state vectors, and tensor network methods by orders of magnitude. In many cases, this provides researchers with simulations…
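As a rough illustration of the tensor-network style of simulation the excerpt mentions, the sketch below contracts a tiny two-qubit circuit (Hadamard plus CNOT) into its state vector with an einsum-style contraction; it assumes the `cuquantum` and `cupy` packages are installed, and the circuit is our example, not one from the release notes.

```python
# Minimal sketch: contract a small tensor network on the GPU. Uses the
# cuQuantum Python bindings (`cuquantum.contract`); newer releases may expose
# this under `cuquantum.tensornet`. Assumes cuquantum-python and cupy are installed.
import cupy as cp
from cuquantum import contract

# |00> state as two rank-1 tensors, one per qubit
q0 = cp.asarray([1, 0], dtype=cp.complex64)
q1 = cp.asarray([1, 0], dtype=cp.complex64)

# Hadamard on qubit 0, then CNOT with qubit 0 as control and qubit 1 as target
H = cp.asarray([[1, 1], [1, -1]], dtype=cp.complex64) / cp.sqrt(2)
CNOT = cp.asarray(
    [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]], dtype=cp.complex64
).reshape(2, 2, 2, 2)  # indices: out0, out1, in0, in1

# Contract the whole network into the output state vector in one call
state = contract("a,b,ca,decb->de", q0, q1, H, CNOT)
print(state.reshape(4))  # expected Bell state: [0.707, 0, 0, 0.707]
```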

Source

Categories
Misc

Turbocharging AI Factories with DPU-Accelerated Service Proxy for Kubernetes

As AI evolves to planning, research, and reasoning with agentic AI, workflows are becoming increasingly complex. To deploy agentic AI applications efficiently, AI clouds need a software-defined, hardware-accelerated application delivery controller (ADC) that enables dynamic load balancing, robust security, cloud-native multi-tenancy, and rich observability. F5 BIG-IP ADC for Kubernetes…

Source

Categories
Misc

LLM Inference Benchmarking: Performance Tuning with TensorRT-LLM

This is the third post in the large language model latency-throughput benchmarking series, which aims to instruct developers on how to benchmark LLM inference with TensorRT-LLM. See LLM Inference Benchmarking: Fundamental Concepts for background knowledge on common metrics for benchmarking and parameters. And read LLM Inference Benchmarking Guide: NVIDIA GenAI-Perf and NIM for tips on using GenAI…
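As a refresher on the metrics the series builds on, the sketch below computes time-to-first-token, inter-token latency, and output-token throughput from per-request token timestamps; the timing data and `summarize` helper are hypothetical illustrations, not part of TensorRT-LLM or GenAI-Perf.

```python
# Illustrative sketch (not TensorRT-LLM or GenAI-Perf code): compute common
# LLM inference benchmarking metrics from per-request token timestamps.
from statistics import mean

def summarize(requests):
    """Each request: dict with 'start' (send time, s) and 'token_times'
    (absolute arrival time of each generated token, s)."""
    ttft = [r["token_times"][0] - r["start"] for r in requests]        # time to first token
    itl = [
        (r["token_times"][-1] - r["token_times"][0]) / (len(r["token_times"]) - 1)
        for r in requests if len(r["token_times"]) > 1
    ]                                                                   # avg inter-token latency
    e2e = [r["token_times"][-1] - r["start"] for r in requests]        # end-to-end latency
    total_tokens = sum(len(r["token_times"]) for r in requests)
    wall = max(r["token_times"][-1] for r in requests) - min(r["start"] for r in requests)
    return {
        "mean_ttft_s": mean(ttft),
        "mean_itl_s": mean(itl),
        "mean_e2e_s": mean(e2e),
        "output_tokens_per_s": total_tokens / wall,                     # system throughput
    }

# Hypothetical timing data for two concurrent requests
requests = [
    {"start": 0.00, "token_times": [0.12, 0.15, 0.18, 0.21]},
    {"start": 0.05, "token_times": [0.20, 0.24, 0.28]},
]
print(summarize(requests))
```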

Source

Categories
Misc

RAPIDS Adds GPU Polars Streaming, a Unified GNN API, and Zero-Code ML Speedups

RAPIDS, a suite of NVIDIA CUDA-X libraries for Python data science, released version 25.06, introducing exciting new features. These include a Polars GPU streaming engine, a unified API for graph neural networks (GNNs), and acceleration for support vector machines with zero code changes required. In this blog post, we’ll explore a few of these updates. In September 2024…
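As a minimal sketch of the Polars GPU path, the example below builds a lazy query and requests GPU execution at collect time via `engine="gpu"`; it assumes the `polars` package with the RAPIDS `cudf-polars` engine installed, and shows the general GPU engine entry point rather than anything specific to the new streaming executor in 25.06.

```python
# Minimal sketch of the Polars GPU engine path: build a lazy query and request
# GPU execution at collect time. Assumes polars plus the RAPIDS cudf-polars
# package are installed; unsupported queries fall back to the CPU engine.
import polars as pl

lf = pl.LazyFrame({
    "store": ["a", "a", "b", "b", "c"],
    "sales": [10.0, 12.5, 7.0, 9.5, 3.0],
})

query = (
    lf.group_by("store")
      .agg(pl.col("sales").sum().alias("total_sales"))
      .sort("total_sales", descending=True)
)

# engine="gpu" routes execution through the cuDF-backed engine when available
print(query.collect(engine="gpu"))
```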

Source