Build reliable, portable compiler pipelines with MLIR to power real AI and systems workloads.
Modern teams need compilers that keep high-level intent while targeting CPUs, GPUs, and specialized accelerators. Toolchains change fast, and ad hoc IRs make projects brittle and hard to maintain. This book gives you a practical path to design, test, and ship MLIR-based pipelines that scale from research to production without rewrites.
You will learn how to express domain semantics as dialects, transform programs with reusable patterns, and lower progressively to stable backends. Along the way, you will use the pass manager, verification, bufferization, and GPU paths to build pipelines that are reproducible, testable, and fast.
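To give a flavor of the "high-level intent" these pipelines preserve, here is a minimal sketch of a matmul expressed on tensors with the upstream Linalg dialect, the sort of op a pipeline tiles, fuses, bufferizes, and lowers toward a backend. Exact textual syntax can vary across LLVM/MLIR versions.

```mlir
// A matmul kept at the level of domain intent: whole-tensor semantics,
// no loops, no buffers. Lowering passes progressively rewrite this
// toward loops, vectors, and a concrete target.
func.func @matmul(%a: tensor<128x256xf32>, %b: tensor<256x64xf32>,
                  %init: tensor<128x64xf32>) -> tensor<128x64xf32> {
  %0 = linalg.matmul
         ins(%a, %b : tensor<128x256xf32>, tensor<256x64xf32>)
         outs(%init : tensor<128x64xf32>) -> tensor<128x64xf32>
  return %0 : tensor<128x64xf32>
}
```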
- Define custom dialects, operations, types, attributes, traits, and interfaces
- Write rewrites with PatternRewriter and PDLL, and compile PDLL to PDL
- Compose pass pipelines with verification, timing, stats, and crash reproducers
- Use Linalg, Tensor, and Vector dialects for tiling, fusion, and vectorization
- Apply bufferization and lifetime management with one-shot and alias analysis
- Model control flow with scf and cf, and schedule transformations with Transform IR
- Target GPUs using gpu and nvgpu, then lower to nvvm, PTX, SPIR-V, and ROCm
- Work with Sparse Tensor, sparsification pipelines, and the sparse runtime
- Build quantization flows with the quant dialect and StableHLO, then lower to integer kernels
- Integrate with frameworks via Torch-MLIR, StableHLO, and TOSA
- Deliver end-to-end pipelines to IREE, Vulkan, and CPU backends, including EmitC
- Package and distribute with MLIR bytecode, link runtimes, and manage ABI concerns
- Debug and profile generated code with logs, traces, and deterministic reproducers
- Automate reproducibility with pinned LLVM versions, CI, and chapter-aligned tests

This is a code-focused guide with runnable MLIR, PDLL, TableGen, C, and Python examples that map directly to real pipelines.
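As a taste of the structured control flow covered above, here is a small sketch using the upstream scf and arith dialects: a counted loop that threads an accumulator through iterations as a loop-carried value. Syntax follows current upstream MLIR and may differ slightly between LLVM versions.

```mlir
// scf.for with iter_args: %acc is carried across iterations and
// updated by scf.yield, so the loop produces a value (%sum) instead
// of mutating memory.
func.func @count_to(%n: index) -> i32 {
  %c0 = arith.constant 0 : index
  %c1 = arith.constant 1 : index
  %zero = arith.constant 0 : i32
  %one = arith.constant 1 : i32
  %sum = scf.for %i = %c0 to %n step %c1
      iter_args(%acc = %zero) -> (i32) {
    %next = arith.addi %acc, %one : i32
    scf.yield %next : i32
  }
  return %sum : i32
}
```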
Get the toolkit you need to ship robust MLIR systems. Grab your copy today.