
Minimalist Async Evaluation Framework for R
→ Event-driven core with microsecond round-trips
→ Hub architecture — scale dynamically from laptop to HPC and cloud
→ Production-ready distributed tracing, custom serialization, and Shiny integration
install.packages("mirai")
mirai() evaluates an R expression asynchronously in a parallel process.
daemons() sets up daemons: persistent
background processes that receive and execute tasks.
library(mirai)
# Set up 5 background processes
daemons(5)
# Send work -- non-blocking, returns immediately
m <- mirai({
Sys.sleep(1)
100 + 42
})
m
#> < mirai [] >
# Map work across daemons in parallel
mp <- mirai_map(1:9, \(x) {
Sys.sleep(1)
x^2
})
mp
#> < mirai map [0/9] >
# Collect results when ready
m[]
#> [1] 142
mp[.flat]
#> [1] 1 4 9 16 25 36 49 64 81
# Shut down
daemons(0)
See the quick reference for a full introduction.
mirai() sends tasks to daemons for parallel
execution.
A compute profile is a set of connected daemons. Multiple profiles can coexist, directing tasks to different resources.
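A minimal sketch of two coexisting profiles on the local machine (the profile names below are arbitrary):
library(mirai)
# One profile for everyday tasks, one reserved for heavier work
daemons(4, .compute = "local")
daemons(2, .compute = "heavy")
# Direct a task to a specific profile via .compute
m <- mirai(sum(rnorm(1e6)), .compute = "heavy")
m[]
# Shut down each profile independently
daemons(0, .compute = "local")
daemons(0, .compute = "heavy")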
Hub architecture: the host listens at a URL and daemons connect to it, so daemons can be added or removed at any time. Launch them locally or remotely via different methods, and mix freely.
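For illustration, a sketch mixing local and remote daemons behind one host URL (the SSH host name below is hypothetical):
library(mirai)
# Host listens at a URL on the local network
daemons(url = host_url())
# Launch two daemons on this machine that dial back in
launch_local(2)
# Launch two more on a remote machine over SSH
launch_remote(2, remote = ssh_config("ssh://remote-server"))
# Inspect current connections and daemons
status()
# Reset when finished
daemons(0)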
Dynamic Architecture — scale on demand
- Host listens, daemons connect — true dynamic scaling
- Optimal load balancing via efficient FIFO scheduling
- Event-driven promises with zero-latency completion
Modern Foundation — built for speed
- NNG via nanonext — thousands of processes at scale
- Round-trip times in microseconds, not milliseconds
- IPC, TCP, and zero-config TLS certificates
Production First — reliable by design
- Explicit dependencies prevent hidden-state surprises
- Cross-language serialization (torch, Arrow, Polars)
- OpenTelemetry for distributed process observability
Deploy Everywhere — laptop to cluster
- Local machine, SSH remote, HPC cluster, or cloud platform
- Compute profiles direct tasks to best-fit resources
- Combine resources from any deployment type in a single profile
mirai has become the convergence point for asynchronous and parallel computing across the R ecosystem.
The first official alternative communications backend for R, providing a new
parallel cluster type.
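For example, a mirai cluster can be used wherever a parallel cluster object is accepted (a sketch):
library(mirai)
cl <- make_cluster(4)
parallel::parLapply(cl, 1:8, function(x) x + 1)
stop_cluster(cl)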
Powers parallel map for purrr, the tidyverse’s functional programming
toolkit.
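A sketch, assuming purrr >= 1.1.0 with its in_parallel() helper and daemons already set:
library(purrr)
mirai::daemons(4)
# The wrapped function runs on the mirai daemons rather than sequentially
map_dbl(1:8, in_parallel(\(x) x^2))
mirai::daemons(0)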
Primary async backend for Shiny, with full ExtendedTask support.
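A minimal sketch of the ExtendedTask pattern with mirai, assuming bslib for the task button; daemons() would be set before launching the app:
library(shiny)
library(bslib)
library(mirai)
ui <- page_fluid(
  input_task_button("go", "Compute"),
  textOutput("result")
)
server <- function(input, output, session) {
  task <- ExtendedTask$new(function(x) {
    # Returns a mirai, which Shiny awaits as a promise
    mirai({ Sys.sleep(2); x * 2 }, x = x)
  }) |> bind_task_button("go")
  observeEvent(input$go, task$invoke(runif(1)))
  output$result <- renderText(task$result())
}
# daemons(4); shinyApp(ui, server)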
Built-in async evaluator enabling the
@async tag in
plumber2.
Parallel processing backend for ragnar, a RAG framework for R.
Core parallel processing infrastructure provider for tidymodels.
Seamless use of torch tensors, models and optimizers across parallel
processes.
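One way this can be set up is by registering serialization hooks for torch objects, sketched here with mirai's serial_config() interface (argument order assumed):
library(mirai)
library(torch)
# Torch tensors are external pointers, so they need custom hooks
# to cross process boundaries intact
cfg <- serial_config("torch_tensor", torch_serialize, torch_load)
daemons(1, serial = cfg)
m <- mirai(x$mul(2), x = torch_tensor(1:3))
m[]
daemons(0)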
Query databases over ADBC connections natively in the Arrow data
format.
Native handling of Polars objects across parallel processes via
serialization hooks.
Powers targets pipelines via crew, a distributed worker launcher built
on mirai.
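As an illustration, a minimal _targets.R sketch routing pipeline tasks through a crew controller, which launches its workers as mirai daemons:
library(targets)
tar_option_set(
  controller = crew::crew_controller_local(workers = 2)
)
list(
  tar_target(data, runif(100)),
  tar_target(result, mean(data))
)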
Will Landau for being instrumental in shaping development of the package, from initiating the original request for persistent daemons, through to orchestrating robustness testing for the high performance computing requirements of crew and targets.
Joe Cheng for integrating the ‘promises’ method to work seamlessly within Shiny, and prototyping event-driven promises.
Luke Tierney of R Core, for discussion on L’Ecuyer-CMRG streams to ensure statistical independence in parallel processing, and reviewing mirai’s implementation as the first ‘alternative communications backend for R’.
Travers Ching for a novel idea in extending the original custom serialization support in the package.
Hadley Wickham, Henrik Bengtsson, Daniel Falbel, and Kirill Müller for many deep insights and discussions.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.