
Announcing M4 Pro availability for Cirrus Runners

TLDR: Cirrus Runners for macOS are now powered by the latest M4 Pro chips, which deliver 50% better single-core performance and 2.7x faster memory than the regular M2 chips we used previously. All customers have been migrated to the new infrastructure at no extra cost.

We are excited to kick off 2025 with a huge announcement! Last year we saw tremendous growth among enterprise customers who trusted the Cirrus Runners platform and enjoyed our unique fixed pricing for unlimited usage.

Our unique pricing model, where customers pay for job concurrency rather than per-minute usage, sets us apart in the CI/CD market. This approach creates a perfect alignment of interests: as an infrastructure provider, we are naturally motivated to make your builds as fast as possible, since faster job completion means we can better utilize our hardware. This stands in contrast to traditional per-minute billing models, where slower builds result in higher costs for customers.

When Apple announced their latest generation of M4 Pro chips last November, we saw an opportunity to significantly upgrade our infrastructure. Thanks to the long-term partnerships with our enterprise customers, we were able to make a substantial investment as a bootstrapped company. This investment allowed us to completely replace our existing M2 infrastructure with M4 Pro chips, setting us up for exceptional performance over the next two years.


Speeding up caching on Cirrus Runners

Adding Cirrus Runners to your GitHub project has a great benefit: builds get faster while costs drop and spending becomes much more predictable. However, Cirrus Runners act as self-hosted runners, which means they run outside the GitHub infrastructure hosted predominantly on Azure. The trade-off is that cache accesses can be slower than they otherwise could be.

Do you use actions/cache in your workflows? Then this might be of special interest to you.
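For context, a typical use of actions/cache looks like the following minimal sketch. The runner label, cache path, and key are illustrative assumptions, not taken from the post; substitute your own Cirrus Runners label and dependency paths:

```yaml
# Hypothetical workflow fragment illustrating actions/cache usage.
jobs:
  build:
    runs-on: ghcr.io/cirruslabs/macos-runner:sequoia  # illustrative label, use your own
    steps:
      - uses: actions/checkout@v4
      - name: Cache SwiftPM dependencies
        uses: actions/cache@v4
        with:
          path: ~/Library/Caches/org.swift.swiftpm  # example cache path
          key: spm-${{ runner.os }}-${{ hashFiles('Package.resolved') }}
      - run: swift build
```

Every cache hit or miss in a step like this involves a round-trip to GitHub's cache backend, which is why network distance between the runner and that backend matters.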

Optimizing startup time of Cirrus Runners

We recently added OpenTelemetry Traces alongside Metrics to gain a deeper understanding of areas for potential optimization.

Previously, with only high-level metrics, we had limited visibility into how GitHub Actions jobs run on the Cirrus Runners platform. We only measured a few execution-related metrics: the delivery lag of GitHub webhooks, the scheduling time of a job within the Cirrus Runners Scheduler, and how long it takes to actually start a single-use virtual machine to execute a job. Here is a schema that represents our previously limited visibility:
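The three stage timings above can be thought of as spans in a trace. The following stdlib-only sketch illustrates the idea; the span names and the `Trace`/`Span` helpers are our own illustration, not Cirrus's actual instrumentation (which uses OpenTelemetry):

```python
import time
from contextlib import contextmanager
from dataclasses import dataclass, field

@dataclass
class Span:
    """A named interval with start/end timestamps."""
    name: str
    start: float = 0.0
    end: float = 0.0

    @property
    def duration(self) -> float:
        return self.end - self.start

@dataclass
class Trace:
    """Collects completed spans in the order they finish."""
    spans: list = field(default_factory=list)

    @contextmanager
    def span(self, name: str):
        s = Span(name, start=time.perf_counter())
        try:
            yield s
        finally:
            s.end = time.perf_counter()
            self.spans.append(s)

# Simulate the three previously measured stages of a job's lifecycle.
trace = Trace()
with trace.span("webhook_delivery_lag"):
    time.sleep(0.01)  # stand-in for GitHub webhook delivery lag
with trace.span("scheduling"):
    time.sleep(0.01)  # stand-in for scheduler decision time
with trace.span("vm_start"):
    time.sleep(0.01)  # stand-in for single-use VM boot time

for s in trace.spans:
    print(f"{s.name}: {s.duration * 1000:.1f} ms")
```

Traces go further than these coarse metrics by nesting spans, so each stage can be broken down into its sub-steps and slow segments become visible at a glance.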

Execution Schema