SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers

Cloudflare launches Containers beta for flexible edge computing

Today

Cloudflare has announced the public beta release of its Containers product, enabling developers to execute code in a secure, isolated environment as part of its connectivity cloud services.

The company said Containers are now accessible to all users on paid plans, providing a platform where applications such as media processing, backend services, and command-line interface tools can run at the edge of the network or as batch workloads.

The integration with Cloudflare Workers means developers maintain a simple workflow using familiar tools.

Cloudflare Containers are designed to extend the existing Workers platform by allowing more compute-intensive and flexible tasks. Developers can deploy globally without needing to manage configuration across multiple regions.

Developers can choose Workers for lightweight requests or Containers for tasks that require greater resources and full Linux compatibility. The company highlighted the ability to run commonly used developer tools and libraries that were not previously available in the Workers environment.

The workflow for deploying applications remains straightforward.

Developers define a Container in a few lines of code and deploy it using existing tools. Cloudflare handles routing, provisioning, and scaling, placing containers in optimal locations across its global network for reduced latency and rapid start times. This design supports use cases such as code sandboxing, where each user or AI-generated session requires a securely isolated environment, a scenario already adopted by customers including Coder.

Configuration is managed via the Container class and a configuration file. Each unique session triggers a new container instance, and Cloudflare automatically selects the best available location to minimise response times for end-users. Initial startup times for containers are typically just a few seconds, according to the company.
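The per-session pattern described above can be sketched in a few lines of Worker code. The following is a minimal illustration, assuming the Container base class and getContainer helper exported by Cloudflare's @cloudflare/containers package; the binding name MY_BACKEND, the port, and the timeout are hypothetical values, not part of the announcement.

```typescript
// Minimal sketch of a Container-backed Worker. The `Container` base class
// and `getContainer` helper are assumed from Cloudflare's
// `@cloudflare/containers` package; `MY_BACKEND` is a hypothetical binding.
import { Container, getContainer } from "@cloudflare/containers";

export class MyBackend extends Container {
  defaultPort = 8080;   // port the containerised app listens on
  sleepAfter = "10m";   // sleep after 10 minutes of inactivity
}

interface Env {
  MY_BACKEND: DurableObjectNamespace<MyBackend>;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // One isolated container instance per session: each unique session
    // identifier keys its own instance, matching the sandboxing use case.
    const session = new URL(request.url).searchParams.get("session") ?? "default";
    return getContainer(env.MY_BACKEND, session).fetch(request);
  },
};
```

The container image itself is referenced from the project's Wrangler configuration and shipped alongside the Worker at deploy time.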

During development, wrangler dev allows for live iteration on container code, with containers being rebuilt and restarted directly from the terminal. For production deployment, developers use wrangler deploy, which pushes the container image to Cloudflare's infrastructure, handling all artefact management and integration processes automatically so developers can focus solely on their code.

Observability and resource tracking are built into the Containers platform. Developers can monitor container status and resource usage through the Cloudflare dashboard, with built-in metrics and access to real-time logs. Logs are retained for seven days and can be exported to external sinks if needed.

Application range

Cloudflare pointed to a range of new applications enabled by Containers, such as deploying video processing libraries like FFmpeg, running backend services in any language, setting up routine batch jobs, or hosting a static frontend with a containerised backend. Integration with other Cloudflare Developer Platform services—including Durable Objects for state management, Workflows, Queues, Agents, and object storage via R2—expands potential application architectures.

"We're excited about all the new types of applications that are now possible to build on Workers. We've heard many of you tell us over the years that you would love to run your entire application on Cloudflare, if only you could deploy this one piece that needs to run in a container," Cloudflare said in its announcement.
"Today, you can run libraries that you couldn't run in Workers before. For instance, try this Worker that uses FFmpeg to convert video to a GIF. Or you can run a container as part of a cron job. Or deploy a static frontend with a containerized backend. Or even run a Cloudflare Agent that uses a Container to run Claude Code on your behalf. The integration with the rest of the Developer Platform makes Containers even more powerful: use Durable Objects for state management, Workflows, Queues, and Agents to compose complex behaviors, R2 to store Container data or media, and more."

Pricing details

The Containers platform is available in three instance sizes at launch—dev, basic, and standard—ranging from 256 MiB to 4 GiB of memory and fractional vCPU allocation. Cloudflare charges based on actual resource usage in 10-millisecond increments.

Memory is billed at USD $0.0000025 per GiB-second with a 25 GiB-hour monthly allowance, CPU at USD $0.000020 per vCPU-second with 375 vCPU-minutes included, and disk usage at USD $0.00000007 per GB-second with 200 GB-hours included. Network egress rates range from USD $0.025 per GB for North America and Europe up to USD $0.050 per GB for Australia, New Zealand, Taiwan, and Korea, with included data transfer varying by region.
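To make the metering concrete, the published rates and allowances can be turned into a worked monthly estimate. The workload figures below (100 active hours, 1 GiB of memory, 0.25 vCPU, 4 GB of disk) are illustrative assumptions, and network egress is excluded.

```typescript
// Worked example of Cloudflare's usage-based Container pricing.
// Rates and free allowances are from the launch announcement;
// the workload itself is hypothetical.
const MEMORY_RATE = 0.0000025;   // USD per GiB-second
const CPU_RATE = 0.000020;       // USD per vCPU-second
const DISK_RATE = 0.00000007;    // USD per GB-second

const FREE_MEMORY_GIB_S = 25 * 3600;  // 25 GiB-hours included
const FREE_CPU_VCPU_S = 375 * 60;     // 375 vCPU-minutes included
const FREE_DISK_GB_S = 200 * 3600;    // 200 GB-hours included

// Usage beyond the monthly allowance is billed at the metered rate.
function billable(used: number, free: number, rate: number): number {
  return Math.max(0, used - free) * rate;
}

// Hypothetical workload: one container active 100 hours in a month,
// with 1 GiB of memory, 0.25 vCPU, and 4 GB of disk.
const activeSeconds = 100 * 3600;
const memoryCost = billable(1 * activeSeconds, FREE_MEMORY_GIB_S, MEMORY_RATE);
const cpuCost = billable(0.25 * activeSeconds, FREE_CPU_VCPU_S, CPU_RATE);
const diskCost = billable(4 * activeSeconds, FREE_DISK_GB_S, DISK_RATE);

const totalCost = memoryCost + cpuCost + diskCost;
console.log(totalCost.toFixed(2)); // roughly 2.08 USD before egress
```

Because charges stop while a container sleeps, the dominant variable in this estimate is the number of active seconds, which is why the automatic sleep-after-timeout behaviour matters for bursty workloads.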

Charges begin when a container is active and end when it automatically sleeps after a timeout, aiming to ensure efficient scaling down for unpredictable workloads. The company plans to expand available instance sizes and increase concurrent limits over time to support more demanding use cases.

Roadmap

Cloudflare outlined upcoming features for Containers, including higher memory and CPU limits, global autoscaling, latency-aware routing, enhanced communication channels between Workers and Containers, and deeper integrations with the broader developer platform. Plans are underway to introduce support for additional APIs and easier data storage access.

"With today's release, we've only just begun to scratch the surface of what Containers will do on Workers. This is the first step of many towards our vision of a simple, global, and highly programmable Container platform."

"We're already thinking about what's next, and wanted to give you a preview: Higher limits and larger instances... global autoscaling and latency-aware routing... more ways for your Worker to communicate with your container... further integrations with the Developer Platform — We will continue to integrate with the developer platform with first-party APIs for our various services. We want it to be dead simple to mount R2 buckets, reach Hyperdrive, access KV, and more. And we are just getting started. Stay tuned for more updates this summer and over the course of the entire year."
