
Philip Williams
on 22 November 2021

Dell EMC PowerEdge and Canonical Charmed Ceph, a proven solution


Here at Canonical, we have many industry partnerships where we work hand-in-hand to produce the best possible outcomes for the open source community. From getting early access to next-generation hardware to ensure Ubuntu is fully compatible when it’s released, to creating solution-orientated reference architectures for products built on top of Ubuntu, like Charmed Ceph, Canonical is committed to engineering the best possible computing experience.

Recently, our product management and hardware alliances teams came together with Dell Technologies to collaboratively define, test, and validate a Dell EMC PowerEdge based Charmed Ceph reference architecture.

Reference architecture

The goal of this exercise was to produce a guide to building a capacity-orientated Ceph cluster that could be used for block (RBD), file (CephFS), or object (Swift or S3) workloads, and to demonstrate the performance that can be achieved with similar hardware.

We took relatively standard components (four Dell EMC R740xd2 servers with Intel Xeon processors and NICs, a few SSDs, and many high-capacity NL-SAS disks) and connected them together with 25GbE networking.

The R740xd2 provides an ideal building block for Ceph clusters due to its highly configurable nature, which allows users to make performance, capacity, and price adjustments as needed. For example, to create a higher performance cluster, the CPUs could be swapped for another model that has more cores and cache, and the disks could be changed to NVMe and/or SSD if required.
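To give a feel for how a cluster like this is stood up, here is a minimal Charmed Ceph deployment sketch using Juju. The charm names (ceph-mon, ceph-osd, ceph-radosgw, ceph-fs) are the standard ones, but the unit counts, relations, and device paths below are illustrative assumptions; consult the reference architecture for the validated layout.

    # Three monitors for quorum, and one ceph-osd unit per R740xd2 server
    juju deploy -n 3 ceph-mon
    juju deploy -n 4 ceph-osd --config osd-devices='/dev/sdb /dev/sdc /dev/sdd'

    # Join the OSD nodes to the monitor cluster
    juju add-relation ceph-osd ceph-mon

    # Optional: a RADOS gateway for S3/Swift object access, and MDS for CephFS
    juju deploy ceph-radosgw
    juju add-relation ceph-radosgw ceph-mon
    juju deploy ceph-fs
    juju add-relation ceph-fs ceph-mon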

Learn more

During this exercise, we tested the performance of the cluster with a variety of workloads, including small block and large block I/O, both with and without bcache.
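The exact benchmark configuration is detailed in the whitepaper; as a rough illustration, tests of this kind are commonly driven with fio against an RBD image, varying the block size between small (for example 4K random) and large (for example 4M sequential) I/O. The pool and image names below are placeholders.

    # Small block random read against an RBD image (pool/image names are placeholders)
    fio --name=randread-4k --ioengine=rbd --clientname=admin --pool=bench \
        --rbdname=fio-test --rw=randread --bs=4k --iodepth=32 --direct=1 \
        --runtime=300 --time_based --group_reporting

    # Large block sequential read for throughput
    fio --name=seqread-4m --ioengine=rbd --clientname=admin --pool=bench \
        --rbdname=fio-test --rw=read --bs=4m --iodepth=16 --direct=1 \
        --runtime=300 --time_based --group_reporting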

We also demonstrated the scalability of Ceph by adding an extra storage node and re-running the performance tests to show the improvement in cluster performance. We achieved over 75,000 random read IOPS and over 6 GB/s of sequential read throughput from a four-node, capacity-orientated cluster, and showed how our OSD deployment approach using bcache can deliver up to a 2.5x improvement in performance for small block workloads.
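The full bcache deployment approach is described in the whitepaper; conceptually, each NL-SAS data disk is fronted by a partition of a shared SSD or NVMe cache device, and the OSD is created on the resulting bcache device rather than on the raw disk. A hand-rolled sketch of that pairing (the device paths are assumptions) looks like this:

    # Pair an SSD/NVMe partition (cache) with an NL-SAS disk (backing device);
    # created together, the backing device attaches to the cache automatically.
    make-bcache -C /dev/nvme0n1p1 -B /dev/sdb

    # The OSD is then created on the resulting /dev/bcacheN device instead of
    # the raw spinning disk, so small writes land on flash first.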

All of the test results and detailed hardware architecture information can be found in the whitepaper on Dell Technologies InfoHub. We also discussed our findings in a webinar, which is available to watch on demand.
