## Why Bizix Builds This
Public cloud is the right answer for plenty of workloads. It is the wrong answer for a growing list of others — sovereign data, predictable cost, hard latency requirements, regulatory constraints, or simply the seventh time a vendor has unilaterally changed your bill.
Bizix designs, builds, and operates hyperconverged compute and storage clusters for organisations that need cloud economics with on-prem control. Each cluster is designed by the same engineers who will support it. That is the entire model.
## What You Get
- Hyperconverged Architecture: Compute and storage live in the same chassis. No external SAN, no separate fabric, no third-party array vendor in the path.
- Distributed Storage, Custom-Tuned: A distributed object-storage fabric replaces traditional RAID arrays — engineered with custom OSD layout, separated cluster and client networks, and tuning passes that move it well past a default install.
- Bizix Operator UI: A purpose-built management plane for cluster operations — designed for the daily reality of running infrastructure, not the once-a-quarter compliance audit.
- Open-Source Foundations: The hypervisor and storage layers are open-source. No per-socket licence fees, no opinionated vendor migration paths, no nasty renewal letters.
- Phased Deployment: Production-grade clusters are stood up in documented phases — from boot media and BIOS, through cluster formation, into storage tuning, hardening, and the first VM. Every step has a runbook.
- Sovereignty: Hardware in your premises (or our Australian floor), data on disks you own, supported by Australian engineers in Australian time zones.
## Architecture At A Glance
| Layer | Detail |
|---|---|
| Form Factor | Multi-node enterprise blade chassis — compact, dense, redundant |
| Compute | Dual-socket nodes with ECC memory, sized to workload |
| Storage | Distributed object storage with separated cluster and client networks; custom OSD & placement tuning |
| Networking | Dual-homed 10Gb interconnect with VLAN segmentation across management, storage cluster, storage client, and tenant traffic |
| Hypervisor | Open-source virtualisation, no per-socket licensing |
| Management Plane | Bizix Operator UI — built in-house, operator-first |
| Resilience | Node loss tolerance, redundant networking, no single point of failure in the storage path |
| Designed In | Australia |
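As one concrete shape for the networking row above, a dual-homed bond carrying per-plane VLANs might be expressed in systemd-networkd. This is purely illustrative — the VLAN names and IDs, the LACP bonding mode, and the choice of systemd-networkd are assumptions for the sketch, not Bizix's actual configuration:

```ini
# /etc/systemd/network/10-bond0.netdev — dual-homed LACP bond (names/IDs assumed)
[NetDev]
Name=bond0
Kind=bond

[Bond]
Mode=802.3ad
TransmitHashPolicy=layer3+4

# /etc/systemd/network/20-bond0.network — hang one VLAN per traffic plane off the bond
[Match]
Name=bond0

[Network]
VLAN=mgmt
VLAN=stor-cluster
VLAN=stor-client
VLAN=tenant

# /etc/systemd/network/30-stor-cluster.netdev — repeated per plane with its own Id
[NetDev]
Name=stor-cluster
Kind=vlan

[VLAN]
Id=30
```

The point of the segmentation is that storage replication, storage client, management, and tenant traffic never contend on the same logical link, even though they share the same physical pair of 10Gb ports.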
## The Bizix Operator UI
Off-the-shelf hypervisor admin tools tend toward one of two extremes: opaque enterprise consoles built around licensing checks, or developer-friendly UIs that assume you read source code for fun. Neither suits the ops team responsible for keeping the cluster up at 03:00.
The Bizix Operator UI is the management plane we wanted ourselves. Cluster-wide visibility, a clean operations workflow, role-aware permissions, and the on-call ergonomics of a tool designed by people who’ve been on call.
## Custom Storage Tuning
A distributed storage fabric out of the box is functional but rarely optimal. Bizix performs custom tuning passes on every cluster:

- separating cluster replication traffic from client RBD traffic onto distinct network planes;
- sizing OSDs to physical drive characteristics;
- placing write-ahead log and metadata journals on flash where the workload demands it; and
- adjusting placement-group counts and CRUSH rules for the actual data shape, not the assumed one.
The result is a cluster that performs at production rather than benchmark levels — under load, under recovery, under partial-failure conditions.
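The terminology above (OSDs, CRUSH, RBD, placement groups) is Ceph's, so the placement-group sizing step can be illustrated with the widely cited rule of thumb: roughly 100 PGs per OSD, divided by the replication factor, rounded up to a power of two. The function name and defaults here are our illustration, not part of any Bizix tooling, and a real tuning pass adjusts per pool rather than applying one formula:

```python
def pg_count(osds: int, replicas: int = 3, target_per_osd: int = 100) -> int:
    """Rule-of-thumb placement-group count for a storage pool:
    ~100 PGs per OSD, shared across replicas, rounded up to a power of two."""
    raw = osds * target_per_osd / replicas
    power = 1
    while power < raw:  # round up to the next power of two
        power *= 2
    return power
```

For a modest 12-OSD pool with three-way replication this lands on 512 PGs; the per-cluster tuning pass then reshapes that figure around the actual data, exactly as described above.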
## Who This Is For
- Organisations leaving public cloud for cost-predictability reasons.
- Australian enterprise and government workloads with sovereignty obligations.
- Research facilities and remote sites where the WAN to a public cloud is unreliable, expensive, or both.
- Operators who’ve been burned by a vendor management plane and want one built by the people supporting it.
## How A Bizix Cluster Goes In
Production-grade infrastructure does not appear by magic. Bizix deployments follow a phased runbook:
- Phase 0 — Boot Media: Per-node boot device installation; consumer USB sticks are explicitly ruled out.
- Phase 1 — Hypervisor Bring-Up: Per-node OS installation and base configuration.
- Phase 2 — Networking: VLAN segmentation, dual-homing, link-state validation across all nodes.
- Phase 3 — Cluster Formation: Quorum, replication factor, and storage-cluster bootstrap.
- Phase 4 — Storage Tuning: The custom-tuning pass — this is where Bizix earns its keep.
- Phase 5 — Hardening: Lock-down, telemetry, backups, monitoring integration.
- Phase 6 — First VM: The handover moment.
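Phase 3's quorum and replication-factor choices interact: a cluster can only lose as many whole nodes as both the voting majority and the data redundancy allow. A minimal sketch of that arithmetic — the helper is illustrative, not Bizix tooling:

```python
def cluster_tolerances(nodes: int, replicas: int = 3) -> int:
    """Whole-node failures the cluster can absorb while keeping both
    quorum (a strict majority of nodes) and at least one data copy."""
    quorum = nodes // 2 + 1           # smallest strict majority
    quorum_headroom = nodes - quorum  # nodes you can lose and still vote
    data_headroom = replicas - 1      # copies you can lose before data loss
    return min(quorum_headroom, data_headroom)
```

For example, a five-node chassis with replica-3 storage tolerates two whole-node failures, while dropping the same chassis to replica-2 cuts that to one — which is why replication factor is fixed at cluster formation, not bolted on later.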