EOSphere Infrastructure Upgrades


“A ‘bare-metal server’ is a computer server that is a ‘single-tenant physical server’. The term is used nowadays to distinguish it from modern forms of virtualization and cloud hosting.” – Wikipedia 2019

When we first announced our EOS Block Producer Candidacy in February 2018 we advised that we would be using hyperscale cloud providers for our infrastructure, so that we would be able to “right size” our deployment before we committed to our own physical equipment.

In September 2018 we decided to start investigating various technology options and Data Centre providers.

We had found cloud hosting quite capable up to that point, and it can service almost any workload you decide to deploy. However, the more resources your service demands, the higher the price, and it can end up being very expensive.

Opinions on cloud hosting versus bare-metal vary considerably depending on who you speak to. Running in a provider’s cloud means you may share tenancy on hardware, which can impact performance, especially if you have a “noisy neighbour”. It also means you don’t have complete control of the actual infrastructure deployment, including security and privacy, which is hidden from you as a client of the service. Cloud, of course, allows you to quickly deploy, remove, scale and downsize with a few mouse clicks, and makes running a Data Centre as easy as it can get, possibly without needing any advanced technical expertise.

In November 2018 we tested various hardware and topology options and finally decided on a move to our own bare-metal deployment. The main drivers for the decision were, quite simply, lower price and higher performance. The move means we are responsible for our deployment in its entirety, from BIOS-level access on servers through to individual policies on our Next Gen Firewall clusters; this requires the technical expertise mentioned earlier.

Over December we proceeded with our migration off cloud onto our own equipment, and we are very happy to announce that, as of the end of January 2019, this move is complete in its entirety.

The Solution

Data Centres

Our infrastructure deployment runs across two geographically separate Data Centres in Sydney, Australia.


Data Centre 1:

  • N+1 Power Redundancy
  • N+1 Cooling Redundancy
  • SSAE16/ISAE3402 SOC-1 Type II
  • ISO 27001
  • PCI DSS 2

Data Centre 2:

  • N+1 Power Redundancy
  • N+1 Cooling Redundancy
  • SSAE16/ISAE3402 SOC-1 Type II
  • SSAE16/ISAE3402 SOC-2 Type II
  • ISO 27001
  • PCI DSS 2

In order to scale structurally across multiple blockchain networks while ensuring security and performance, we have created our own building-block approach, demonstrated in the high-level topology below.


Client API requests come in from diverse upstream service providers with DDoS protection, landing on a cluster of load balancers located across both SY3 and SY4. These load balancers provide SSL offloading and acceleration, forwarding HTTP requests in an intelligent, highly available round-robin fashion to the relevant EOSIO API servers.
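As an illustration of this pattern (not our actual configuration; hostnames, addresses and file paths here are hypothetical), an SSL-offloading round-robin front end with per-node health checks could be sketched in HAProxy terms:

```
# Terminate TLS at the edge and forward plain HTTP to the API pool
frontend eos_api
    bind *:443 ssl crt /etc/haproxy/certs/api.pem
    mode http
    default_backend eos_api_nodes

# Round-robin across nodes in both facilities; health checks drop
# any nodeos API that stops answering /v1/chain/get_info
backend eos_api_nodes
    mode http
    balance roundrobin
    option httpchk GET /v1/chain/get_info
    server sy3-api-1 10.0.10.11:8888 check
    server sy4-api-1 10.0.20.11:8888 check
```

Because each backend entry carries its own health check, a failed node in one Data Centre is removed from rotation while the other site keeps serving requests.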

API Nodes

Each blockchain has its own group of API nodes located in each Data Centre to ensure high availability. These SuperMicro servers have fast, enterprise-class Intel Xeon CPUs, a large amount of RAM and, most importantly, Samsung 970 EVO NVMe high-speed storage to ensure swift access to the on-disk data required by nodeos.
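A common way to watch API nodes like these is to compare the head block time reported by nodeos’ `/v1/chain/get_info` endpoint against the current time. This is a minimal sketch of that check (the helper and sample payload are illustrative, not our actual monitoring tooling):

```python
import json
from datetime import datetime, timezone

def head_block_lag_seconds(get_info_json: str, now: datetime) -> float:
    """Return seconds between 'now' and the node's reported head block time.

    nodeos timestamps look like '2019-01-31T10:00:00.000' (UTC, no suffix).
    A large lag suggests the node is still syncing or has stalled.
    """
    info = json.loads(get_info_json)
    head_time = datetime.fromisoformat(info["head_block_time"])
    head_time = head_time.replace(tzinfo=timezone.utc)
    return (now - head_time).total_seconds()

# Example with a hypothetical get_info response body:
sample = '{"head_block_num": 1000, "head_block_time": "2019-01-31T10:00:00.000"}'
now = datetime(2019, 1, 31, 10, 0, 1, tzinfo=timezone.utc)
print(head_block_lag_seconds(sample, now))  # 1.0
```

In practice the JSON would come from an HTTP GET against each node, and an alert would fire when the lag exceeds a few block intervals.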


The internal private network is made up of multiple VLANs to keep a degree of logical separation between services. Physical servers have Gigabit links, with a 10 Gigabit backbone. VLANs are spanned across both SY3 and SY4; even though the sites are over 1 km apart, latency is extremely low.

Block Producer Nodes

Our Block Production nodes run on fast, current-generation Intel Xeon E3-1240 v6 CPUs with DDR4 RAM, also in SuperMicro servers. Although these are not the most expensive CPUs, having BIOS-level access allows us to adjust the processor performance settings, giving us a sustained clock speed of 4.0–4.1 GHz with excellent results. Below are our Jungle Testnet performance metrics, which are reliably in the 1.1 ms range.


We have two Juniper SRX firewall clusters, also spanned across both SY3 and SY4. These firewalls handle all Network Address Translation (NAT) and firewall policies for internet communication other than the API requests, which arrive on a separate internet feed through the load balancers. The internet feed to the firewalls is also provided by diverse upstream providers with DDoS protection.

Other Services

I’m sure you realise that a Data Centre is not just made up of API and BP nodes. We also run a monitoring and management environment, a development environment for EOSphere Dev, and other breakthrough EOSIO services such as privEOS, as well as a VMware vSphere deployment.

I hope this article leaves you as comfortable with our new infrastructure and deployment, and with our capability to maintain it, as we are. As more EOSIO services and sister/side chains are requested by the ecosystem, only time will tell how large our deployment will get, but we know we are absolutely ready for it.
