OVH Community, your new community space.

2/8 - Dedicated Server-2014: Infrastructure

10-29-2013, 01:24 PM
Follow up to:
(other posts are being translated as fast as possible)

Where to start? Let's tackle a few details before going back to the general picture.

On the OVH website, we will be offering 3 ranges of servers that will meet the 3 principal needs that we have identified among our customers.

One of those needs was a mixture of an option that we already offer, the vRack, and a new one: servers offered without any public network at all, i.e. accessible only via the private network.

We have had requests from customers looking to outsource their internal infrastructure to us. They have servers, VMs, firewalls, load balancers, etc., all configured to work together. We're talking about 3-tier architectures or even more: when a client requests information from one server, that server in turn queries a third one (a database), and possibly others.
All these servers function together across (several) private networks, with or without a firewall between them. Take, for example, PCI DSS, the industry standard for hosting payment card data.

In technical terms, these are the servers with vRack 1.5/2.0, i.e. a private network card capable of supporting several private VLANs for a single customer.
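As an illustration only (OVH's actual provisioning is not shown here), this kind of per-customer VLAN separation can be sketched on a Linux server as tagged sub-interfaces of the private NIC. The interface name `eth1`, the VLAN IDs and the addresses below are all hypothetical:

```shell
# Hypothetical setup: eth1 is the NIC attached to the private network (vRack).
# Create one tagged sub-interface per private VLAN.
ip link add link eth1 name eth1.10 type vlan id 10   # e.g. application tier
ip link add link eth1 name eth1.20 type vlan id 20   # e.g. database tier

# Address each VLAN from its own private subnet, then bring them up.
ip addr add 192.168.10.2/24 dev eth1.10
ip addr add 192.168.20.2/24 dev eth1.20
ip link set eth1.10 up
ip link set eth1.20 up
```

Traffic on `eth1.10` and `eth1.20` is then isolated at layer 2, which is what allows several private networks to share a single physical private card.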

But that's not all. We have requests from customers wanting to interconnect these 3-tier architectures with their own datacentres, creating one large private network between them and us. It's a question of extending the customer's private network between our datacentres, across our network and its 17 PoPs around the world, and of interconnecting these customers, who arrive with their fibre at 100M, 1G, 10G or 40G, at these 17 PoPs, possibly in multi-point to guarantee redundancy. And always privately, with several private networks.

This means that these customers do not want a public network at all. Nor do they need a public IP. Why? Because the projects and the data hosted are confidential. Think of workstations (DaaS = Desktop as a Service) and Big Data with sensitive information, such as accounting and finance.

In addition, customers want the data to be stored in several datacentres, both for redundancy and for access latency. Indeed, if a customer has offices in Germany, France and Canada, they will want to access the data with local latency and at maximum speed: replication of data across multiple DCs, countries and continents.

We're also talking about BCP and DRP projects:
- Business continuity plan (BCP)
- Disaster Recovery Plan (DRP)
This means that our customers want to use the infrastructures in our 4 datacentre zones (SBG, RBX, GRA and BHS) and thus guarantee the BCP within a few seconds/minutes when switching production from one infrastructure to another. For example, they have one infrastructure in RBX and a permanently updated backup in SBG. If RBX goes down, production switches to SBG. At DRP level, we're often talking about the backup of an internal infrastructure that can take over the business within a few hours.

In these use cases, the customer often also wants to mix and match the services that we offer: dedicated servers with Dedicated Cloud and Public Cloud. This means running the BCP/DRP for as little money as possible, then, when it's really necessary to activate it, doing so as fast as possible, perhaps for only a few days. Combining technologies brings flexibility in creating multi-DC infrastructures.

It's also very interesting for customers that need the public network. They can use the private network to manage all exchanges between the servers, and also mix the technologies and services that we offer, using Dedicated Cloud and Public Cloud at the same time. In fact, the customer often chooses Dedicated Cloud to set up an infrastructure quickly, with several tens or hundreds of VMs communicating with each other. In some cases, for example with databases, the customer wants to use a dedicated server. Why? To get maximum disk I/O, while keeping this server in the same private network as the Dedicated Cloud. So we mix things. Then an activity peak comes and VMs need to be added, not on the Dedicated Cloud, but on the Public Cloud for a few hours or days. The 3 services thus function together, and at the dedicated server level, we have designated the 'Infrastructure' servers to do this job.

We can also mention Hybrid Cloud, where the customer mixes the Private Cloud that they manage on their side, with the Dedicated Cloud and Public Cloud on our side, all being on the same private network between our datacentres and their offices or datacentre.

We thus started with a clean page when designing the servers connected to both the public and private networks, a reflection that also includes servers with only the private network (but a redundant one).

We're talking about servers in 2 x 1G and 2 x 10G, but with mixed configurations:
- 1 x 1G to the public network and 1 x 1G to the private
- 2 x 1G to the private network in LAG, or in 2 physical private networks
- we are seeing whether 4 x 1Gbps makes sense.
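The 2 x 1G LAG option above can be sketched with Linux 802.3ad (LACP) bonding. This is only an illustrative sketch: the interface names and the address are hypothetical, and the switch side must be configured for LACP as well:

```shell
# Hypothetical interface names; requires the Linux bonding kernel module.
modprobe bonding

# Create an 802.3ad (LACP) aggregate and enslave the two 1G NICs.
ip link add bond0 type bond mode 802.3ad
ip link set eth1 down
ip link set eth1 master bond0
ip link set eth2 down
ip link set eth2 master bond0

# Bring up the aggregate and address it on the private network.
ip link set bond0 up
ip addr add 192.168.0.2/24 dev bond0
```

With both links active, the aggregate survives the loss of either NIC or cable and offers up to twice the bandwidth to the private network, which is exactly the redundancy/throughput trade-off the LAG option targets.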

The "infrastructure" range consists of 3 "sub-ranges":
- Tier-3 server
- Tier-4 server
- Tier-4 BIG server

The Tier-3 servers have:
- 2 x 1Gbps
- 1 power inlet in the rack
- 1 power supply per server
- disk cold-swapping (the server must be shut down to change a disk)
- 1 CPU

The Tier-4 servers have:
- 2 x 10G
- 2 power inlets in the rack, supplied with 2 completely different energy sources
- 2 redundant power supplies per server, each being able to power the server entirely
- disk hot-swapping (broken disks can be changed instantly)
- 2 CPUs

The Tier-4 BIG servers are like Tier-4, but consist of several chassis. It's thus a question of also ensuring Tier-4 at the disk level, connecting the chassis together while ensuring redundancy on the SAS. The aim is to withstand faults or incidents without affecting the service running on the server. What if an LSI card managing the disks breaks down? It's no big deal.
We don't just mean Tier-3 or Tier-4 at the rack level, but at the server or service level. And we can take it even further: to increase the chassis's electrical redundancy, we can provide servers with 3 power supplies each: 2 redundant power feeds + 1 battery directly in the server, which can power the chassis for 15 minutes. If everything goes down, the server continues to function with all of its chassis.

At the network level, we're thinking of launching mixed networking, i.e. 1 NIC on the public network and 1 NIC on the private, but we already have requests for very high redundancy, guaranteed by LAGs (Link Aggregation Groups).
This enables us to guarantee very high availability of the network, and also to double the bandwidth between the server and the private network.
We are talking very high availability.

As I said, we started with a clean sheet. The three of us worked for 2 months, in the utmost secrecy. Yesterday we gave an in-house presentation to the teams that have to prepare the presentations for customers and collect feedback. And during the presentation, funnily enough, I realised that it was very similar to the EG/MG/HG/HG-BIG offers (!!) Following the discussions, we have thus decided to drop the names "Tier-3", "Tier-4" and "Tier-4 BIG" and to re-use EG/MG/HG at infrastructure level. This will make things simpler for you, and it's true that the EG/MG/HG servers already have 2 networks.

And so we have the "Infrastructure" range, made up of 3 "sub-ranges" of servers:
- EG/MGs which are Tier-3
- HGs which are Tier-4
- HG BIG which are Tier-4 BIG
and also
- Dedicated Cloud (pCC) which is Tier-4 BIG
- Public Cloud (pCI 2.0) which is Tier-3

In the infrastructure, you can manage several private networks (several VLANs per customer) and thus use sub-networks between the servers. The servers/VMs can be in any of the 4 datacentre zones that we manage.

We also offer:
- Managed Load Balancing, the service that enables the load to be balanced on:
a) Public IPs
b) Private IPs, meaning that in your internal architecture you may need to create clusters to be used by other servers (all privately)

- Public/Private routing with a firewall

- The NAT service, which makes it possible for servers/VMs with a private IP to reach the internet
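The translation such a NAT service performs can be sketched with Linux iptables masquerading on a hypothetical gateway; the interface name and the private subnet below are assumptions for illustration:

```shell
# Hypothetical gateway: eth0 faces the public network,
# and the private subnet 192.168.0.0/24 sits behind it.

# Allow the box to route packets between interfaces.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Rewrite the source address of outbound private traffic to the
# gateway's public IP, so private servers/VMs can reach the internet.
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE
```

Replies are translated back automatically by connection tracking, so the private servers never need a public IP of their own.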

- The DHCP service that enables VMs to take a public or private IP

- The VPN that allows a private network to interconnect with:
a) a mobile or computer, and to guarantee mobility for your employees
b) an office or datacentre, using your public connection (ADSL, an ISP fibre)
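As a sketch of option b), the office side of such a VPN could be a standard OpenVPN client pointed at a VPN endpoint in front of the private network. The endpoint hostname, port and certificate file names below are all hypothetical:

```shell
# Write a minimal, hypothetical OpenVPN client configuration for the
# office router: it tunnels over the office's public connection to a
# VPN endpoint that bridges into the hosted private network.
cat > office-to-private.conf <<'EOF'
client
dev tun
proto udp
remote vpn.example.net 1194   # hypothetical VPN endpoint and port
ca ca.crt                     # CA certificate (hypothetical file names)
cert office.crt               # this office's certificate
key office.key                # this office's private key
persist-key
persist-tun
EOF
```

Once the tunnel is up, routes pushed by the server make the hosted private subnets reachable from the office LAN without any public IPs on the servers themselves.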

- Dedicated Connect, which enables an office or datacentre to be connected by fibre optic, using our network and its 17 PoPs around the world. We're talking about dedicated 100M/1G/10G/40G. We do have to build the link between one of our PoPs and your office or datacentre, but we know we can offer it with our local partners in almost all the major cities of Europe, the USA and Canada.

At the level of server offers and prices, everything depends on the choice of disks and the type of RAID redundancy. This is why we have chosen to conduct a survey: a presentation of the precise range, then the choice of 18 models in each range, with the prices next to them. The aim is to narrow the choice down to 3 to 5 models instead of 18. This will enable us to confirm which offers are interesting and which are not. We will release the servers and prices that you select from the "Infrastructure" range. In any case, this range will be launched at less than 100/month.

Best wishes,