
Dedicated server range 2011 (1 of 6)
05-04-2011, 11:56 AM
Finally! The RBX4 datacentre is pinging. For 3 weeks now it has been running server racks to make sure everything is okay. We now consider everything stable, and it is about to go into production.

You can see the small site by visiting: then click on "<"

Cloud Computing, aka Storage and Virtualisation!

There are some important changes in the 2011 range, as it has been adapted to Cloud Computing, i.e. the two functions of the Cloud: storage on the network and, of course, virtualisation. So we have gone all out on CPU and RAM, as well as on the network technologies that guarantee the bandwidth ...

10x more bandwidth guaranteed for each server!

First, at the network level, we chose to use a "lossless" network, as it is particularly well suited to Cloud Computing. Indeed, it allows storage on the network without corruption of the filesystem, thanks to very specific QoS management. As a result, there is no slowdown on the NAS/SAN, whether over NFS or iSCSI: the throughput is as steady as a local disk. To guarantee lossless operation, we must also bring a lot of bandwidth to each server, simply to avoid saturating ports and slowing down data access. In practice? Each rack holds 48 servers, and we now connect each rack at 2x10G or 8x10G to the heart of the network. Just to compare: on the standard OVH network in 2010 we had put in 2x1Gbps. That works very well ... except in certain cases of storage on the network ... QED

In short, we can roughly summarise it as:
- The SP range stays standard at 100Mbps (uplink 2x1G)
- The EG range moves to lossless 1Gbps (uplink 2x10G)
- The MG range moves to lossless 10Gbps (uplink 8x10G)
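To see how much headroom each tier actually gets, the worst-case oversubscription per rack can be computed from the figures above (48 servers per rack and the uplinks are from the post; the ratio itself is my own illustration, assuming every server pushes its full port speed at once):

```python
# Worst-case rack oversubscription for the three 2011 ranges.
SERVERS_PER_RACK = 48

ranges = {
    # name: (per-server Gbps, uplink link count, Gbps per uplink link)
    "SP": (0.1, 2, 1),    # 100 Mbps servers, 2x1G uplink
    "EG": (1.0, 2, 10),   # 1 Gbps servers, 2x10G uplink
    "MG": (10.0, 8, 10),  # 10 Gbps servers, 8x10G uplink
}

for name, (server_gbps, links, link_gbps) in ranges.items():
    demand = SERVERS_PER_RACK * server_gbps   # all servers saturated at once
    uplink = links * link_gbps
    print(f"{name}: {demand:g}G demand vs {uplink}G uplink "
          f"-> {demand / uplink:.1f}:1 oversubscription")
```

Even in this pathological case, SP and EG racks sit around 2.4:1 and MG around 6:1, which is far more comfortable than the 2010 design.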

The servers are connected to fabric extenders (FEX) at 1G and 10G. Each FEX is connected simultaneously to two rack-row switches, so the L2 connection works in high availability without spanning tree.
With L2 multihoming, if one of the 2 switches fails, everything continues to work through the remaining L2/L3 switch. This is the same technology that we use
on the Private Cloud (pCC), except that on the pCC we have 2 physical networks for each server, and therefore 4x10G or 16x10G of uplink per rack ...

.... 2 / 4 email to follow ...