Overcoming data centre scalability problems for NFV

via Flickr © dvanzuijlekom (CC BY-SA 2.0)

  • Project Calico v1.0 out, designed to tackle scalability problems for NFV and data centres
  • Calico 1.0 proven on an OpenStack cluster of 10,000 virtual machines running across 500 compute hosts
  • Metaswitch gets major foothold with AT&T

Metaswitch Networks has announced the availability of Calico v1.0, which it says will enable network operators of any size that are virtualizing via SDN and NFV to overcome data centre scalability limitations.

Metaswitch has long championed IP as the best long-term way to link virtual machines within and between data centres. It argues that relying on Layer 2 solutions, such as Ethernet VLANs, to do the linking introduces a requirement to manage overlays and can also hit scaling limits. So the Calico project was launched last year to bring this Layer 3 approach to OpenStack.

Calico takes an open source approach, integrating with cloud orchestration systems to provide IP communication between virtual machines, containers and bare metal workloads.

The Calico team claims it can already demonstrate the power of this method, having instantiated a Calico-based OpenStack cluster of 10,000 virtual machines running across 500 compute hosts to prove its scaling ability. It has also managed a container deployment of Calico running up to 50,000 containers across 500 hosts, with setup rates of over 20 containers per second, it claims.

Calico v1.0 is now available for the Icehouse, Juno and Kilo releases of OpenStack. Ubuntu 14.04-based installations are supported via standard packages, Chef recipes or Juju. Packages are also available for Red Hat Enterprise Linux (RHEL) 7, and Mirantis distributions are supported via a certified Fuel 6.1 module.
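For a flavour of the "standard packages" route on Ubuntu, a minimal install sketch might look like the following. Note this is illustrative only: the repository and package names below are assumptions for the purposes of the example, not details taken from the announcement.

```shell
# Illustrative sketch only -- repository and package names are assumed,
# not confirmed by the announcement. Run on Ubuntu 14.04.

# Add a hypothetical Project Calico package archive for the Juno release
sudo add-apt-repository ppa:project-calico/juno
sudo apt-get update

# On the OpenStack controller node
sudo apt-get install calico-control

# On each compute host
sudo apt-get install calico-compute
```

In practice the Chef, Juju and Fuel routes mentioned above automate these per-node steps across a cluster.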

Putting Project Calico to work

Metaswitch and Mesosphere have also just announced a joint project to integrate Project Calico with Apache Mesos, the Mesosphere Datacenter Operating System (DCOS), MesosDNS, Marathon and other frameworks.

The idea is to simplify managing clusters of servers in a data centre environment by allowing Mesos and Mesosphere DCOS users to build cloud-native applications without having to worry about distributed systems architectures. Metaswitch claims they will be able to launch workloads via Marathon and other frameworks over a flat, routed IP network, and users won't have to be networking experts to make it all work.
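For context, a Marathon workload is described by a short JSON document posted to Marathon's REST API. The sketch below is a minimal, hypothetical app definition: the app id, command and resource figures are invented for illustration, and the IP-per-container networking block is an assumption about how the just-announced Calico integration would surface, not a documented field of it.

```json
{
  "id": "/web-frontend",
  "cmd": "python -m SimpleHTTPServer 8080",
  "cpus": 0.25,
  "mem": 128,
  "instances": 3,
  "ipAddress": {
    "groups": ["frontend"]
  }
}
```

A definition like this would typically be submitted with an HTTP POST to Marathon's `/v2/apps` endpoint; the point of the integration is that each of the three instances would then get its own routable IP address on the flat Calico network, with no overlay for the operator to configure.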

Metaswitch’s ‘openness’ approach appears to be paying off in terms of big-carrier interest. It recently had its Perimeta Session Border Controller (SBC) adopted by AT&T on a portion of its network, as the giant telco sets out to virtualize 75 per cent of its network by 2020. These SBCs have traditionally been “hardware-based, rigid and expensive”, says Metaswitch, so the move indicates how serious AT&T is, not just about virtualizing, but about using open standards as much as possible, and in the process letting newer, smaller companies into its core supplier roster.

“There’s lots of open source building blocks that can be used to construct the NFV infrastructure, so there’s really no fundamental reason to embody a bunch of proprietary technology in the infrastructure,” said Metaswitch’s Martin Taylor in a recent conversation with TelecomTV’s Martyn Warwick (see NFV vision now being stood up and realised).

Martin says that telcos “accept that there will be a lot of proprietary software in the virtualized network functions, but [that because] the software will run over this open environment… they see much less risk in deploying a proprietary network function from a smaller vendor because it’s just software and the qualification process is much less, as is the capex exposure - it's much easier and quicker to experiment… and try stuff out.”
