
Data centre expert reveals 8 key capabilities of intent-based networking

24 Jan 18

Article by Sasha Ratkovic, CTO and founder of Apstra

The main motivation behind intent-based networking (IBN) is to allow companies to run their networks reliably and cost-effectively while offering more agility and control, both in terms of new features and vendor choices. Sounds easy! The hard problem is how to compose complex infrastructure capabilities to serve business needs in the presence of constant change in both device capabilities and business rules.

The composition problem is a consequence of the fact that today's data centres act as scale-out computers, and this infrastructure of compute, network, and storage must be composed. But that is only one dimension of the problem. Another is how to incorporate complex business rules and policies.

Infrastructure capabilities, as well as mechanisms to consume them, are subject to constant change. The situation with business rules is even worse, both in terms of the frequency and the complexity of the changes. Every time a change happens, you are required to perform some composition. If you take something out, is what is left still acting as a coherent whole? If you add or modify something, is the new composite valid?

With a single compute virtualisation node, the operating system has to deal with partitioning resources and ensuring isolation. But with the data centre acting as a scale-out computer, the distributed operating system first has to perform composition, and only then resource partitioning and isolation. If you fail at composition due to changes in infrastructure and business rules, you will never even get to consume your precious and expensive scale-out compute resources.

When we first spoke about our vision of Intent-Based Networking, one aspect of IBN that analysts mentioned was orchestration, but there is much more to it than that. Orchestration is an execution workflow and is less concerned with state. In that model, it is assumed that someone else creates the single source of truth about the state of infrastructure and business policies, and that this state is then fed into the orchestration workflow. In IBN, that single source of truth, originating from intent (which we define more precisely below), is at the core of the platform and drives everything else.


So what is intent? At the highest level, intent is a declarative specification of the desired outcome. And the desired outcome is complete automation of the whole network service lifecycle, which consists of the following phases: design, build, deploy, and validate.

At a high level, intent defines the "what", not the "how". A key observation is that intent is dynamic, and a fundamental requirement of an IBN system is that it should be capable of ensuring that intent's expectations are met in the presence of change. Those changes can come from either the operator (a business rule change) or the infrastructure (an operational status change).
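To make the "what, not the how" distinction concrete, here is a toy sketch: the operator declares only the desired outcome, and the system derives the per-device steps from it. The device names, intent shape, and config format are invented for illustration and are not Apstra's actual model.

```python
# Declarative intent: the operator states only the desired outcome ("what").
# All names and the config syntax below are hypothetical.
intent = {"vlan": 100, "members": ["leaf1", "leaf2"]}

def render_config(intent):
    """Derive the imperative, per-device configuration ("how") from intent."""
    return {dev: f"vlan {intent['vlan']}\n interface access vlan {intent['vlan']}"
            for dev in intent["members"]}

configs = render_config(intent)
print(sorted(configs))   # ['leaf1', 'leaf2']
```

If the operator later edits the intent (say, adds `leaf3` to `members`), the system re-renders; the operator never touches the per-device "how" directly.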

In order to ensure that intent's expectations are met, the IBN system has to be the single source of truth (regarding the intended state of both your infrastructure and your business rules) that one can programmatically reason about in the presence of change.

In the absence of this, you will spend most of your time immersed in accidental complexity, developing a coordination layer that synchronises a growing number of sources of truth, each with its own format and semantics.

One can argue that "everything can be done in software", and so can the reasoning logic described above. In a worst-case example, one could envision writing a script that consolidates all the sources of truth scattered across various files and databases and then reasons about the resulting output. Beyond simple automation tasks, the complexity of such a system quickly becomes unmanageable. Add the requirement for intent to be dynamic, and this solution becomes anything but maintainable.

Reasoning about intent programmatically is the key enabler for the automation of all aspects of the service lifecycle such as design, build (including resource allocation), semantic validation, configuration rendering, expectation generation, test execution, anomaly detection, troubleshooting, change request validation and refutation.
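Two of the lifecycle steps named above, expectation generation and anomaly detection, can be sketched in a few lines. The idea: every element of intent implies testable expectations, and an anomaly is any expectation that live telemetry fails to meet. Link names and telemetry shape are hypothetical.

```python
# Hypothetical intent: the set of leaf-spine links the design calls for.
intent_links = [("leaf1", "spine1"), ("leaf1", "spine2"), ("leaf2", "spine1")]

def generate_expectations(links):
    """Expectation generation: every intended link should be operationally up."""
    return {link: "up" for link in links}

def detect_anomalies(expectations, telemetry):
    """Anomaly detection: report every expectation the telemetry fails to meet."""
    return [link for link, want in expectations.items()
            if telemetry.get(link) != want]

telemetry = {("leaf1", "spine1"): "up",
             ("leaf1", "spine2"): "down",   # e.g. a cable fault
             ("leaf2", "spine1"): "up"}

expectations = generate_expectations(intent_links)
print(detect_anomalies(expectations, telemetry))   # [('leaf1', 'spine2')]
```

The point is that the expectations are derived from intent rather than hand-written, so when intent changes, the validation changes with it.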

Reasoning about intent needs to be maintainable and testable in the presence of change. In order to achieve this, the IBN solution is required to have the following eight capabilities:

1. Ability to easily extend the schema of this single source of truth to address new business rules and infrastructure capabilities.

2. Ability to programmatically decompose the single source of truth into subsets of elements of interest as it grows in size and complexity. This decomposition is key to dealing with scaling issues: an architecture in which every piece of logic reacts to every change in intent will not scale.

3. Ability to get notified reactively about the nature of a change (addition, update, deletion) in the intent. This asynchronous, reactive capability (as opposed to polling) is another key to addressing scaling issues as intent gets more complicated.

4. Ability for components to communicate in reaction to a change in intent.

5. Ability for network operators to capture their expertise by inserting their own logic and programmatically reasoning about the intent, all in the presence of change.

6. Ability to add support for new innovative features offered by modern infrastructure platforms.

7. Ability to add support for collecting new telemetry data.

8. Ability to launch Intent-Based Analytics to extract knowledge out of raw telemetry data.
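Capabilities 2 and 3 can be illustrated with a minimal sketch of an observable intent store: subscribers register interest in only a subset of the single source of truth, and are notified reactively of the nature of each change (add, update, delete) rather than polling the whole store. The class and its API are invented for this example, not an actual IBN product interface.

```python
class IntentStore:
    """Toy single source of truth with scoped, reactive change notifications."""

    def __init__(self):
        self._state = {}
        self._subs = []            # (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        """Register interest in only the keys matching `predicate` (capability 2)."""
        self._subs.append((predicate, callback))

    def put(self, key, value):
        kind = "update" if key in self._state else "add"
        self._state[key] = value
        self._notify(kind, key, value)

    def delete(self, key):
        self._notify("delete", key, self._state.pop(key, None))

    def _notify(self, kind, key, value):
        # Reactive push with the nature of the change (capability 3);
        # decomposition means irrelevant changes are filtered out here.
        for predicate, callback in self._subs:
            if predicate(key):
                callback(kind, key, value)

events = []
store = IntentStore()
store.subscribe(lambda k: k.startswith("leaf"), lambda *e: events.append(e))
store.put("leaf1", {"asn": 65001})    # add
store.put("leaf1", {"asn": 65002})    # update
store.put("spine1", {"asn": 65100})   # filtered out by the predicate
store.delete("leaf1")                 # delete
print([kind for kind, _, _ in events])   # ['add', 'update', 'delete']
```

Because the subscriber sees only leaf-related changes and is told whether each is an addition, update, or deletion, its logic stays small even as the store grows.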

We recommend that you use the above checklist when validating IBN compliance and evaluating IBN solutions.

In summary, both the intent definition language (allowing you to define that single source of truth) and the reasoning about intent have to be built into the IBN platform.

Why does "built into the IBN platform" matter? Because it means less code (and fewer bugs) to write, review, and maintain, and fewer tests to write, review, and maintain. In short, more agility and availability. In its absence, you can expect the complexity of your solution to spiral out of control in the presence of change.
