Multicloud, hybrid and distributed cloud pros and cons - and taking an API-first approach

As their cloud deployment matures, many organisations are moving beyond single cloud use and are choosing to adopt distributed, multicloud or hybrid cloud approaches. By Olaf van Gorp, Perforce Software.


This helps them make the most of each type’s advantages, but what is critical to success is maintaining flexibility, control and security regardless of the cloud environment. Creating a more API-centric strategy can make a major contribution.

Understanding the different categories of cloud is a good starting point. Multicloud refers to using multiple cloud providers rather than being limited to just one. For example, a company might have a blend of Microsoft Azure and Amazon Web Services. According to Gartner, 81% of public cloud users are adopting this approach, and it is a good option for businesses that need cloud independence.

Multicloud adoption avoids vendor lock-in and dependency, and improves disaster recovery by replicating workloads across different cloud providers. It can also reduce costs, because workloads can be placed on whichever cloud offers the lowest price for each situation. On the downside, the administrative burden can be onerous and security may be harder to manage than with a single cloud. Estimating costs according to usage is potentially complex, and some of the proprietary features of each cloud are lost because workloads need to stay portable across different environments.

Hybrid cloud and distributed cloud

Hybrid cloud involves a combination of both public and private clouds, although they could all be from the same provider. It is a good option for organisations with regulatory concerns, or that simply have certain content that needs to be held privately in their own data centres, while other content is fine to store in public clouds. As with multicloud, disaster recovery is improved by replicating workloads, but with added benefits such as more flexibility to scale up or down according to demand, thus maximising availability. Hardware configurations that are not in the cloud can still be supported (while minimising unnecessary on-premises equipment), and latency can be reduced. On the other hand, the disadvantages include the increased complexity and administrative effort of maintaining a private cloud, bottlenecks as workloads move between clouds, security challenges, and less visibility of resources once they are split between public and private clouds.

In a distributed cloud environment, a public cloud infrastructure can run in multiple locations: on the cloud provider's infrastructure, on-premises, even in other cloud providers' data centres, while everything is managed from a single point of control. In theory, distributed cloud can overcome some of the management and operational challenges of both hybrid and multicloud. It also supports edge computing, where servers and applications are brought closer to where the data is being created. That can help with local legislation, while leaving the flexibility to use other cloud options in other regions. Distributed cloud is relatively new, so its market-proven pros and cons are still emerging.

The role of API management in modern cloud deployments

Regardless of the selected cloud strategy, an 'API-first' approach will help organisations keep control of these different environments. As part of that, API gateways are increasingly used to create a layer that decouples client applications from the complexity of the cloud landscape, while also securing system access, providing a unified interface, and simplifying management.

Many business APIs are already deployed behind API gateways, which accept API calls, aggregate the services required to respond, and return the appropriate result. Extending the role of API gateways into modern cloud environments is a logical continuation of that single-point advantage.
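To make the aggregation idea concrete, below is a minimal sketch in Python of a gateway handler that serves one client-facing call by fanning out to two backend services. The service names, data and the suggestion that they sit in different clouds are illustrative assumptions, not details from the article.

```python
# Minimal sketch of an API gateway handler that routes a single business API
# call to multiple backend services and aggregates the results. All service
# names and data here are hypothetical placeholders.

from concurrent.futures import ThreadPoolExecutor

# Hypothetical backend calls; in practice these would be HTTP requests
# to services hosted in different clouds.
def fetch_customer_profile(customer_id: str) -> dict:
    return {"id": customer_id, "name": "Example Customer"}   # e.g. hosted on Azure

def fetch_order_history(customer_id: str) -> list:
    return [{"order": "A-1001", "status": "shipped"}]         # e.g. hosted on AWS

def handle_customer_overview(customer_id: str) -> dict:
    """Gateway endpoint: one client-facing call, several backend calls."""
    with ThreadPoolExecutor() as pool:
        profile = pool.submit(fetch_customer_profile, customer_id)
        orders = pool.submit(fetch_order_history, customer_id)
        # Aggregate into a single response, hiding where each piece lives.
        return {"profile": profile.result(), "orders": orders.result()}

if __name__ == "__main__":
    print(handle_customer_overview("42"))
```

The client only ever sees one call and one response; the gateway absorbs the knowledge of where each backend actually runs.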

Cloud service providers do, of course, offer their own API gateway-style services, but choosing a separate gateway provides independence across all providers. Their gateway services can still be used, but will then act as proxies within their own clouds. Having a separate, dedicated API gateway layer provides greater flexibility across regions, networks and deployment zones, while still maintaining centralised management. From that position, components can be provisioned and decommissioned more easily. Security configuration, reporting, and other processes and policies can be uniform regardless of the cloud environment, location or type.
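One way to picture that uniformity is a single declarative route table in which every target, whichever cloud or data centre hosts it, carries the same policy model. The sketch below assumes hypothetical hostnames and policy fields purely for illustration.

```python
# Sketch: one declarative route table with uniform policies, regardless of
# which cloud hosts each target. Names and URLs are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Policy:
    require_auth: bool = True
    rate_limit_per_minute: int = 600
    log_requests: bool = True

@dataclass
class Route:
    path: str
    target: str            # backend URL, may live in any cloud or on-premises
    policy: Policy = field(default_factory=Policy)

ROUTES = [
    Route("/orders",  "https://orders.internal.example-aws.net"),    # AWS-hosted
    Route("/billing", "https://billing.internal.example-azure.net"), # Azure-hosted
    Route("/archive", "https://archive.dc.example.local"),           # on-premises
]

def resolve(path: str) -> Route | None:
    """Gateway lookup: the same policy model applies to every target."""
    return next((r for r in ROUTES if path.startswith(r.path)), None)

if __name__ == "__main__":
    route = resolve("/billing/invoices/2023")
    print(route.target, route.policy)
```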

In practice, this means that irrespective of where resources are hosted, client applications can interact with business APIs seamlessly. If those cloud environments shift in any way, there is no impact on the client app. Both management and reporting across multiple clouds can be centralised. The API gateway sits between the client and the target landscape, regardless of which cloud or edge location the target service sits in.
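The following sketch shows that stability from the client's point of view: the path the client calls never changes, while the gateway's target for it is swapped from one hypothetical cloud endpoint to another.

```python
# Sketch: re-pointing a business API to a new backend without touching the
# client. The path the client calls stays the same; only the gateway's
# target changes. Hostnames are hypothetical.

routes = {"/orders": "https://orders.eu.example-aws.net"}   # current target

def call_via_gateway(path: str) -> str:
    """What the client sees: a stable path, resolved by the gateway."""
    return f"forwarding {path} -> {routes[path]}"

print(call_via_gateway("/orders"))

# The workload is migrated to another cloud; clients are unaffected.
routes["/orders"] = "https://orders.eu.example-azure.net"   # new target
print(call_via_gateway("/orders"))
```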

In a multicloud situation, client apps can automatically be directed to the API gateway closest to them, or to the one judged to deliver the highest performance (for example, available capacity) at that moment. Similarly, in a hybrid cloud set-up, API gateways can be deployed both in the public cloud and on-premises, again with centralised management and reporting.
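A simple way to express "closest or best-performing right now" is to pick the gateway with the lowest current latency measurement. The regions, hostnames and latency figures below are made up for the sketch; a real deployment would typically rely on health checks or DNS-based traffic management.

```python
# Sketch: directing a client to the best-performing gateway at this moment.
# Latency figures and gateway hostnames are hypothetical.

GATEWAYS = {
    "eu-west":  {"host": "gw-eu.example.com", "latency_ms": 18},
    "us-east":  {"host": "gw-us.example.com", "latency_ms": 95},
    "ap-south": {"host": "gw-ap.example.com", "latency_ms": 160},
}

def pick_gateway(measurements: dict) -> str:
    """Choose the gateway with the lowest measured latency right now."""
    region = min(measurements, key=lambda r: measurements[r]["latency_ms"])
    return measurements[region]["host"]

if __name__ == "__main__":
    print(pick_gateway(GATEWAYS))   # -> gw-eu.example.com in this example
```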

Security can also be enhanced, because API security can be implemented in, and delegated to, the API gateway, ensuring a consistent approach. Regardless of where the target is situated, the API gateway will only allow access to those services for authentic, authorised clients that submit valid requests (within their SLA, if applicable).
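As a rough illustration of that gatekeeping, the sketch below checks a hypothetical token against a client registry and then applies a per-client, one-minute sliding-window quota standing in for the SLA; only requests that pass both checks would be forwarded to the target service.

```python
# Sketch: consistent security enforcement at the gateway, assuming a
# hypothetical token store and per-client SLA limits. Only authenticated,
# authorised clients within their quota reach the target service.

import time
from collections import defaultdict

VALID_TOKENS = {"token-abc": "client-1"}          # hypothetical token -> client
SLA_REQUESTS_PER_MINUTE = {"client-1": 100}       # hypothetical per-client quota
_request_log: dict = defaultdict(list)            # client -> recent timestamps

def authorise(token: str) -> str | None:
    """Return the client id if the token is valid, else None."""
    return VALID_TOKENS.get(token)

def within_sla(client: str) -> bool:
    """Sliding one-minute window rate check against the client's SLA."""
    now = time.time()
    recent = [t for t in _request_log[client] if now - t < 60]
    _request_log[client] = recent
    if len(recent) >= SLA_REQUESTS_PER_MINUTE.get(client, 0):
        return False
    recent.append(now)
    return True

def gateway_allow(token: str) -> bool:
    """Admit the request only if it is authenticated, authorised and in quota."""
    client = authorise(token)
    return client is not None and within_sla(client)

if __name__ == "__main__":
    print(gateway_allow("token-abc"))   # True for a valid, in-quota client
    print(gateway_allow("bad-token"))   # False: unauthenticated
```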

Without API gateways, in other words without a well-defined and centrally managed API interaction layer, how do you get a proper overview of operations across clouds? How can you consistently apply API security and report on it in an aggregated, consolidated manner? How can you seamlessly point to one target or another without interrupting traffic or affecting the client?

As cloud deployments evolve and mature, the need to be fluid and scalable, while keeping on top of management and security, will only increase. So putting in place a solid strategy with APIs as a unifying layer will help organisations keep their cloud operations robust and flexible.
