DevOps in the Cloud

Cloud Orchestration Language Roundup: A Comparison

What cloud providers do you work with? Google Cloud Platform (GCP)? Amazon Web Services (AWS)? Microsoft Azure? Which services? Are you using a managed Kubernetes offering such as GKE, AKS, OpenShift or EKS? How do you manage your cloud resources? How do you manage the release cycle of the infrastructure and applications that you deploy?

If questions like these describe the basics of your work, then you are probably also familiar with the console, the API or both of a major cloud provider. You probably know how to use a configuration management or infrastructure-as-code tool such as Ansible, Chef, Terraform or CloudFormation, or something similar. Additionally, you are familiar with the language of at least one of these tools, and you probably have opinions about those languages that inform which tools you are drawn to.

Kubernetes and Ansible use YAML. CloudFormation and Azure ARM can be JSON or YAML. Chef and Terraform have their own languages. (To be precise, Terraform can also be expressed in JSON.)

A comparison of cloud orchestration languages

Whenever you start learning a new tool, it is helpful to already know the tool's language. The strength of using a standard language is that new users feel more at home if they already understand the basics of its syntax. In the same way that many English speakers relate more easily to cognates in French, German or Latin than to words in Greek or Sanskrit, which require transliteration, developers who know YAML will find it easier to learn Ansible than a tool that relies on XML. However, just because you know the language doesn't mean you know the DSL. A DSL might use a language in a novel or even strange way that contradicts what you believe is proper.

A number of variables come into play when a user forms an opinion of a tool. The most important is, "How easy is it for me to create something that does what I want and minimizes frustration later on?" The next questions, which I venture depend on the first, are more analytical: Does the language lend itself to convoluted data structures? Do efforts to remain readable flounder because of inefficient use of space or punctuation?

Some projects, such as HashiCorp's Terraform, abandon standard languages to avoid such issues and instead create their own. The main drawback here is that it is more difficult to integrate the tool with existing projects and software libraries (although providing for input and output in standard languages such as JSON can alleviate such problems).

Cloudify has a DSL based on YAML. Cloudify uses YAML because it is easier to read than JSON, yet powerful enough to define new types when necessary.
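
For example, a blueprint can derive a new node type from a built-in one. The following is only a minimal sketch; the type name and its property are hypothetical:

  node_types:
    my.custom.WebServer:
      derived_from: cloudify.nodes.Root   # extend a built-in type
      properties:
        port:
          description: Port the service listens on
          default: 8080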

Whether using a general language such as YAML or a proprietary one, certain designations are necessary: the type of resource, the name of the resource and often even whether the line refers to a resource, a parameter or a dependency.

To illustrate the differences between the languages and associated products mentioned above, let's look at a few examples. We'll start with a relatively simple one: an AWS VPC named "myVPC." This is a simple resource to define in any language because it has relatively few parameters and no dependencies aside from an AWS account.

Terraform:

 

  resource "aws_vpc" "myVPC" {
    cidr_block       = "10.0.0.0/16"
    instance_tenancy = "dedicated"
    tags = {
      foo = "bar"
    }
  }

 

Terraform's language, HCL, is notable for two reasons. First, the HCL "block" syntax recalls method definition syntax in a number of familiar programming languages, a nod to how Terraform, more than any of these tools, treats infrastructure definition as code. Second, Terraform is the most concise: the example takes only seven lines.

Also, Terraform syntax infers significance from string position: HCL knows that the block type ("resource") comes first, followed by the resource type ("aws_vpc") and then the name ("myVPC"). For readability, this may be an ideal, and is certainly an elegant, method for defining components. Inside the block definition is the API payload accepted by the AWS EC2 service for the creation of a VPC.

One drawback of HCL is its punctuation requirements: braces are required to enclose blocks as well as dictionaries, and arrays require square brackets.

Let’s compare HCL/Terraform to CloudFormation:

 

  myVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      InstanceTenancy: dedicated
      Tags:
        - Key: foo
          Value: bar

 

CloudFormation utilizes YAML as well as JSON. These are data serialization languages, so we are looking at our resource definition less as code and more as data. This results in greater verbosity and decreased readability. Less concise than Terraform, CloudFormation takes eight lines.

Whereas HCL uses "blocks" for resources, CloudFormation templates simply call them "resources." Instead of positional arguments, we have keys and values: "Type" defines the type, and "Properties" introduces the resource API payload accepted by the AWS EC2 service. YAML, however, reduces the need for punctuation: dictionaries are introduced by indentation and lists by hyphens. This can improve readability, and because YAML accepts JSON, anyone who really likes braces and brackets is free to keep using them. On the other hand, it increases the number of lines and characters needed, because even abstract sections of the DSL require keys for introduction.
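
For example, because YAML is a superset of JSON, the Tags list in the template above could also be written with JSON-style braces and brackets inside the same YAML document:

  myVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      InstanceTenancy: dedicated
      Tags: [{Key: foo, Value: bar}]   # JSON-style flow syntax is valid YAML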

Ansible also has a module for creating a VPC in AWS.

 

  - name: create VPC.
    ec2_vpc_net:
      name: myVPC
      cidr_block: 10.10.0.0/16
      region: us-east-1
      tags:
        foo: bar
      tenancy: dedicated

 

Ansible also uses YAML; however, there is no separation between the API payload and other parameters. Authentication parameters, for example, sit at the same level as payload parameters. Ansible uses modules for infrastructure management the way it uses modules for managing applications, which, while novel, is implemented in a clunky and relatively useless way. For example, you need one playbook for creation and another for deletion, as sketched below.
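
To delete the same VPC, a separate task sets the module's state to absent; a minimal sketch, following the module's standard present/absent convention:

  - name: delete VPC
    ec2_vpc_net:
      name: myVPC
      cidr_block: 10.10.0.0/16
      region: us-east-1
      state: absent   # the module defaults to state: present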

Cloudify:

 

  myVPC:
    type: cloudify.nodes.aws.ec2.Vpc
    properties:
      resource_config:
        CidrBlock: 10.10.0.0/16
        InstanceTenancy: dedicated
        Tags:
          - Key: foo
            Value: bar

 

Cloudify also uses YAML. It also works with key-value pairs. Cloudify is even more verbose than CloudFormation.

A node template can be a resource or a group of resources. In Cloudify, there are three types of properties: resource_config, which contains the API payload; client_config, which contains the API authentication; and orchestration properties, which are additional properties that describe how Cloudify will interact with the resource. For example, all resources have use_external_resource, which means that Cloudify can manage resources that it did not create.
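
Adopting an existing VPC might look like the following minimal sketch; the resource_id property is an assumption about how the plugin identifies the pre-existing resource:

  myExistingVPC:
    type: cloudify.nodes.aws.ec2.Vpc
    properties:
      use_external_resource: true
      resource_id: vpc-0123456789abcdef0   # hypothetical ID of a VPC created outside Cloudify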

Whereas Terraform/HCL uses "resource blocks" and CloudFormation uses "resources," Cloudify uses "node templates." A node template can represent a VPC or other cloud resource, or even an entire CloudFormation or Terraform template.

Azure also sports an orchestrator, Azure Resource Manager (ARM), which has its own DSL for describing Azure resources. This is how a virtual network is defined in Azure ARM:

 

  type: Microsoft.Network/virtualNetworks
  apiVersion: '2019-11-01'
  name: myVPC
  location: "[parameters('location')]"
  properties:
    addressSpace:
      addressPrefixes:
      - 10.0.0.0/24
    subnets:
    - name: "[parameters('subnet_name')]"
      properties:
        addressPrefix: 10.0.0.0/24

 

One notable difference in Azure ARM is that the resource entry is not keyed by its name with the definition as the value of that key; instead, the entire resource dictionary is an item in a list, and the name is just another field. This is perhaps more concise, but it may forfeit a modest amount of readability. Also, Azure is similar to Cloudify in that the resource definition's "properties" is not on the same level as other, more general details, such as "apiVersion" or "location."

The syntax for accessing a parameter is also a tad obtuse. Accessing a key in a dictionary via square brackets and parentheses is probably one of the most unreadable methods in the DSLs we are looking at.
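
For context, the "[parameters('location')]" expression refers back to a parameters section declared elsewhere in the same template. A minimal sketch of such a declaration, shown here in YAML with assumed values:

  parameters:
    location:
      type: string
      defaultValue: eastus
    subnet_name:
      type: string
      defaultValue: subnet1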

Like CloudFormation, ARM templates can be written in JSON or YAML.

Terraform can define the same Azure resource using fewer lines.

 

  resource "azurerm_virtual_network" "myVPC" {
    name                = "virtualNetwork1"
    location            = azurerm_resource_group.example.location
    resource_group_name = azurerm_resource_group.example.name
    address_space       = ["10.0.0.0/16"]
    subnet {
      name           = "subnet1"
      address_prefix = "10.0.1.0/24"
    }
  }

 

Cloudify defines the same resource in 13 lines. However, this could be shortened using default values for the inputs (a sketch of such inputs follows the example).

 

  network:
    type: cloudify.azure.nodes.network.VirtualNetwork
    properties:
      resource_group_name: { get_input: resource_group_name }
      name: { get_input: network_name }
      location: { get_input: location }
      resource_config:
        addressSpace:
          addressPrefixes:
            - 10.10.0.0/16
        subnets:
          - name: subnet1
            address_prefix: 10.0.1.0/24
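
The get_input calls above can fall back to defaults declared in the blueprint's inputs section; a minimal sketch, with the default values assumed:

  inputs:
    resource_group_name:
      default: myResourceGroup
    network_name:
      default: myVPC
    location:
      default: eastus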

 

Azure and Amazon expose ARM deployments and CloudFormation stacks as API objects, respectively. This enables orchestrators to manage those resource stacks as single resources.

You can define an ARM deployment in Terraform like this:

 

  resource "azurerm_template_deployment" "deployment" {
    name                = "myArmDeployment"
    resource_group_name = azurerm_resource_group.example.name
    template_body       = file("${path.module}/environment.json")
  }

 

The type requires deployment name, resource group name and template body. The latter can be supplied inline or using Terraform’s file function.

Ansible can also define ARM templates:

 

  - name: Create Azure Deploy
    azure_rm_deployment:
      resource_group: myResourceGroup
      name: myDeployment
      template_link: 'https://…/azuredeploy.json'
      parameters_link: 'https://…/azuredeploy.parameters.json'

 

Cloudify can also orchestrate ARM deployments. Here is a simple infrastructure stack in ARM using Cloudify:

 

  deployment:
    type: cloudify.azure.Deployment
    properties:
      location: { get_input: location }
      name: { get_input: resource_group_name }
      template_file: 'resources/arm/environment.json'

 

Cloudify can also accept a file or inline JSON (or YAML).

Having defined the source code for our infrastructure components, we can now connect other resources and processes to this infrastructure as a dependency. For example, we can connect the Terraform template to a CI/CD workflow or a networking application.

Using the same approach, Cloudify can also call Terraform stacks:

 

  infrastructure:
    type: cloudify.nodes.terraform.Module
    properties:
      resource_config:
        source: resources/terraform/template.zip
        variables:
          access_key: { get_secret: aws_access_key_id }
          secret_key: { get_secret: aws_secret_access_key }
          aws_region: { get_input: aws_region_name }
          aws_zone: { get_input: aws_zone_name }
          admin_user: { get_input: agent_user }
          admin_key_public: { get_input: public_key }
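
A downstream node template can then depend on that module through a relationship, so its own operations run only once the Terraform stack exists. A minimal sketch; the application node and its type are placeholders:

  application:
    type: cloudify.nodes.Root
    relationships:
      - type: cloudify.relationships.depends_on
        target: infrastructure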

 

Cloudify also utilizes other Cloudify deployment stacks as service components:

 

  infrastructure:
    type: cloudify.nodes.Component
    properties:
      resource_config:
        blueprint:
          id: my_deployment
          blueprint_archive: { get_input: infra_archive }
          main_file_name: infra.yaml
        deployment:
          id: my_deployment

 

To summarize, there are many DSLs for orchestrating cloud resources. Each has its own focus: Terraform on simplicity and repeatability, CloudFormation on Amazon, ARM on Azure, Ansible on setting up applications and Cloudify on enabling users to automate any task, including automating Cloudify. We call this “orchestrating the orchestrator.”

Trammell Scruggs

Trammell leads the Ecosystem team at Cloudify. He designs and develops integrations for Cloudify, and manages the community examples. Trammell loves the exposure to real use cases and has fun working with them every day.
