DevOps Practice

7 Principles for Using Microservices to Build an API That Lasts


SparkPost launched the first beta version of our cloud-based email delivery service three years ago. At its introduction, a handful of customers sent a few million emails a month. Now, our API is used by tens of thousands of customers—including Pinterest, Zillow, and Intercom—to send more than 15 billion emails a month. That dramatic growth demonstrates how rapidly our business pivoted from providing on-premises email infrastructure to operating as a fully cloud-based email delivery service.

It’s a great business story. But how did we manage change at that scale from a technology and development management perspective? In this article, I’ll review several of the choices and best practices that have helped us not simply keep up with this pace of change, but actually thrive on it.

REST Is Best: Be Practical, Not Pedantic

From the start, we took an “API first” approach. Our email API is our core application, not an afterthought.

The first, and most important, step we took was deciding to use REST for our API. Our philosophy was to choose the following three elements as our API’s foundation:

  1. HTTP: This covers response codes as well as operators. The operators include POST, GET, PUT and DELETE, and they can be mapped to the basic CRUD (create, read, update, delete) operations.
  2. Resources: These are the entities that the HTTP operators act on.
  3. JSON (JavaScript Object Notation): This is a data-interchange format.

Those three elements provide everything needed for a practical REST API, including simplicity, portability, interoperability and modifiability. After the API is built, users can easily integrate against it from any programming language, whether C#, PHP, Node.js, Java or even cURL in a shell. They can do so without worrying about the underlying technology, including its use of multiple microservices.
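
To make that foundation concrete, here is a minimal sketch of how the four HTTP operators map onto CRUD operations against a resource. The base URL and the `templates` resource are hypothetical stand-ins, not the actual SparkPost API; the requests are constructed with Python's standard library but not sent.

```python
import json
import urllib.request

BASE = "https://api.example.com/api/v1"  # hypothetical base URL for illustration

# Create: POST a new entity to the collection
create = urllib.request.Request(
    f"{BASE}/templates",
    data=json.dumps({"name": "welcome-email"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Read: GET a single entity by its identifier
read = urllib.request.Request(f"{BASE}/templates/welcome-email", method="GET")
# Update: PUT the modified entity back
update = urllib.request.Request(
    f"{BASE}/templates/welcome-email",
    data=json.dumps({"name": "welcome-email-v2"}).encode(),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
# Delete: remove the entity
delete = urllib.request.Request(f"{BASE}/templates/welcome-email", method="DELETE")
```

Calling `urllib.request.urlopen(create)` would actually send the request; any HTTP client in any language builds the same four shapes, which is what makes the approach so portable.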

When we created the SparkPost API, we tried not to be too pedantic about following a pure REST model, opting for ease of use instead. Here are two examples that may not follow RESTful best practices:

  1. GET /api/v1/account?include=usage
  2. POST /api/v1/sending-domains/example.domain.com/verify

The first example uses a query string parameter on a GET request to filter what comes back in an entity. In the second example, we use the action word “verify” in the endpoint name, which may not be RESTful. We discuss each new use case and do our best to ensure it’s consistent and easy to use.
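
As a sketch of the first pattern, here is how a server-side handler might interpret the optional `include` filter. The account fields shown are invented for illustration; the point is only that `?include=usage` opts the caller into an extra section of the entity.

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical full account entity; "usage" is an optional, opt-in section
ACCOUNT = {"company": "Example Inc.", "plan": "premium", "usage": {"month": 15000}}

def get_account(url: str) -> dict:
    """Return the account entity, filtered by the optional ?include= parameter."""
    query = parse_qs(urlparse(url).query)
    if "include" not in query:
        # No filter: omit the optional section by default
        return {k: v for k, v in ACCOUNT.items() if k != "usage"}
    include = query["include"][0].split(",")
    return {k: v for k, v in ACCOUNT.items() if k != "usage" or "usage" in include}
```

A pedantic reading of REST might object to shaping the entity this way, but it saves the caller a second round trip for data they usually do not need.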


Evolving and Managing Change

We have many developers and teams working on our API’s microservices, with changes delivered on a continuous basis. We automatically deploy a change to production once an engineer, plus a second reviewer, concludes it has passed our tests. It’s “released” when the product team decides we’re ready to tell customers about the change.

We decided early on to keep our API consistent in its use of conventions and how changes are managed. We established a governance group that includes engineers representing each team, a member of the product management group, and our CTO. This group establishes and enforces our API conventions, which are thoroughly documented.

Documenting our conventions reduces inconsistencies and makes it easier to define each new endpoint. Here are a few conventions we’ve established:

  • URL paths are lowercase, with hyphens separating words, and are case sensitive.
  • URL query parameters and JSON fields are also lowercase, but with underscores separating words, and are case sensitive.
  • Unexpected query parameters and JSON fields in the request body should be ignored.
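These conventions can be sketched as simple checks. The regular expression and helper names below are illustrative, not part of our actual tooling, and the path check deliberately ignores segments that are user-supplied parameters (such as a domain name) rather than fixed path words.

```python
import re

# A fixed path segment: lowercase words separated by hyphens
PATH_SEGMENT = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def path_follows_convention(path: str) -> bool:
    """Check that every fixed path segment is lowercase, hyphen-separated."""
    segments = [s for s in path.strip("/").split("/") if s]
    return all(PATH_SEGMENT.match(s) for s in segments)

def accept_known_fields(body: dict, known: set) -> dict:
    """Silently drop unexpected JSON fields instead of rejecting the request."""
    return {k: v for k, v in body.items() if k in known}
```

Ignoring unknown fields (the last convention) keeps older clients working when a typo or a newer field shows up in a request body, at the cost of not catching the typo for the caller.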

Our governance group also sets the ground rules for how changes can be made and what types of changes are allowed. There are a number of good API changes that are beneficial to users and don’t break their integrations, including:

  • A new API resource, endpoint, or operation on an existing resource
  • A new optional parameter or JSON key
  • A new key returned in the JSON response body

Conversely, a breaking change includes anything that could break a user’s integration, such as:

  • Changing a field’s data type
  • A new required parameter or JSON key
  • Removal of an existing endpoint or request method
  • A materially different behavior of an existing resource method, such as changing the default for an option from “false” to “true”

Those kinds of changes will either break users’ integrations or require the addition of a new version, which introduces more overhead.

Don’t Break Bad When Making Changes – Nearly All the Time

Breaking changes should be avoided, even if they’re the result of fixing bugs or inconsistencies. It’s usually better to work around such idiosyncrasies rather than risk breaking customers’ integrations. If a change is of the breaking variety, we proceed with extreme caution and seek out alternative ways to achieve our goal. Sometimes that can be accomplished by simply allowing the user to change their behavior through an account setting or an API parameter.

However, there are times when the benefits to our users outweigh any potential negatives that would be introduced by a change. In those cases, though, we followed these best practices:

  • We received buy-in from our product, support and developer relations teams.
  • We analyzed API logs to see how many users the change might affect.
  • We gave users at least 30 to 60 days of advance warning about the change.
  • We sent an email or published a blog post containing explicit details of the change and why we were making it.
  • We provided guidance in the API documentation.

One Version to Rule Them All

After making thousands of changes to our API during the past three years, we’re still on the first version. We decided early on not to version our API beyond the first one because doing so adds a level of unnecessary complexity that can slow down user adoption of our latest and greatest functionality. Versioning an API can also slow down development and testing, complicate monitoring and confuse user documentation.

In addition, not versioning our API means we can avoid the controversy that tends to swirl around the subject. There are three ways to version an API, and all of them come with potential pitfalls:

  • Put the version in the URL: Easy to do but a bad choice from a semantic perspective because the entity doesn’t change between v1 and v2.
  • Add a custom header: Also easy to do but not semantically correct.
  • Put the version in the accept header: More semantically correct but the most complicated approach.
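For comparison, here is a sketch of how a server would extract the version under each of the three schemes. The header names and vendor media type are hypothetical examples of each pattern, not anything we ship.

```python
import re

def version_from_url(path: str):
    """Scheme 1: version embedded in the URL path, e.g. /api/v1/account."""
    m = re.search(r"/v(\d+)/", path)
    return m.group(1) if m else None

def version_from_custom_header(headers: dict):
    """Scheme 2: a custom header such as X-API-Version: 2."""
    return headers.get("X-API-Version")

def version_from_accept(headers: dict):
    """Scheme 3: a vendor media type in Accept,
    e.g. application/vnd.example.v1+json."""
    m = re.search(r"vnd\.[\w.-]+\.v(\d+)\+json", headers.get("Accept", ""))
    return m.group(1) if m else None
```

The URL scheme is the easiest to route and cache but versions the entity itself; the Accept scheme versions the representation, which is semantically cleaner but harder for users to discover and debug.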

Use Client Libraries to Help Non-JavaScript Users

Some of our users prefer Python, C#, Java or PHP over JavaScript. We make it quick and easy for them to integrate our API into their applications by maintaining client libraries that offer an easy-to-use set of functions for their code.

Our client libraries have changed over time, and we do version them. We’ve learned that abstraction is hard when wrapping a living, growing API, so we focus on providing a thin layer of abstraction with some syntactic shortcuts that simplify the more complex areas of our API. Doing so lets our users hit any of our API endpoints quickly and with a lot of flexibility, and it also lets us “future proof” our API to some extent.
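The shape of that “thin layer” can be sketched as follows: one generic request method that can reach any endpoint, plus a small number of shortcuts for the complex areas. The class, endpoint names and payload shapes here are illustrative, not our actual library; the network call is elided and the prepared request is returned instead.

```python
class ThinClient:
    """A thin wrapper: one generic escape hatch plus syntactic shortcuts."""

    def __init__(self, api_key: str, base_url: str = "https://api.example.com/api/v1"):
        self.api_key = api_key
        self.base_url = base_url

    def request(self, method: str, path: str, body=None) -> dict:
        """Generic method: callers can hit any endpoint, even brand-new ones."""
        # Network call elided in this sketch; return the prepared request
        return {"method": method, "url": f"{self.base_url}/{path}", "body": body}

    def send_email(self, recipient: str, template: str) -> dict:
        """Shortcut that hides the more complex request body for sending."""
        body = {
            "recipients": [{"address": recipient}],
            "content": {"template_id": template},
        }
        return self.request("POST", "transmissions", body)
```

Because the generic `request` method works against any endpoint, the library keeps working when the API grows a new resource before the library grows a matching shortcut.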

A ‘Documentation First’ Strategy, Too

We treat our documentation as code and use it to document our API changes before we write or change a single line of API code. Doing so helps us enforce our conventions, keeps everything consistent and maintains a good customer experience. It also cuts down on support costs.

We maintain our documentation in GitHub, which makes it easy for technical and non-technical users to contribute changes. We’ve also found that it’s easier to review changes that way. We use API Blueprint Markdown format and Jekyll to generate the HTML docs, along with a nice search service called Algolia. Doing so lets us have full control over the customer experience, including mobile.

For those who don’t want to “roll their own” documentation, we recommend OpenAPI (previously known as Swagger), Apiary and API Blueprint. It’s important to avoid a tool that isn’t well-suited for REST API documentation. We suggest including a bright orange “Run in Postman” button in the documentation so it’s easy to try an API, along with good examples of success and failure scenarios.

Listen to Users

Finally, we recommend that all developers pay attention to what their users have to say. SparkPost has a community Slack channel where thousands of users can easily access members of our product, support, engineering and executive management teams. We also have a dedicated Developer Relations team that’s squarely focused on engaging with the developer community. All of that allows us to listen to our users and incorporate their feedback into our API.

Chris McFadden

As the Vice President of Engineering and Operations with more than 16 years of software and technology experience, Chris McFadden is responsible for development and technical operations of the SparkPost cloud email delivery service, as well as development of the Momentum on-premises MTA. The Engineering team collaborates with Product, Marketing, Support, and Sales to deliver the most advanced email infrastructure available by having a deep understanding of email and customer needs, continuous delivery processes, disciplined engineering and DevOps practices, and an innovative mindset with a focus on excellent customer experiences.
