The Missing Pieces of Design First Workflows

Flowchart example

Design First is a practice that encourages us to design any API change before writing the code. The goal is to do the design work upfront to produce better API changes. Design First is normally done by modifying an OpenAPI document first and then using that change as guidance for what to develop.

But Design First is only a small piece of the larger API development process. There are more practices and patterns we need in order to make design a cornerstone of building good APIs.

When taken alone, Design First seems to imply that the process is linear—first there's the design, then development, then deployment. But design doesn't work this way. Design is something we're always doing, even when writing code.

What, then, are the practices we need to add to our Design First workflows to make Design First work?

Why We Are Writing SDKs

I’m new around here, and my first post is about something I really love… code that makes life easier!

To begin with, Optic's CLI is ridiculously easy to start using within your application: head to the root of your project directory, run api init, and configure your .optic.yaml to wrap your application's start command. Tada, you've started gathering data about your API endpoints just by using them. Super simple and great, right? To see the full process and implement it yourself, please check out our documentation.
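As a rough sketch, the wrapping described above can look something like the config fragment below. The task name, command, and port are placeholders, and the exact keys may differ between Optic versions, so treat this as illustrative and check the documentation for the current schema:

```yaml
# .optic.yaml (illustrative sketch; verify key names against the Optic docs)
name: my-api
tasks:
  start:
    # Your existing application start command, wrapped by Optic
    command: npm start
    # Where Optic listens so it can observe traffic to your API
    inboundUrl: http://localhost:4000
```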

But what happens when you can't do that? There are plenty of situations where it's complex to work in a team and wrap the command that starts your application. What happens when you use feature flags on that command, when you have to make operating-system-specific changes, or when your API testing runs entirely in-process and never hits the network?

How to Discuss API Changes During Code Review

Code reviews are an essential part of any high-quality software development process. A thorough code review can prevent bugs and regressions from slipping into production, improve code quality and consistency, and ensure your team knows about important changes before they go live.

While looking at code can help you find flaws in your logic, it's notoriously hard to trace changes through your application to see how they might actually affect your API.

It’s OK to Break Your API


Great APIs can integrate with anything, but in practice they don't integrate with everything. Yet much of the industry treats its APIs like a fragile, priceless vase: not to be altered, for fear of diminishing their value or breaking them outright.

Simultaneously, these APIs are integrated with a growing number of real-world applications and systems. APIs connect the Internet to smartphones, sensors, and cars. They allow systems to communicate with each other and businesses to quickly add new features to applications. We want these things to work and they must be able to evolve.

Most organizations today produce more private and internal APIs than public APIs. So, the idea that an API must work for everyone who may use it has become less relevant over time. However, if you build an API, it must still work well for users. And while most APIs have a small number of consumers, changes to an API can lead to startling impacts in the real world.

You can avoid these unexpected consequences when you change your API. All it takes is effective communication and building strong relationships between you and the consumers of your API.

API Testing: Methods and Best Practices

A laboratory worker is running API tests with test tubes

Application Programming Interface (API) testing is more complicated than testing a single-host application. Single-host applications can often be covered by unit and integration tests, but an API functions only if there is both a requester and a respondent. To fully test your API, you have to think about testing the application from both of these perspectives.

Testing the API from the perspective of the requester means testing authentication, data accuracy, data formatting, overall consistency, and relevance of available documentation. The respondent needs unit tests for the components it uses and the data transformations it makes. It also needs to ensure it can both handle properly formulated requests and reject requests that are malformed or maliciously formed. You also need to test the connection to the data layer to ensure that whatever source the API draws from is accessible and understandable. This article will go into more detail about the tools and best methods for effectively testing your API.
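A minimal sketch of what those respondent-side checks can look like, using a hypothetical handler and field names invented for illustration. The handler accepts well-formed requests and rejects malformed or incomplete ones, and the assertions exercise it from the requester's perspective (status codes and response shape):

```python
import json

def handle_request(raw_body: str) -> tuple[int, dict]:
    """Respondent-side handler for a hypothetical endpoint: accept
    well-formed JSON carrying a 'user_id' field, reject anything else."""
    try:
        body = json.loads(raw_body)
    except json.JSONDecodeError:
        # Malformed request: not valid JSON at all
        return 400, {"error": "malformed JSON"}
    if not isinstance(body, dict) or "user_id" not in body:
        # Well-formed JSON, but missing a required field
        return 422, {"error": "missing required field: user_id"}
    return 200, {"user_id": body["user_id"]}

# Requester-perspective checks: status codes and response shape
assert handle_request('{"user_id": 7}') == (200, {"user_id": 7})
assert handle_request('not json')[0] == 400
assert handle_request('{"name": "x"}')[0] == 422
```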

Writing Design Guides for API Changes

Optic changelog in GitHub

I've worked on two different products that provided API design guide tooling. The idea for both was the same—provide a tool that helps companies design consistent APIs and helps communicate good design patterns to all of their developers.

It's common for a company to take inventory of their APIs and see lots of inconsistency. One API uses camel case while another uses snake case. One puts the version in the URL, another doesn't. These companies try to address this by putting together their own set of API guidelines and adopting tools that help them apply these guidelines to their APIs during the development process. This is where our design guide tooling came in.

Design guide tooling—which goes by names like style guide, standardization, governance, and linting—allows people to express guidelines in code and apply those guidelines to an OpenAPI document. Imagine a rule that makes sure every API endpoint requires authentication and returns a 403 when credentials are not provided. When a rule like this fails, the tools can prevent the teams from merging the code until they resolve the issue. It's a big win for scaling an API strategy across many teams.
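To make that concrete, here is a small sketch of how such a rule might be applied to an OpenAPI document parsed into a dict. This is an illustrative stand-in, not any particular linting tool's implementation:

```python
# Hypothetical design-guide rule: every operation must declare security
# and document a 403 response. `openapi` is a parsed OpenAPI document.
HTTP_METHODS = {"get", "put", "post", "delete", "patch", "options", "head"}

def check_auth_rule(openapi: dict) -> list[str]:
    failures = []
    for path, item in openapi.get("paths", {}).items():
        for method, op in item.items():
            if method not in HTTP_METHODS:
                continue  # skip non-operation keys like "parameters"
            if not op.get("security"):
                failures.append(f"{method.upper()} {path}: missing security")
            if "403" not in op.get("responses", {}):
                failures.append(f"{method.upper()} {path}: no 403 response")
    return failures

# An endpoint with neither security nor a 403 response fails both checks
spec = {"paths": {"/users": {"get": {"responses": {"200": {}}}}}}
```

A CI step can then fail the build whenever `check_auth_rule` returns a non-empty list, which is the "prevent merging until resolved" behavior described above.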

But there's always been a nagging feeling something is missing.

When Real Traffic Smashes into Your API

Beta Alert:

Starting today, you can now run Optic in Real API Environments. Sign up here!

Data plotted from the Large Hadron Collider

Once our APIs are deployed in a real environment like staging or production, traffic races in from all directions, interacting with the API in the ways we expect, and in ways we could never anticipate. These 'surprises' can teach us things and help us build better designed and more robust APIs.

Physicists smash particles into one another in giant particle accelerators looking for surprising patterns, and for new information that questions their assumptions about reality.

When we put our APIs in the path of real traffic, reality smashes into our API specifications, our well-landscaped happy paths, and our assumptions. Hopefully, we learn, revise, and improve our work.

There is so much to be learned from looking at our API's actual behavior and usage in the real world. But are we really looking? Can we tell if our APIs are working as expected from the logs we collect today?

Users have long asked for options to run Optic in real environments to get more observability into their API's real behavior. We have even observed teams using the local-cli in their real environments without official tooling or support from the main project. Now you can join the beta and start monitoring your deployed APIs with Optic. You can also check out the docs to learn more about how the integrations work.

Optic contract testing diagram

Integrating Optic with IntelliJ

Optic ensures every change to your APIs gets documented, reviewed and approved before it is merged. By observing traffic that is sent to your API, Optic informs you of any changes it sees in behavior, allowing you to keep the specification up to date. If you run IntelliJ IDEA, you can integrate Optic into your workflow and always catch the latest changes as you work. It runs alongside your existing project configuration, in any run mode including debugging.

The Git for APIs

To everyone who has tried Optic, contributed code, told us your stories, written about the project, and helped us chart our course—we (the maintainers) want to say thank you! We know it's been a crazy year, and API tools are not the most important thing happening in the world right now. Still, you've made the time for our open source project, and in doing so, helped keep us all employed. We can't thank you enough for giving our team the opportunity to work on these problems. Thank you, sincerely.

Let’s Talk about API Coverage

Today I’m really excited to show off a new ‘primitive’ for API tooling that Optic has been working on. Everyone, meet spec coverage.

API spec coverage measures how much of your API spec was covered by traffic, and like everything else we do, it’s open source!

The Optic CLI now produces a coverage report for any of your test scripts, Postman collections, or even a batch of manual curl requests to a server.
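The idea behind the metric can be sketched in a few lines: coverage is the fraction of operations in the spec that observed traffic actually exercised. This is an illustrative simplification, not Optic's actual implementation:

```python
def spec_coverage(spec_operations, observed_traffic):
    """Illustrative spec coverage: the fraction of (method, path)
    operations in the spec that appear at least once in the traffic."""
    spec = set(spec_operations)
    if not spec:
        return 1.0  # an empty spec is trivially covered
    seen = {op for op in observed_traffic if op in spec}
    return len(seen) / len(spec)

spec_ops = [("GET", "/users"), ("POST", "/users"), ("GET", "/users/{id}")]
traffic = [("GET", "/users"), ("GET", "/users"), ("POST", "/users")]
# The traffic above exercises 2 of the 3 spec operations
```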