As more and more Internet- and cloud-based software appeared, software developers faced a distinctive challenge: how to make their digital services easy to communicate with and friendly to develop against from other software platforms and frameworks?
The distributed nature of cloud software requires intensive communication between applications that are scattered across different hosts (servers) and interconnected by the TCP/IP and HTTP protocols of the world network. This communication is essential for all websites, web portals and mobile apps which send (submit) and receive (fetch) data to and from cloud/web services. And while TCP/IP places no restriction on the content carried (text or binary), the widely adopted HTTP shaped this communication primarily around text (ASCII headers and, commonly, text-encoded bodies). The data exchanged back and forth is called the payload, conveniently transported over the Internet's network infrastructure. A key aspect of inter-application communication is the address, or more specifically the receiver identification: a concept well defined by the IP address:port pairing and further developed as the URL (endpoint).
With the Internet’s ability to convey data to a specified application, the only real challenge software developers face is the proper definition of the messages (and their properly formatted responses) that cloud services accept over the HTTP protocol. There are several ways to formalise message definitions, and most of them are centred around well-defined, text-based data formats such as XML, JSON, and YAML. Of course, the usage of the HTTP payload is not limited to those, but keeping the message definitions in a human-readable format is a tremendous help for any web page and/or mobile app developer. Communication with a cloud service starts to resemble a close-to-natural-language conversation, and it is very straightforward to analyse and troubleshoot.
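As an illustration, a purchase-details message (with entirely hypothetical field names) might travel over HTTP as a JSON payload like this: compact, yet readable by a human troubleshooting the exchange.

```json
{
  "orderId": 1024,
  "status": "shipped",
  "items": [
    { "sku": "ABC-1", "quantity": 2 }
  ]
}
```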
One really good solution is the combination of the HTTP methods GET, POST, PUT and DELETE (defined in the HTTP RFCs) with a model of an object (or resource) to be used with those methods. With the attributes of those objects defined in JSON, we can devise several entry-point URLs where these resources can be obtained, created, modified or deleted. This is, in essence, how the REST(ful) API approach tackles the variety of services that real-world business cases offer via their digital systems. The examples are countless: getting images from online file storage (the image is the object); fetching and updating purchase details in online stores; sending credit card payment transactions to a payment portal; submitting debt claims to a digital collection agency, and so on. The underlying concept remains the same though: there are a number of accessible digitalised resources, and these can be obtained and modified via the API as long as our web page or app can "speak" in proper, well-defined messages.
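The resource model above can be sketched in a few lines. This toy dispatcher (the `/images/42` path and the `handle_request` helper are purely illustrative, not any real service's API) shows how the HTTP method selects the action on a resource identified by its path:

```python
import json

# In-memory "resource store": each URL path identifies one resource.
store = {"/images/42": {"id": 42, "name": "logo.png"}}

def handle_request(method, path, body=None):
    """Dispatch a (method, path) pair the way a REST endpoint would."""
    if method == "GET":
        resource = store.get(path)
        return (200, json.dumps(resource)) if resource else (404, "{}")
    if method == "POST":
        # Create/replace the resource from the JSON payload.
        store[path] = json.loads(body)
        return (201, body)
    if method == "DELETE":
        return (204, "") if store.pop(path, None) is not None else (404, "{}")
    return (405, "{}")  # method not allowed

status, payload = handle_request("GET", "/images/42")
print(status, payload)  # 200 {"id": 42, "name": "logo.png"}
```

A real service would of course sit behind an HTTP server and validate its inputs; the point here is only the verb-plus-resource pairing.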
If you are a web or cloud software developer, such implementation tasks become quite repetitive. You always end up building the same underlying application infrastructure: message payload definitions, client request handling, endpoint URL management, JSON API processing, asynchronous event notifications, and so on. It is only natural to ask whether all this can be automated to such an extent that developers implement only the business logic and do not have to worry each time about the API's infrastructural support systems. This is the approach Swagger (a popular framework for APIs) took when it defined the OpenAPI specification and implemented a good range of development tools to support it. With version 3.0 the specification has matured and is already being adopted by some of the bigger software companies: Microsoft's ASP.NET Core, Google's API initiatives, Adobe's online services.
Specification and framework
The OpenAPI specification is mainly centred on descriptions of the paths (endpoints/URLs), operations (GET, PUT, POST, DELETE) and objects (data) that form the API conversation. All these definitions are typed in JSON or YAML, a convenient storage format for processing by OpenAPI development tools. The programmer starts with the top-level endpoints, defining a list of allowed actions (operations) and the URL parameters available for these operations. Each operation returns a response (payload), usually in some web-friendly format such as JSON. The response media type is not limited though: it could be anything. The specification is very flexible, and many different responses can be described for a single operation. In the response body definition we can specify a customised object type (even sets of different object types depending on the operation's completion state).
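A minimal sketch of such a definition might look as follows. The service name, path and `Image` schema are invented for illustration, but the structure (paths, operations, parameters, responses, reusable component schemas) follows the OpenAPI 3.0 layout described above:

```yaml
openapi: "3.0.0"
info:
  title: Image Store API        # illustrative name, not a real service
  version: "1.0"
paths:
  /images/{imageId}:
    get:
      parameters:
        - name: imageId
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested image metadata
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Image"
        "404":
          description: No image with this id
components:
  schemas:
    Image:
      type: object
      properties:
        id:
          type: integer
        name:
          type: string
```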
Two typical aspects of an API, links and callbacks, are quite well established in the OpenAPI specification. A link is basically an instruction to the API tools for how the response from one operation can be used as an input parameter to another operation, a mechanism for traversing from one operation to the next. Callbacks are asynchronous HTTP requests that the web service can send to other (consumer, third-party) web applications. Such a call is not strictly part of an operation: it can be a notification about some event of interest that happened with the consumer's data.
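A hypothetical callback definition could look like this: the consumer supplies a `callbackUrl` when subscribing, and the service later POSTs event notifications to that URL (the path and event names are illustrative):

```yaml
paths:
  /subscriptions:
    post:
      parameters:
        - name: callbackUrl
          in: query
          required: true
          schema:
            type: string
            format: uri
      responses:
        "201":
          description: Subscription created
      callbacks:
        onEvent:
          # Runtime expression: resolved from the original request
          "{$request.query.callbackUrl}":
            post:
              requestBody:
                content:
                  application/json:
                    schema:
                      type: object
              responses:
                "200":
                  description: Consumer acknowledged the notification
```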
When dealing with sensitive data, security and especially authentication are also major factors. OpenAPI allows the developer to choose from a variety of well-adopted authentication models, such as HTTP authentication (Basic, Bearer), API keys, OAuth 2, and OpenID Connect.
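For instance, declaring an API-key scheme alongside an OAuth 2 authorization-code flow (with placeholder URLs and scope names) might look like:

```yaml
components:
  securitySchemes:
    ApiKeyAuth:
      type: apiKey
      in: header
      name: X-API-Key          # illustrative header name
    OAuth2:
      type: oauth2
      flows:
        authorizationCode:
          authorizationUrl: https://example.com/oauth/authorize
          tokenUrl: https://example.com/oauth/token
          scopes:
            read: Read access
# Apply either scheme to all operations by default
security:
  - ApiKeyAuth: []
  - OAuth2: [read]
```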
Importance of tools
No matter how good a specification is, it is only as good as the tools and the development framework that make use of it. This is where the Swagger team made a very good effort, providing the two most important tools: a definition editor and a code generator.
The editor is the tool mostly used in the early API definition stage, until all endpoints and operations are fleshed out. Good visual editing is essential to good, structured design (following an established engineering maxim: what you can see, you can improve).
Code-generation tools help the developer create code fragments (or entire stubs), both for the server-side API processing and for the client application engaged in the API protocol exchange. With the help of the code generator we can avoid dealing with the nitty-gritty details of API transaction handling, parameter validation, and proper response composition entirely.
Tool availability is not limited to the originators of OpenAPI (Swagger, https://swagger.io/docs/open-source-tools). Many independent developers publish OpenAPI-related utilities on GitHub, gradually building a critical mass of solid, very functional frameworks for all web and cloud development environments (PHP, Python, Node.js, Perl, AngularJS, React, etc.).
Wide adoption and future
Every new technology is as good as the problems it can solve. The challenge to define a proper and structured API exchange application protocol is a significant part of the cloud- and web-based server applications, as well as their consumer counterparts — websites, mobile apps, and other cloud services.
There are several major factors to weigh when considering whether OpenAPI has the potential to emerge as one of the preferred ways to do API programming:
- Initial abstraction of the paradigm: how well the originators envisioned the future problems their framework can solve
- Openness to other well-established technologies and standards: JSON, HTTP and the authentication models used by the specification
- Tooling and utility development: how much of the developer community engages in building an ever richer ecosystem around the specification
- Vision and evolution: the impact of these is hard to measure, especially since whether a specification/framework is good is sometimes a matter of a programmer's personal opinion, but it is a fact that some software technologies live long while others quickly fade out
- Support and community: a specification cannot survive unless it is extensively used in many software systems, especially ones created by big teams and maintained over long periods of time
So far OpenAPI has "survived" to its 3.0 version, and it seems the good days are still ahead.
Our eCollect development team uses the framework quite extensively, pushing its definition limits and corner cases, and so far OpenAPI is passing the test with flying colours.