Building APIs for Low-Code Automation Tools
Low-code automation platforms (Salesforce Flow, Microsoft Power Apps, Zapier, Workato, etc.) are not just a trend but a significant shift in the IT landscape. Empowering line-of-business users with minimal programming experience to self-service their automation needs without relying on the IT department is incredibly appealing to companies.
Gen AI further amplifies the importance of these platforms by offering a promising future: it pledges to reduce the complexity of building these ‘citizen’ integrations to the bare minimum. Simply articulate the problem and let the bot orchestrate the solution.
It is also interesting to note how companies are building their gen AI capabilities on top of the low-code platform abstractions, like “actions” and connectors, instead of connecting the LLMs directly to the underlying APIs.
No matter who is designing the integration, user or bot, one of the main requirements for the tool’s adoption is consuming capabilities and data from the different systems in a company ecosystem. Most of the time, this means integrating the low-code tool with the APIs of the services in the company API landscape: external SaaS products and internal systems.
The integration with these types of systems presents a unique and challenging situation for API producers:
- The final consumers are non-developers.
- The tool acting as an API client requires a particular API contract with some advanced API features.
When IT departments start connecting their services to these tools, they often find that the APIs are not ready to be consumed:
- The APIs are too complex, expose too many low-level details, and lack documentation that line-of-business users can comprehend.
- The APIs are too generic and lack some of the specific features required by the tool, or, if the features are present, they don’t precisely match the interface the tool expects.
Someone has to pay the price of adapting those APIs so they can be integrated into the tool. In the case of commercial SaaS services like Salesforce or Jira, the tool vendors spend a lot of resources building a collection of “connectors” that can be offered to their customers out of the box.
In the case of internal systems, the company adopting the low-code platform needs to take care of the work. The low-code platform providers offer different tools to build “custom connectors” that wrap the APIs. Teams can use these tools to write client-side code that connects their APIs to the low-code platforms, but this means building and releasing another software artifact tied to the API lifecycle, and any capabilities exposed through the client-side connector are not available to other API consumers.
Another option is to push the problem to the API producer side so that once the capabilities are implemented in the service, the connector is generated automatically from the metadata of the API contract.
The tricky part of this approach is packaging the functionality in a set of resources and operations that can be directly exposed to line-of-business users. A possible approach is to build an “experience API” that offers a simplified, high-level version of one or more APIs and can be reused by multiple low-code use cases.
The other part of the equation is how to expose the consistent API contract that these tools require. If we look at this problem from the capabilities point of view, all the tools are remarkably similar, as the following table showcases:
From the previous table, we can identify what features must be taken into consideration when building APIs that are going to be exposed in low-code automation platforms:
Resources
Most tools still group actions by the common “nouns” or “objects” they relate to. Approaching API design from a RESTful point of view and then planning what actions should be enabled over these resources is still the best way to design APIs, even if the focus during the automation design is the set of actions that can be orchestrated.
Additionally, providing good names and descriptions for the resources is essential since they often end up in the tool’s UI. This can be challenging since the de-facto standard OpenAPI Specification (OAS) provides no standard way of describing resources beyond the path name.
It is also very important to consider the “primary key” of each resource. It should be possible to map that key cleanly to the parameters of any GET operation and to identify, in the schema of the representations returned by the service, the properties that correspond to that key. This is especially important for de-duplication of the data in the low-code platform.
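For illustration, here is a minimal OpenAPI sketch (the Order resource and all names are hypothetical) that uses a tag to give the resource a user-friendly name and description, and keeps the identifier consistent between the GET parameter and the returned schema:

```yaml
openapi: 3.0.3
info:
  title: Orders Experience API
  version: "1.0"
tags:
  - name: Order
    description: A customer order, as presented to line-of-business users.
paths:
  /orders/{orderId}:
    get:
      tags: [Order]
      summary: Get an order
      operationId: getOrder
      parameters:
        - name: orderId          # maps cleanly to the resource's primary key
          in: path
          required: true
          schema: {type: string}
      responses:
        "200":
          description: The requested order
          content:
            application/json:
              schema: {$ref: "#/components/schemas/Order"}
components:
  schemas:
    Order:
      type: object
      properties:
        orderId:                 # the same primary key in the representation, usable for de-duplication
          type: string
        status:
          type: string
```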
Operations
Individual API operations are the main building blocks of the low-code tools. The final goal as an API producer integrating with the platform is to provide a robust set of high-level “actions” that can be orchestrated by line-of-business users. A helpful way of approaching this is by offering the standard set of methods in the HTTP uniform interface plus a set of custom verbs that can offer additional affordances beyond the standard interface. Supporting the uniform interface is still essential to provide basic CRUD capabilities over the resources in the API, even if the target low-code tool does not enforce any specific semantics for the actions.
Additional affordances can be added to the main resource in nested paths to provide other high-level, domain-specific operations. Depending on how they affect the resource’s state, these can be mapped to GET / PUT methods.
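A sketch of such an affordance, modeled as a hypothetical nested /approval sub-resource under the order (PUT because it replaces the approval state; a safe, read-only affordance would map to GET):

```yaml
paths:
  /orders/{orderId}/approval:
    put:
      tags: [Order]
      summary: Approve an order
      operationId: approveOrder
      parameters:
        - name: orderId
          in: path
          required: true
          schema: {type: string}
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                approvedBy: {type: string}
      responses:
        "200":
          description: The order was approved
```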
Making the GET operations over collections queryable is also relevant for the platforms that support it. This includes marking which properties can be used as filters when passed as query parameters, and which parameters control the sorting of the collection.
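A sketch of a queryable collection, with a hypothetical filterable status property and a sort parameter:

```yaml
paths:
  /orders:
    get:
      summary: List orders
      operationId: listOrders
      parameters:
        - name: status           # filterable property exposed as a query parameter
          in: query
          schema: {type: string}
        - name: sort             # sort specification, e.g. "createdAt:desc"
          in: query
          schema: {type: string}
      responses:
        "200":
          description: The matching orders
```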
Finally, providing an operation version that complements the API version is helpful, since the integrations built in low-code tools are coupled to specific operations instead of the whole API. Versioning at the operation level makes it easier to understand the impact of changes in the APIs being consumed.
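OAS has no standard field for a per-operation version, so it would have to travel in a vendor extension; the x-operation-version name below is purely hypothetical:

```yaml
paths:
  /orders:
    get:
      operationId: listOrders
      x-operation-version: "2"   # hypothetical extension: this action's version, evolving independently of the API version
```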
Webhooks
The programming model of low-code integration tools often calls for an integration to be activated when new data is available in the upstream service or when a particular event happens. These tools expect to receive event notifications via webhooks. As part of the API design, besides the set of operations over the resources in the API, API providers must also offer a set of webhooks providing standard notifications over collections of resources (resource created, resource deleted) as well as custom notifications for relevant domain-specific events.
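OpenAPI 3.1 introduced a top-level webhooks section that can describe these notifications; a sketch with hypothetical event names:

```yaml
webhooks:
  orderCreated:                  # standard collection-level notification
    post:
      summary: A new order was created
      requestBody:
        content:
          application/json:
            schema: {$ref: "#/components/schemas/Order"}
      responses:
        "200":
          description: Notification received
  orderApproved:                 # domain-specific notification
    post:
      summary: An order was approved
      responses:
        "200":
          description: Notification received
```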
Pagination
If the API cannot provide webhooks, most tools fall back to client-side polling to trigger integrations based on changes in the dataset of resources offered by the upstream API. The service must implement a pagination protocol compatible with the tool’s requirements, plus a stable resource identifier for de-duplication. Cursor-based pagination is the most popular option, although the exact mechanics vary from provider to provider.
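A minimal sketch of cursor-based pagination over the hypothetical orders collection, with an opaque nextCursor that the client echoes back to fetch the next page:

```yaml
paths:
  /orders:
    get:
      operationId: listOrders
      parameters:
        - name: cursor           # opaque cursor returned by the previous page; omitted on the first call
          in: query
          schema: {type: string}
        - name: limit
          in: query
          schema: {type: integer, default: 50}
      responses:
        "200":
          description: One page of orders
          content:
            application/json:
              schema:
                type: object
                properties:
                  items:
                    type: array
                    items: {$ref: "#/components/schemas/Order"}
                  nextCursor:    # absent on the last page
                    type: string
```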
Authentication and connection testing
Another important concept in low-code integration tools is the notion of “connections,” which are basically the set of credentials that users share to authenticate and access the actions provided by a connector.
OAuth2 is the most popular authentication mechanism for creating connections to SaaS APIs. If implemented, it can allow the line-of-business user to request access directly to the internal API. Using an API key is also possible, but then the line-of-business users or the tool’s admin must be able to configure the credentials for them.
One caveat of this approach is that it is not possible to identify each integration built on the low-code tool as a specific client when applying runtime policies like rate limiting: the client’s identity is tied to the mechanism that maps service credentials to connections.
Another common requirement for API producers is to provide a “connection testing” endpoint that the tool can call to verify that the credentials of a connection are valid.
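A sketch combining an OAuth2 authorization-code security scheme with a hypothetical /ping endpoint the tool can call to test a connection (all URLs and scopes are placeholders):

```yaml
components:
  securitySchemes:
    oauth:
      type: oauth2
      flows:
        authorizationCode:
          authorizationUrl: https://auth.example.com/authorize
          tokenUrl: https://auth.example.com/token
          scopes:
            orders.read: Read orders
paths:
  /ping:                         # hypothetical connection-test endpoint
    get:
      summary: Test the connection
      operationId: testConnection
      security:
        - oauth: [orders.read]
      responses:
        "200":
          description: The credentials are valid
        "401":
          description: The credentials are invalid or expired
```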
Dynamic values
In low-code integration tools, line-of-business users typically provide the parameters of the actions they are orchestrating through a visual UI. In many cases, the tool needs to discover the potential values for the parameters of one action by invoking another operation of the API. For example, to identify the object to be fetched, the tool might need to populate a combo box with the labels of all the resources that can be fetched. This requires linking the operation that returns the list of objects to the parameter for the identifier.
The OpenAPI Specification already provides a “link” element that can be used to connect operations in the API. A similar syntax, expressed through custom vendor extensions, can be used to declare which operations provide the values for parameters of other operations.
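Power Platform custom connectors, for example, define an x-ms-dynamic-values extension that points a parameter at the operation supplying its candidate values. The sketch below applies it to the hypothetical orders example; the field names follow Microsoft’s documentation, but treat the snippet as illustrative (it is written in OAS 3 style for consistency with the earlier sketches, whereas Power Platform connectors are actually defined in Swagger 2.0):

```yaml
paths:
  /orders/{orderId}:
    get:
      operationId: getOrder
      parameters:
        - name: orderId
          in: path
          required: true
          schema: {type: string}
          x-ms-dynamic-values:       # populate a dropdown from another operation
            operationId: listOrders  # the operation that returns the candidate values
            value-collection: items  # response property holding the array of values
            value-path: orderId      # property submitted as the parameter value
            value-title: status      # property displayed to the user (a human-readable label in practice)
```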
AI extensions
Some tools leverage a layer of gen AI to allow customers to create integrations from textual descriptions. The critical aspect of integrating with this layer is providing textual descriptions for the operations, parameters, and fields that are as complete and exhaustive as possible.
Putting it all together
After the internal API has been modified to support all these features, it still needs to be connected to the low-code tool.
Most platforms allow uploading an OpenAPI Specification document that automatically generates a connector. Unfortunately, a standard OAS document does not define enough metadata to describe the contract we discussed, so it can only produce a very basic connector that must then be configured in the tool, mapping the tool’s constructs, like triggers, to the correct API calls.
Some providers have defined OAS vendor extensions that can describe the API’s capabilities as additional metadata. This way, a more powerful connector can be generated straight from the API specification without extra configuration or code. Lacking platform-defined extensions, a similar result could be achieved with internally defined extensions if the provider supports some kind of connector SDK tooling that can be used to generate the connector from the internal annotations.
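Microsoft’s Power Platform is one example: it documents x-ms-* extensions such as x-ms-trigger, x-ms-visibility, and x-ms-summary that its connector generator understands. A sketch on the hypothetical orders operation (again in OAS 3 style for consistency, though these extensions live in Swagger 2.0 connector definitions):

```yaml
paths:
  /orders:
    get:
      operationId: listOrders
      summary: List orders
      x-ms-trigger: batch              # the operation can be used as a polling trigger returning batches of items
      x-ms-visibility: important       # how prominently the tool's UI surfaces the action
      parameters:
        - name: status
          in: query
          schema: {type: string}
          x-ms-summary: Order status   # user-friendly parameter name shown in the UI
      responses:
        "200":
          description: The matching orders
```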
In any case, investing in these platforms introduces a significant challenge for API producers. If many internal APIs must be made available to line-of-business users, the company’s API program team must develop a well-defined strategy. On the other hand, this effort can increase the consistency of a company’s internal APIs and introduce API governance tools that assist API producers in adapting their APIs.
Finally, the use case is well-defined enough to be worth trying to achieve some kind of standardization across vendors. Ideally, we could agree on a common metadata format for “connectivity” that would make APIs, internal ones but especially SaaS APIs, available on the different platforms. The OpenAPI “Workflows” working group could be an ideal venue for working on a standard solution.