

Automation has value only insofar as it requires no compromises in architecture (to integrate with existing systems), extensibility (to address elements that are not automated), or performance. This page provides details about how Live API Creator delivers enterprise-class performance.

Live API Creator delivers on all of the best-practice patterns described below, and the relevant optimizations are revisited on each logic change. Just as a database management system (DBMS) optimizer revises retrieval plans to maintain high performance, Live API Creator performance remains high over maintenance iterations.


Minimize Client Latency

Modern applications may often be required to support clients connected through high-latency cloud-based connections. The following are designed to minimize client connection latency:

Rich Resource Objects

When retrieving objects for presentation, you can define Resources that include multiple types, such as a Customer with their Payments, Orders, and Items. These are delivered in a single response message, so that only a single trip is required.

Note that this requirement is not fully satisfied by Views. Views are often not updatable, and joins result in Cartesian products when multiple child tables are joined for the same parent. In our example, a Customer with five Payments and ten Orders would return 50 rows. This is unreasonable for the client to decode and present.
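The contrast can be sketched as a single nested document versus joined rows. The names and values below are hypothetical, chosen only to illustrate the shape of a rich resource response:

```javascript
// Hypothetical single-trip "rich resource" response: one Customer document
// with nested Payments, Orders, and Items, decoded by the client as-is.
const customerResource = {
  name: "Alpha Corp",
  balance: 120,
  Payments: [{ id: 1, amount: 50 }, { id: 2, amount: 70 }],
  Orders: [
    { id: 10, amountTotal: 90, Items: [{ product: "Widget", qty: 3 }] },
    { id: 11, amountTotal: 30, Items: [{ product: "Bolt", qty: 1 }] }
  ]
};

// A view joining both child tables for the same parent would instead return
// a Cartesian product: Payments x Orders rows for this one customer.
const joinedRowCount =
  customerResource.Payments.length * customerResource.Orders.length;
```

With five Payments and ten Orders, `joinedRowCount` would be 50, which is what the nested document avoids.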

Leverage Relational Database Query Power

Each resource/sub-resource can be a full relational query that you can send in a single trip to the REST (and then database) server. Contrast this with less powerful retrieval engines, where the client must compute common requirements such as sums and counts. This drives the number of queries up n-fold, which can affect performance.
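The difference can be illustrated as follows. With a less powerful engine, the client fetches every child row and aggregates locally; with server-side aggregation, one trip returns the computed value. All names here are hypothetical:

```javascript
// Without server-side aggregation: the client must retrieve all child rows
// (one or more extra queries) and compute the sum itself.
const orderRows = [
  { orderId: 1, amount: 40 },
  { orderId: 2, amount: 60 }
];
const clientComputedBalance = orderRows.reduce((sum, o) => sum + o.amount, 0);

// With a resource that carries the aggregate: a single trip returns it directly.
const serverResponse = { customer: "Alpha Corp", balance: 100 };
```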


Pagination

Large result sets can affect the client, network, server, and database. Pagination is provided to truncate large results, with provisions to retrieve the remaining results, such as when the end user scrolls.

Pagination can be a complex problem. Consider a Resource of Customer, Orders and Items. If there are many Orders, pagination must occur at this level, with provision for including the Line Items on subsequent pagination requests.
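The basic fetch-and-follow pattern can be sketched as below. The page-size parameter and "next" link are illustrative assumptions, not Live API Creator's actual wire format:

```javascript
// Illustrative pagination: fetch one page at a time, then follow a "next"
// offset until the result set is exhausted (e.g., as the user scrolls).
function fetchPage(allRows, pageSize, offset) {
  const rows = allRows.slice(offset, offset + pageSize);
  const next = offset + pageSize < allRows.length ? offset + pageSize : null;
  return { rows, next };
}

const orders = Array.from({ length: 25 }, (_, i) => ({ orderId: i + 1 }));
const collected = [];
let offset = 0;
while (offset !== null) {
  const page = fetchPage(orders, 10, offset); // pages of 10, 10, then 5 rows
  collected.push(...page.rows);
  offset = page.next;
}
```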

Batched Updates

Network considerations apply to updates as well as retrieval. Consider many rows retrieved into a client, followed by an update. The APIs are designed to enable clients to send only the changes, instead of the entire set of objects. They are further designed to enable clients to send multiple row types (for example, an Order and its Items) in a single message. This results in a single, small update message.
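Such a message might look like the sketch below. The field names (including the `@metadata` envelope) are illustrative, patterned after common REST update conventions rather than copied from the product's exact wire format:

```javascript
// Hypothetical single update message: only the changed rows are sent,
// and multiple row types travel in one PUT/POST body.
const updateMessage = [
  { "@metadata": { resource: "Orders", action: "UPDATE" },
    orderId: 10, shipped: true },
  { "@metadata": { resource: "Items", action: "INSERT" },
    orderId: 10, product: "Bolt", qty: 2 }
];
// Two changed rows of two types, instead of re-sending every retrieved object.
```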

Single Message Update/Refresh

Business logic consists not only of validations but also of derivations. These derivations can often involve rows visible to, but not directly updated by, the client. For example, saving an order might update the customer's balance. The updated balance must be reflected on the screen.

Clients typically solve this problem by re-retrieving the data. This is unfortunate in a number of ways. First, it is an extra client/server trip over a high-latency network. It can also be difficult to program: for example, when the order's key is system-assigned, only the server may know the computed key, and the client may need to re-retrieve the entire rich result set.

Live API Creator solves this by returning the refresh information in the update response. The client can communicate a set of updates with a single message and use the response to show the computations on related data.
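A simplified sketch of the idea follows. The `txsummary` name echoes Live API Creator's update-response convention, but the shapes and values here are illustrative assumptions:

```javascript
// Sketch: the update response carries every row the transaction touched,
// including derived values (the recomputed balance), so no second GET is needed.
function processUpdate(customer, newItem) {
  const updatedCustomer = { ...customer, balance: customer.balance + newItem.amount };
  return {
    statusCode: 200,
    txsummary: [                      // rows changed by the transaction
      { resource: "Items", row: newItem },
      { resource: "Customers", row: updatedCustomer }
    ]
  };
}

const response = processUpdate({ name: "Alpha Corp", balance: 100 }, { amount: 25 });
// The client refreshes the screen from response.txsummary directly.
```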

Server-enforced Integrity Minimizes Client Traffic

An infamous anti-pattern is to place business logic in the client. This does not ensure integrity (particularly when the clients are partners), and causes multiple client/server trips. For example, inserting a new Line Item may require business logic that updates the Order, the Customer, and the Product. If these are issued from the client, the result is four client/server trips when only one should be required.

Minimize DBMS Load

The Logic Engine minimizes the cost and number of SQL operations as described in the following sections.

Minimize Server/DB Latency

You can define the desired region for your API Creator. This minimizes latency for SQL operations issued by the API Server.

Update Logic Pruning eliminates SQLs

The Logic Engine prunes (eliminates) SQL operations where possible. For example:

    • Parent Reference Pruning. SQL operations to access parent rows are averted if the other (local) values in the expression are unchanged. For example, if attribute-X is derived as attribute-Y * parent.attribute-1, the retrieval of the parent row is eliminated when attribute-Y is not altered.
    • Cascade Pruning. If a parent attribute that is referenced by child logic is altered, Live API Creator cascades the change to each child row. If the parent attribute is not altered, the cascade overhead is pruned. In the same example, the value of parent.attribute-1 is cascaded only if it is altered.
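Parent-reference pruning can be sketched as below. This is an illustration of the decision, not the engine's internals; all names are hypothetical:

```javascript
// Sketch of parent-reference pruning for: x = y * parent.a1
// The parent row is fetched only when the local operand (y) actually changed.
let parentReads = 0;
const fetchParent = () => { parentReads += 1; return { a1: 3 }; };

function deriveX(row, oldRow, fetchParentRow) {
  if (row.y === oldRow.y) return row.x;   // pruned: y unchanged, no parent SQL
  const parent = fetchParentRow();        // parent access only when required
  return row.y * parent.a1;
}

const unchanged = deriveX({ x: 6, y: 2 }, { y: 2 }, fetchParent); // no parent read
const changed = deriveX({ x: 6, y: 4 }, { y: 2 }, fetchParent);   // one parent read
```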

Update Adjustment Logic eliminates multi-level aggregate SQLs

The Logic Engine minimizes the cost of SQL operations. For example:
    • Adjustment. For persisted sum/count aggregates, Live API Creator makes a single-row update to adjust the parent based on the old and new values in the child. Aggregate queries can be particularly costly when they chain: for example, the Customer's balance is the sum of its Orders' amounts, and each Order's amount is the sum of its Line Item amounts.
    • Adjustment Pruning. Adjustment only occurs when the summed attribute changes, the foreign key changes, or the qualification condition changes. If none of these occur, parent access/chaining is averted.
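The adjustment idea, with its pruning, can be sketched as a delta applied to the persisted aggregate. This is an illustration under assumed names, not the engine's implementation:

```javascript
// Adjustment sketch: instead of re-running SELECT SUM(amount) over all child
// rows, apply the old-to-new delta of the changed child to the parent aggregate.
function adjustBalance(customer, oldAmount, newAmount) {
  const delta = newAmount - oldAmount;
  if (delta === 0) return customer;   // adjustment pruning: no parent access
  return { ...customer, balance: customer.balance + delta };
}

const customer = { name: "Alpha Corp", balance: 100 };
const adjusted = adjustBalance(customer, 40, 55);  // child amount 40 -> 55
```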

Transaction Caching

Consider inserting an Order with multiple Line Items. Given logic in which the Order total is the sum of its Line Item amounts and the Customer balance is the sum of its Order totals, Live API Creator must update ("adjust") the Order total and Customer balance for each Line Item.

Live API Creator must not retrieve these objects multiple times. Repeated retrieval would incur substantial overhead and make it difficult to ensure consistent results. Instead, Live API Creator maintains a cache for each transaction. All reads and writes go through the cache, and writes are flushed at the end of the transaction. This eliminates many SQL operations and ensures a consistent view of the updated data.
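A minimal per-transaction cache can be sketched as follows. This is an illustration of the read-through/write-back pattern, not the product's actual cache:

```javascript
// Minimal per-transaction row cache: each row is read at most once from the
// database, writes are buffered, and dirty rows are flushed at commit time.
class TxCache {
  constructor(db) { this.db = db; this.rows = new Map(); this.dirty = new Set(); }
  read(key) {
    if (!this.rows.has(key)) this.rows.set(key, this.db.select(key)); // one SQL per row
    return this.rows.get(key);
  }
  write(key, row) { this.rows.set(key, row); this.dirty.add(key); }
  flush() {
    for (const key of this.dirty) this.db.update(key, this.rows.get(key));
    this.dirty.clear();
  }
}

let selects = 0;
const db = {
  select: () => { selects += 1; return { balance: 100 }; },
  update: () => {}
};
const tx = new TxCache(db);
tx.read("Customers/1");
tx.read("Customers/1");                  // served from cache: still one SELECT
tx.write("Customers/1", { balance: 125 });
tx.flush();                              // dirty rows written once, at the end
```

Because later reads see earlier writes, every Line Item adjustment in the transaction observes the same, current Customer row.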


Locking

Locking is a key performance factor. The following sections address locking for delivering results to the client and for processing update transactions.

GET: Optimistic Locking

A well-known pattern is optimistic locking. Acquiring locks while viewing data can reduce concurrency, so locks are not acquired while processing GET requests. Instead, Live API Creator verifies at update time that the data has not been altered since initial retrieval, as described below.

For more information about optimistic locking, see optimistic concurrency control on Wikipedia.

PUT, POST and DELETE: leverage DBMS Locking and Transactions

Update requests are locked using DBMS Locking services. Consider the following cases:
      • Client Updates. In accordance with optimistic locking, Live API Creator ensures that client-submitted rows have not been altered since they were retrieved. This is verified using a time stamp or, if one is not defined, a hash code of all the retrieved data; a time stamp column is therefore not required. This check is performed as the first part of the transaction, so optimistic locking conflicts are detected before SQL overhead is incurred.
      • Rule Chaining. All rows processed in a transaction as a consequence of logic execution, such as adjusting parent sums or counts, are read locked. Write locks are acquired at the end of the transaction, during the "flush" phase. Many other transactions' read locks could have been acquired and released between the time of the initial read lock and the flush.
      • Referential Integrity. Such data is read in accordance with DBMS policy.
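The hash-based optimistic check can be sketched as below. The hash function is a deliberately simple stand-in; the product's actual checksum algorithm is not specified here:

```javascript
// Optimistic-check sketch: when no time stamp column exists, a checksum of
// the row as retrieved detects concurrent changes at update time.
function checksum(row) {
  const s = JSON.stringify(row);
  let h = 0;
  for (let i = 0; i < s.length; i++) h = (h * 31 + s.charCodeAt(i)) | 0;
  return h;
}
function verifyUnchanged(currentRow, checksumAtRetrieval) {
  return checksum(currentRow) === checksumAtRetrieval;
}

const retrieved = { id: 1, balance: 100 };
const atRetrieval = checksum(retrieved);             // sent to the client with the row
const ok = verifyUnchanged({ id: 1, balance: 100 }, atRetrieval);       // unchanged
const conflict = verifyUnchanged({ id: 1, balance: 90 }, atRetrieval);  // change detected
```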

Server Optimizations

The logic server promotes good performance.

Load Balanced Dynamic Clustering

Cloud-based Live API Creator implementations use standard load-balancer services to scale to as many server instances as required to meet the load, and to provide failover. Each server is stateless, and incoming requests are load balanced over the set of running servers.

Meta Data Caching (Logic and Security)

The logic and security information you specify in API Creator is read into a cache, so disk reads are not required to process each request. This cache persists across transactions until you alter your logic.
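The load-once, invalidate-on-change pattern can be sketched as follows. The rule text and function names are hypothetical:

```javascript
// Sketch of metadata caching: definitions are loaded from disk once and
// reused across transactions until a logic change invalidates the cache.
let diskReads = 0;
const store = { rules: ["balance = sum(orders.amount)"] };
let cache = null;

function getMetadata() {
  if (cache === null) { diskReads += 1; cache = store.rules; } // load on first use
  return cache;
}
function onLogicChanged(newRules) {   // admin edits logic in API Creator
  store.rules = newRules;
  cache = null;                       // invalidate; next request reloads
}

getMetadata();
getMetadata();                                            // served from cache
onLogicChanged(["balance = sum(orders.amount_total)"]);
getMetadata();                                            // reloaded once
```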

Direct Execution (No Code Generation)

Reactive logic is many-fold more expressive than procedural code; compiling it into JavaScript would therefore represent a significant performance issue. Reactive logic is instead executed directly, not compiled into JavaScript.


Performance Monitoring

Transparent information on system performance is an important requirement.


You can view the logs of SQL and rule execution.

For more information, see View Logging Information.


You can obtain aggregate information on the Metrics page.

For more information, see Metrics.
