
Integration Best Practices

This article explains the common best practices that should be considered when integrating with Vault.

If any custom configuration is required in Vault as part of the integration, it should be documented. Custom configuration may include custom tabs, documents, objects, lifecycles, connections, integration points, Vault Java SDK code, data, and more. Exporting VPKs is another method for moving custom configuration between Vaults.

Integration Configuration Settings


To keep integrations easily maintainable, integration-specific Vault settings should be made configurable within the integrated solution wherever possible. This avoids hardcoded values and the recompilation and revalidation that changing them would require.
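As a minimal sketch of this idea, settings such as the Vault DNS and API version can be read from the environment instead of being hardcoded. The setting names and values below are made-up examples, not Vault-defined names:

```python
import os

# Hypothetical setting names and defaults -- examples only, not Vault-defined.
DEFAULTS = {
    "VAULT_DNS": "https://myvault.veevavault.com",
    "VAULT_API_VERSION": "v24.1",
}

def load_setting(name: str) -> str:
    """Read an integration setting from the environment, falling back to a default."""
    return os.environ.get(name, DEFAULTS[name])

def api_base_url() -> str:
    # Build the API base URL from settings rather than hardcoding it,
    # so moving between sandbox and production needs no recompilation.
    return f"{load_setting('VAULT_DNS')}/api/{load_setting('VAULT_API_VERSION')}"
```

With this approach, pointing the integration at a sandbox Vault is a configuration change, not a code change.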

Utilize bulk processing whenever possible. We recommend the following bulk processing tools and approaches.

Vault Loader was developed with these best practices in mind and is tested by Veeva. Using the Vault Loader API to transfer data into and out of Vault can greatly reduce development time, as it handles processing, batching, and error reporting.

Integrations should also use bulk APIs for data loading and data verification. Bulk APIs process many records or documents in a single API call; they are designed for higher data throughput and minimize the number of calls required.
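A sketch of preparing a bulk payload, assuming a bulk object endpoint that accepts CSV input and a per-call batch limit of 500 records (verify both the endpoint shape and the limit against the current Vault API documentation):

```python
import csv
import io

def records_to_csv(records: list[dict]) -> str:
    """Serialize a batch of records to CSV for a bulk object API call."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

def chunk(records: list, size: int = 500):
    """Split records into batches no larger than the assumed bulk API limit."""
    for i in range(0, len(records), size):
        yield records[i:i + size]
```

Sending one CSV of 500 records replaces 500 individual API calls, which is the core of the throughput gain described above.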

VQL (Vault Query Language) uses SQL-like statements to retrieve multiple records in a single Query API call. This is a far better alternative to making repeated calls to individual object APIs and should be used wherever possible.
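For illustration, one Query API call can be built like this. The endpoint shape (POST to `/query` with a `q` parameter) follows Vault's Query API as I understand it; confirm the exact path and parameters against the current API reference:

```python
def build_query_request(base_url: str, session_id: str, vql: str) -> dict:
    """Describe a single Query API call that retrieves many records at once."""
    return {
        "method": "POST",
        "url": f"{base_url}/query",  # assumed endpoint shape -- verify in docs
        "headers": {
            "Authorization": session_id,
            "Accept": "application/json",
        },
        "data": {"q": vql},
    }

# One call retrieves every matching record, instead of one call per record:
req = build_query_request(
    "https://myvault.veevavault.com/api/v24.1",  # example URL
    "SESSION_ID",
    "SELECT id, name__v FROM documents WHERE status__v = 'Approved'",
)
```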

When either an object API call or a VQL query returns multiple records, Vault's paging should be used. This avoids manually re-executing the query for each page and results in much faster data retrieval.
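The paging loop can be sketched as follows. The `responseDetails.next_page` field follows the shape of Vault query responses as I understand it; confirm the field names against the current documentation. The `fetch_page` callable stands in for whatever HTTP client the integration uses:

```python
def fetch_all(fetch_page, first_url: str) -> list:
    """Follow Vault-style paging until no next_page link remains.

    fetch_page: callable taking a URL and returning the parsed JSON response.
    """
    records, url = [], first_url
    while url:
        resp = fetch_page(url)
        records.extend(resp.get("data", []))
        # Assumed response shape: responseDetails.next_page holds the next URL.
        url = resp.get("responseDetails", {}).get("next_page")
    return records
```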

File Staging and File Staging APIs


For integrations that require loading or retrieving large numbers of documents, each Vault comes with its own file staging area to speed up this process and limit the number of API calls made. The recommended ways to access your Vault's file staging area are the File Staging API and the file staging command line interface.

Where reference data is shared between systems, caching should be used. This avoids repeatedly retrieving the same reference data.
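A minimal time-based cache for reference data might look like this. The loader callable and the five-minute TTL are illustrative choices, not Vault requirements:

```python
import time

class ReferenceCache:
    """Cache reference data for a fixed TTL so it is not re-fetched on every call."""

    def __init__(self, loader, ttl_seconds: float = 300):
        self._loader = loader          # fetches fresh data, e.g. via a VQL query
        self._ttl = ttl_seconds
        self._value = None
        self._expires_at = 0.0

    def get(self):
        now = time.monotonic()
        if self._value is None or now >= self._expires_at:
            self._value = self._loader()       # refresh only when stale
            self._expires_at = now + self._ttl
        return self._value
```

Between refreshes, every consumer reads the cached copy instead of issuing another API call.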

When passing data via the Vault API, it's important to consider API rate limits: if the limits are breached, the integration will be throttled. To avoid breaching the limits, bulk versions of APIs should be used wherever possible.
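An integration can also watch the remaining allowance before issuing more calls. The header name below is based on Vault's documented rate-limit response headers (`X-VaultAPI-BurstLimitRemaining`); confirm it, and the threshold that makes sense for your workload, against the current API reference:

```python
def should_slow_down(headers: dict, threshold: int = 100) -> bool:
    """Return True when the remaining burst allowance drops below a threshold.

    Header name is an assumption based on Vault's rate-limit headers;
    the threshold of 100 is an arbitrary example.
    """
    remaining = headers.get("X-VaultAPI-BurstLimitRemaining")
    return remaining is not None and int(remaining) < threshold
```

When this returns True, the integration can pause or reschedule work rather than be throttled mid-run.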

To enable cross-referencing of data between Vault and the integrated system, we recommend storing each system's record identifiers in the other system wherever possible. For example, if a Vault document is copied into the integrated application, the Vault document ID should be stored within that application. Conversely, documents and objects in Vault can store the external IDs as metadata properties.

We recommend the following approaches to manage the security of your integration.

Where possible, you should use a named account for a Vault session within an integration rather than a system account. Using named accounts ensures that the user in question will have the appropriate permissions on affected objects.

Sending Session IDs with Post Message


Within Vault, it is possible to call services in third-party systems by invoking a service URL from web actions, web tabs, and web sections. When this method is used, your integration should pass the SESSION_ID of the currently logged-in user via a post message to keep the session secure. Learn more about sending session IDs with a post message.

Once a session has been established, we recommend reusing it for subsequent API calls rather than establishing new sessions, periodically calling the Session Keep Alive API to keep it active. However, sessions time out after a period of inactivity or after 48 hours; when this occurs, the integration needs a mechanism to reauthenticate the user before making further API calls.
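A sketch of session reuse with keep-alive tracking. The `/keep-alive` path reflects my understanding of the Session Keep Alive API, and the ten-minute interval is an illustrative choice well inside the inactivity timeout; verify both against the current documentation:

```python
import time

KEEP_ALIVE_INTERVAL = 10 * 60  # seconds; example value, not a Vault-mandated figure

def build_keep_alive_request(base_url: str, session_id: str) -> dict:
    """Describe a Session Keep Alive call (assumed path -- verify in docs)."""
    return {"method": "POST", "url": f"{base_url}/keep-alive",
            "headers": {"Authorization": session_id}}

class SessionHolder:
    """Hold one shared session and track when it last saw activity."""

    def __init__(self, session_id: str):
        self.session_id = session_id
        self.last_used = time.monotonic()

    def touch(self):
        self.last_used = time.monotonic()  # call after every successful API call

    def needs_keep_alive(self) -> bool:
        return time.monotonic() - self.last_used >= KEEP_ALIVE_INTERVAL
```

Multi-threaded integrations would share one `SessionHolder` rather than authenticating per thread, in line with the session-reuse recommendation below.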

If your integration uses multiple concurrent Vault sessions, such as multi-threading or parallel instances, you must consider how the integration manages this. When possible, you should opt to reuse a single session.

Checking the Authenticated Vault ID


When authenticating via the Vault API using one of the Authentication API endpoints, you should verify that the vaultId returned in the response is for the intended Vault. If the specified Vault is inaccessible for some reason, such as scheduled maintenance, and the user has access to multiple Vaults, they may be authenticated to another of their Vaults instead. Without this check, subsequent API calls could inadvertently make changes to the wrong Vault.
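The check itself is small. The `vaultId` and `sessionId` field names follow the shape of Vault's authentication response as I understand it; confirm them against the Authentication API reference:

```python
def verify_vault_id(auth_response: dict, expected_vault_id: int) -> str:
    """Return the session ID only if authentication landed on the intended Vault.

    Field names (vaultId, sessionId) are assumptions based on the
    Authentication API response shape -- verify against the docs.
    """
    actual = auth_response.get("vaultId")
    if actual != expected_vault_id:
        raise RuntimeError(
            f"Authenticated to Vault {actual}, expected {expected_vault_id}; "
            "aborting to avoid writing to the wrong Vault."
        )
    return auth_response["sessionId"]
```

Failing fast here is cheap; silently writing to the wrong Vault is not.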

You should define an error handling strategy for each part of the integration. It's important that any errors are suitably trapped, reported, and handled consistently. For instance, should an error occur in the Vault UI, a suitable error message should be displayed to the user, along with a way to troubleshoot the precise error, such as a full stack trace.

When working with distributed systems, temporary errors such as brief network outages or the unavailability of a downstream system sometimes occur, so integrations need a way to resume or retry data transfers. Techniques such as retry logic with exponential backoff, and idempotency keys to ensure data is only transferred once, are key aspects of a successful error handling strategy.
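Both techniques can be sketched together. The idempotency-key header and its handling by the receiving system are hypothetical here; the backoff schedule (1s, 2s, 4s, ...) is one common choice:

```python
import time
import uuid

def call_with_retry(call, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry a transient-failure-prone call with exponential backoff.

    The same idempotency key is reused across attempts so the receiving
    system (hypothetically) discards duplicates if a retry re-sends data
    it already accepted.
    """
    idempotency_key = str(uuid.uuid4())
    for attempt in range(max_attempts):
        try:
            return call(idempotency_key)
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise                      # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, 8s, ...
```

Exponential backoff spaces retries out so a briefly unavailable system is not hammered while it recovers.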

It’s also necessary to consider what happens if an error occurs midway through a process, leaving data in an inconsistent state. In these cases, you must either recover the data or resume the previously failed call.

Error logging should be possible within the integrated system so that any errors can be traced. Vault API calls are automatically logged within Vault, making it possible to determine which integration they originated from.

In any integration that uses the Vault API, we recommend setting the Client ID. Should any errors occur, Veeva can use the client ID to better determine what may have gone wrong.
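The Client ID is passed as a request header. The `X-VaultAPI-ClientID` header name is taken from Vault's API documentation as I understand it (verify the exact name); the client ID value shown is a made-up example:

```python
def with_client_id(headers: dict, client_id: str = "acme-plm-vault-sync") -> dict:
    """Attach a Client ID header so Veeva can trace calls back to this integration.

    Header name assumed from Vault docs; the default client_id is an example.
    """
    return {**headers, "X-VaultAPI-ClientID": client_id}
```

Applying this to every outgoing request means every log entry in Vault identifies the integration that produced it.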

We recommend the following approaches to test your integration.

We recommend creating a sandbox Vault from the production (or validation) Vault in order to test your integration. This would typically be done in conjunction with an implementation consultant. It will also be necessary to link the Vault to the third-party system sandbox and populate any integration specific configuration settings.

A full set of tests should be carried out to verify that the integration logic, data handling, and error conditions are handled successfully. If the integration tests fail, any issues should be corrected in the development environment before being reapplied to the test environment.

We recommend that you configure the DNS time to live (TTL) value to be no more than 60 seconds. This ensures that when a Vault's resource IP address changes, your integration can receive and use the new IP address by re-querying the DNS.

Default TTL varies depending on your JVM version and whether a security manager is in use. If the JVM default TTL is 60 seconds or less and a security manager is not in use, there are no changes necessary.

For some Java configurations, the JVM default TTL only refreshes DNS entries upon the JVM restarting. This means you must manually restart the JVM to refresh cached IP information when the IP address for a resource changes while your application is running. Changing the JVM’s TTL to periodically refresh cached IP information avoids JVM restarts for this reason.

To modify the JVM TTL, set the networkaddress.cache.ttl value to 60 using one of the following methods, depending on your needs:

  • To apply the TTL globally for all applications using the JVM, set the following in the $JAVA_HOME/jre/lib/security/java.security file: networkaddress.cache.ttl=60
  • To apply the JVM TTL for a single application, set the following in the initialization code: java.security.Security.setProperty("networkaddress.cache.ttl", "60");

If you have questions, please reach out to our Developer Support team on Veeva Connect.