Persistent cache
This is the official documentation of the @forestadmin/agent Node.js agent.
The cache that is used under the hood is a SQL database. By default, an in-memory SQLite database will be used.
This is fine for many use cases but comes with a strong limitation: the data is not persisted between restarts.
This causes two issues:

- Longer start time, as the agent will need to fetch all the data from the target API at each startup.
- High memory usage, as the agent will need to keep all the data in memory.
Depending on which API you are targeting, an in-memory cache may be perfectly fine. For example, a payment processing API is unlikely to contain more than a few tens of thousands of records, which results in minimal memory overhead.
Larger APIs, such as a CRM or a database, may contain millions of records; in that case, you may want to use a persistent cache.
To ease development, Forest Admin will automatically detect when the schema of the tables in the caching database does not match the schema of the target API.
When this happens, it will drop all the tables and indexes, recreate them, and re-import all the data from the target API.
Two options are available to use a persistent cache:
- `cacheInto`: a connection string, or a configuration object for the `@forestadmin/datasource-sql` connector.
- `cacheNamespace`: a string that will be used as a prefix for all the tables created by the tool. This is useful if you want to share the same database with other tools, or use the same database for multiple replicas.
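For illustration, here is a minimal sketch combining both options. It assumes the `createReplicaDataSource` factory from `@forestadmin/datasource-replica` used throughout this guide; the connection string and the pull handler are placeholders to replace with your own.

```javascript
const { createReplicaDataSource } = require('@forestadmin/datasource-replica');

const dataSource = createReplicaDataSource({
  // Persist the cache into a shared Postgres database (placeholder URI).
  cacheInto: 'postgres://user:password@localhost:5432/forest-cache',

  // Prefix every table created by the tool, so that several replicas
  // (or other tools) can share the same database without collisions.
  cacheNamespace: 'payment-api',

  // Placeholder: your existing handler fetching records from the target API.
  pullDumpHandler: async () => ({ more: false, entries: [] }),
});
```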
As of today, no locking mechanism prevents concurrent writes when several agent instances share the same cache.
This means that if two agents run at the same time with the same configuration, they will not coordinate: each will update the cache on its own, duplicating the work.
The simplest way to use a persistent cache is an SQLite file: it is trivial to set up, at the cost of being limited to a single process.
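A minimal sketch of that setup, under the same assumptions as above; the file path is arbitrary, and the database file is created on first run:

```javascript
const { createReplicaDataSource } = require('@forestadmin/datasource-replica');

const dataSource = createReplicaDataSource({
  // Store the cache in an SQLite file so it survives restarts.
  cacheInto: 'sqlite:/var/lib/forestadmin/cache.db',

  // Placeholder: your existing replication handlers go here.
  pullDumpHandler: async () => ({ more: false, entries: [] }),
});
```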
You may also want to use a SaaS database to store the replica.
In this example, we use a Postgres database hosted on Neon.tech, but the same can be achieved with any cloud or database vendor.
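A sketch of that setup, this time using the configuration-object form forwarded to `@forestadmin/datasource-sql`; the `uri` value is a placeholder to copy from your provider's dashboard, and `sslMode: 'required'` is an assumption reflecting that managed providers usually enforce TLS:

```javascript
const { createReplicaDataSource } = require('@forestadmin/datasource-replica');

const dataSource = createReplicaDataSource({
  // Configuration object passed through to the @forestadmin/datasource-sql connector.
  cacheInto: {
    uri: 'postgres://user:password@ep-example-123456.us-east-2.aws.neon.tech/neondb',
    sslMode: 'required', // managed Postgres providers such as Neon enforce TLS
  },

  // Placeholder: your existing replication handlers go here.
  pullDumpHandler: async () => ({ more: false, entries: [] }),
});
```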