The logging is configured in the <cxs-install-dir>/etc/org.ops4j.pax.logging.cfg file, so that by default the logging is routed into the <cxs-install-dir>/data/log/karaf.log file. More details on how to tune the logging settings, as well as on the log-related console commands, are given here: https://karaf.apache.org/manual/latest/#_log. One of the most useful console commands (especially in development) is log:tail, which continuously displays the log entries in the console.
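For example, from the Karaf console (these commands are documented in the Karaf manual linked above; the prompt and the logger name are just illustrations):

```
karaf@root()> log:tail                        # continuously display new log entries (Ctrl+C to stop)
karaf@root()> log:display                     # dump the recent in-memory log buffer
karaf@root()> log:set DEBUG org.apache.unomi  # raise the level of a specific logger
```

Raising a logger to DEBUG this way only affects the running instance; to make the change permanent, edit the org.ops4j.pax.logging.cfg file mentioned above.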
7.2 How to back up Apache Unomi?
Backing up your system is useful in many cases, as it minimizes the risk of losing all your data, and a backup is a mandatory step before any upgrade or migration. By default, Apache Unomi is configured to write its runtime data directly into the ElasticSearch server, which itself stores its data in the <elasticsearch-install-dir>/data directory. There are several backup types, which serve different purposes:
- ElasticSearch snapshots: ElasticSearch offers a built-in backup mechanism known as snapshots. You can find more information about it here: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html
- Full Apache Unomi and ElasticSearch file system backup: done by archiving the whole <cxs-install-dir> and <elasticsearch-install-dir> folders, with the Apache Unomi and ElasticSearch processes stopped.
- Configuration backup: done by archiving the <cxs-install-dir>/etc and <elasticsearch-install-dir>/conf folders. If you have modified any bin/setenv files, back those up as well. This type of backup is usually done before/after planned configuration updates.
- Runtime data file system backup: performed by archiving the <elasticsearch-install-dir>/data folder. Useful for incremental (nightly) backups, as it allows rolling back to a previous stable/consistent state in case of data corruption or loss. This procedure is however not recommended, because transient data will not be consistent; ElasticSearch snapshots should be preferred instead.
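As an illustration of the snapshot approach, the following sketch registers a shared-filesystem snapshot repository and then triggers a snapshot. The repository name (my_backup), the mount path, and the host/port are assumptions; check the ElasticSearch snapshot documentation linked above for the exact options of your version. This requires a running ElasticSearch server:

```shell
# Register a shared-filesystem snapshot repository. The location must be
# listed under path.repo in elasticsearch.yml -- adjust host, name and path.
curl -X PUT "http://localhost:9200/_snapshot/my_backup" \
     -H 'Content-Type: application/json' \
     -d '{"type": "fs", "settings": {"location": "/mnt/backups/unomi"}}'

# Trigger a snapshot and wait for it to complete before returning.
curl -X PUT "http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"
```

Snapshots are incremental, so repeating the second call with a new snapshot name only stores the segments that changed since the previous snapshot.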
The recommended way of backing up Apache Unomi is therefore the following:
- Set up and execute ElasticSearch snapshots. This is the only way to properly back up an ElasticSearch cluster with full data integrity guaranteed. It is recommended to automate the snapshot process, for example by using cron jobs that issue curl requests to trigger ElasticSearch snapshot creation.
- Make a full configuration backup for both Apache Unomi and ElasticSearch
- Back up any customized changes you made (such as installed plugins) to Apache Unomi and ElasticSearch.
- (Optional) Full file system backup of Apache Unomi and ElasticSearch. This step is optional because, if you have properly performed steps 1, 2 and 3, you should be able to reinstall everything you need from the backups. However, to be on the safe side, a full file system backup is a good idea and doesn't require much work.
- Make sure you test your backup procedure, to verify that everything is backed up properly and can effectively be restored. Remember that not testing your backup procedure is the same as having no backup procedure!
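The configuration backup in step 2 can be sketched as a small archiving script. The directory layout below is a stand-in created purely for the demonstration; in a real setup you would point the variables at your actual <cxs-install-dir> and <elasticsearch-install-dir> and typically run the script from cron:

```shell
#!/bin/sh
# Demo layout standing in for the real installation directories.
WORK="$(mktemp -d)"
UNOMI_HOME="$WORK/unomi"
ES_HOME="$WORK/elasticsearch"
mkdir -p "$UNOMI_HOME/etc" "$ES_HOME/conf"
echo "demo unomi config" > "$UNOMI_HOME/etc/custom.cfg"
echo "demo es config" > "$ES_HOME/conf/elasticsearch.yml"

# Archive both configuration folders into one timestamped tarball.
BACKUP_DIR="$WORK/backups"
mkdir -p "$BACKUP_DIR"
ARCHIVE="$BACKUP_DIR/config-backup-$(date +%Y%m%d-%H%M%S).tar.gz"
tar -czf "$ARCHIVE" -C "$UNOMI_HOME" etc -C "$ES_HOME" conf

# List the archive contents to confirm what was captured.
tar -tzf "$ARCHIVE"
```

Keeping the timestamp in the file name makes it easy to retain several generations of configuration backups side by side.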
Note that this backup procedure will also work to copy an environment to a new cluster, even one with a smaller cluster size (for example going from 3 ES nodes to 1 for staging/development purposes). This is one of the reasons snapshots are used: they make this type of migration easier. Another way of doing this would be to temporarily set the number of replicas to match the number of nodes, but this method only works for small data sets and is not recommended for large ones.
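For completeness, the replica-based alternative mentioned above boils down to an index settings update such as the one below. The index name (context) and the host are examples only, and this approach should be reserved for small data sets:

```shell
# Temporarily raise the number of replicas so that every node holds a full
# copy of the index (e.g. 2 replicas on a 3-node cluster). Remember to
# revert the setting once the copy is done.
curl -X PUT "http://localhost:9200/context/_settings" \
     -H 'Content-Type: application/json' \
     -d '{"index": {"number_of_replicas": 2}}'
```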
7.3 Upgrading Apache Unomi
To check whether there are any specific instructions related to the upgrade, please check our extranet documentation for upgrading between versions of Apache Unomi. Below are the usual steps:
7.3.1 Between minor versions (X.X.Y -> X.X.Z)
In order to upgrade Apache Unomi to a new version, or to "migrate" the data to a new installation, it is currently sufficient to perform the following steps:
- Stop the old Apache Unomi
- Stop the ElasticSearch server
- Install a new version (or a new copy) of Apache Unomi
- Install the ElasticSearch version corresponding to the new version of Apache Unomi (if necessary)
- Copy the following folder from the old installation into a new one:
- Apply any custom changes in the configuration (files in the <cxs-install-dir>/etc folder) to the new instance of Apache Unomi
- Start the new instance of the ElasticSearch server.
- Start the new instance of Apache Unomi to complete the migration.
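The configuration step of the procedure above can be sketched as follows. The demo directories stand in for the old and new <cxs-install-dir>, and the file name is just an example of a customized configuration file; in practice, review each file before copying, since defaults may have changed between versions:

```shell
#!/bin/sh
# Demo stand-ins for the old and new Apache Unomi installation directories.
WORK="$(mktemp -d)"
OLD="$WORK/unomi-old"
NEW="$WORK/unomi-new"
mkdir -p "$OLD/etc" "$NEW/etc"
echo "cluster.name=myCluster" > "$OLD/etc/custom-settings.cfg"

# Copy the customized configuration files from the old etc folder into the
# new one (a single example file here; adapt the list to your own changes).
cp "$OLD/etc/custom-settings.cfg" "$NEW/etc/"

# Show the migrated file in the new instance.
cat "$NEW/etc/custom-settings.cfg"
```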
7.3.2 Between major versions (X.Y -> X.Z)
Please check our extranet documentation for upgrading between major versions of Apache Unomi.
7.4 Background jobs
This section contains a list of the background jobs that may be executed either by the Marketing Factory DX modules (7.5.1) or by Apache Unomi (7.5.2).
7.5.1 Marketing Factory Jobs
|Job|Frequency|Description|
|---|---|---|
|ContextServerClusterPollingJob|Every minute|Retrieves cluster information from Apache Unomi (nodes, hosts, load, …) in order to be able to distribute load to all Apache Unomi nodes|
|WemActionPurgeJob|Every hour at minute 10|Cancels (unschedules) and removes orphaned Marketing Factory action jobs in case the corresponding content node is no longer present|
| | |Asks Apache Unomi to see if max hits are reached for optimization tests|
7.5.2 Apache Unomi Jobs
|Job|Frequency|Description|
|---|---|---|
|Refresh all property types|Every 5 seconds|Reloads all the property types from ElasticSearch, in case there were new deployments done from Marketing Factory UIs or modules|
|Inactive profile purge|Every X days (180 by default)|Removes profiles from Apache Unomi that have been inactive for a specified amount of time (180 days by default)|
|Update profiles for past event counting|Every 24h|Recalculates past event counts for all the profiles that match the setup conditions|
|Refresh segment and scoring definitions|Every second|Reloads the segment and scoring definitions from ElasticSearch in case another Apache Unomi node has performed modifications|
|Refresh index names (technical)|Every 24h|Updates the list of ElasticSearch indices cached in memory to make sure there are no inconsistencies with the actual back-end indices|