Some logging is configured in the centralized configuration file <jcustomer-install-dir>/etc/unomi.custom.system.properties through the properties starting with org.apache.unomi.logs.*. If you need more fine-grained configuration changes, you can make them in the <jcustomer-install-dir>/etc/org.ops4j.pax.logging.cfg file. By default, logging is routed to the <jcustomer-install-dir>/data/log/karaf.log file. More details on how to tune logging settings, as well as on the log-related console commands, are available here: https://karaf.apache.org/manual/latest/#_log. One of the most useful console commands (especially in development) is:
log:tail
which continuously displays the log entries in the console.
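As an illustration, a more fine-grained logger can be declared directly in org.ops4j.pax.logging.cfg. The snippet below is a minimal sketch assuming a recent Karaf version that uses the Log4j2 property syntax; the logger key `unomi` is only an illustrative name:

```
# <jcustomer-install-dir>/etc/org.ops4j.pax.logging.cfg
# Raise the log level for Apache Unomi classes only (illustrative logger key "unomi")
log4j2.logger.unomi.name = org.apache.unomi
log4j2.logger.unomi.level = DEBUG
```

A similar change can also be made at runtime from the Karaf console with `log:set DEBUG org.apache.unomi`, and the current levels can be inspected with `log:get`.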
Backing up your system minimizes the risk of losing your data, and a backup is a mandatory step in any upgrade or migration process. By default, jCustomer writes its runtime data directly to the Elasticsearch server, which itself stores its data in the <elasticsearch-install-dir>/data directory. There are several backup types, which serve different purposes:

- Full backup of the <jcustomer-install-dir>/ and <elasticsearch-install-dir> folders, performed with the jCustomer and Elasticsearch processes stopped.
- Configuration backup of the <jcustomer-install-dir>/etc and <elasticsearch-install-dir>/conf folders, plus any bin/setenv files you have modified. This type of backup is usually done before/after planned configuration updates.
- Data backup of the <elasticsearch-install-dir>/data folder. Useful for incremental (nightly) backups, it allows rolling back to a previous stable/consistent state in case of data corruption or loss. This procedure is however not recommended because transient data will not be consistent; Elasticsearch snapshots should be preferred instead.

The recommended way of backing up jCustomer is therefore to rely on Elasticsearch snapshots.
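As an illustration, a snapshot-based backup typically consists of registering a snapshot repository once and then taking regular snapshots through the Elasticsearch snapshot API. The repository name, location, and snapshot name below are placeholders; for a shared-filesystem repository, the location must also be declared under path.repo in elasticsearch.yml on every node:

```
# Register a shared-filesystem snapshot repository (placeholder names/paths):
curl -X PUT "localhost:9200/_snapshot/jcustomer_backup" \
  -H 'Content-Type: application/json' \
  -d '{ "type": "fs", "settings": { "location": "/backups/jcustomer" } }'

# Take a snapshot of all indices and wait for it to finish:
curl -X PUT "localhost:9200/_snapshot/jcustomer_backup/snapshot_1?wait_for_completion=true"
```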
Note that this backup procedure will also work to copy an environment to a new cluster, even one with fewer nodes (for example going from 3 Elasticsearch nodes to 1 for staging/development purposes). This is one of the reasons snapshots are used: they make this type of migration easier. Another way of doing this would be to temporarily set the number of replicas equal to the number of nodes, but this method only works for small data sets and is not recommended for large ones.
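For reference, restoring such a snapshot on the target cluster goes through the same API; again, the repository and snapshot names are placeholders, the repository must first be registered on the target cluster, and existing indices with the same names must be closed or deleted before the restore:

```
# Restore the snapshot on the new (possibly smaller) cluster:
curl -X POST "localhost:9200/_snapshot/jcustomer_backup/snapshot_1/_restore?wait_for_completion=true"
```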
This section lists the background jobs that may be executed either by the jExperience Jahia modules or by jCustomer.
Name | Frequency | Details |
---|---|---|
ContextServerClusterPollingJob | Every minute | Retrieves cluster information from jCustomer (nodes, hosts, load) in order to distribute load across all jCustomer nodes |
WemActionPurgeJob | Every hour, at 10 minutes past the hour | Cancels (unschedules) and removes orphaned jExperience action jobs when the corresponding content node is no longer present |
OptimizationTestHitsJob | Every hour | Asks jCustomer to check whether the maximum number of hits has been reached for optimization tests |
Name | Frequency | Details |
---|---|---|
Refresh all property types | Every 5 seconds | Reloads all property types from Elasticsearch, in case new deployments were made through the jExperience UIs or modules |
Inactive profile purge | Every X days (180 by default) | Removes profiles from jCustomer that have been inactive for a specified amount of time (by default 180 days). |
Update profiles for past event counting | Every 24h | Recalculates past event counts for all the profiles that match the setup conditions |
Refresh segment and scoring definitions | Every second | Reloads the segment and scoring definitions from Elasticsearch in case another jCustomer node has performed modifications |
Refresh index names (technical) | Every 24h | Updates the list of Elasticsearch indices cached in memory to make sure there are no inconsistencies with the actual back-end indices. |