FAQ

November 11, 2022

1 How to back up Digital Experience Manager?

Backing up your system is useful in many cases, as it minimizes the risk of losing your data, whether on the database or the server side.

1.1 Database

A database dump contains a record of the table structure and/or the data from a database, usually in the form of a list of SQL statements. A database dump is useful for backing up a database so that its contents can be restored in the event of data loss (or, in our case, for reusing an environment). It can be performed at any time (even while the Digital Experience Manager server is running), but it is usually preferable to shut down Digital Experience Manager before dumping your database.

There are many software products (proprietary or Open Source) that can perform a database dump for all types of databases. Here, we will use the example of MySQL:

mysqldump -urootUser -p digitalExperienceManager7 > digital_experience_manager_7_v1.sql

1.2 Digital Experience Manager runtime data

You should back up the whole digital-factory-data folder. It includes modules, the JCR repository and other runtime data. If, during the configuration wizard, you chose filesystem-based binary storage (the default option) and changed the location of the datastore folder, you should also back up that folder.
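For example, on UNIX you could archive the runtime data with tar (the paths below are examples and depend on your installation):

/bin/tar cvfz /home/backup/digital-factory-data.tar.gz /home/jahia/digital-factory-data/
/bin/tar cvfz /home/backup/datastore.tar.gz /mnt/dx-datastore/ # only if you use a custom datastore location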

1.3 Web applications/portlets

If you have no additional Web applications (or portlets) used inside your Digital Experience Manager server, you can skip this part. All the additional Web applications you may have deployed are usually located on Apache Tomcat under:

<tomcat-home>/webapps

You can back up all web applications or only the ones you use. If you have installed third-party portlets, be sure to check their respective documentation. Depending on whether or not the webapp stores information, the way you back it up will differ. If the webapp stores nothing, you can back up either the .war file you used to deploy the portlet or the subfolder of “webapps/” in which the webapp has been deployed. If the webapp stores some data, you will also have to back up that data.
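For example, to archive the whole webapps folder in one go (the backup path is an example):

tar cvfz /home/backup/webapps.tar.gz -C <tomcat-home> webapps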

1.4 Configuration files

All major configuration files are located in the digital-factory-config folder and under the <digital-factory-web-app-dir>/WEB-INF/etc/ folder. If you are on UNIX, you can create a script file for regular backups of your Digital Experience Manager data and run it through a cron job. A typical example of this script could be:

#!/bin/sh
# Keep one archive per weekday (1-7), overwriting last week's backup
DAY=`date +%u`
/bin/tar cvfz /home/backup/tomcat_$DAY.tar.gz /home/jahia/tomcat/ # list of folders to copy
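Assuming the script above is saved, for example, as /home/jahia/bin/dx-backup.sh and made executable, the following crontab entry would run it every night at 2:00 AM:

0 2 * * * /home/jahia/bin/dx-backup.sh >> /var/log/dx-backup.log 2>&1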

2 How to restore an environment from a backup?

2.1 Restore your database dump

Please refer to your database documentation for specific instructions on how to perform this.
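For example, with MySQL, the dump created in section 1.1 can be restored as follows (assuming the target database digitalExperienceManager7 already exists):

mysql -urootUser -p digitalExperienceManager7 < digital_experience_manager_7_v1.sql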

2.2 Reinstall Digital Experience Manager

During the configuration wizard, instead of connecting to a new empty database, connect to your newly restored database. Uncheck the option to create the tables inside this database. Take care to specify the same value as in your former installation regarding the storage of the binaries (inside the database or on the filesystem). If you do not remember, open <digital-experience-manager-web-app-dir>/WEB-INF/etc/repository/jackrabbit/repository.xml and check the DataStore element, which can be either a DbDataStore or a FileDataStore. Do not start the application server at the end of the install process.
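For instance, you can quickly locate the DataStore element from the command line:

grep -A 3 '<DataStore' <digital-experience-manager-web-app-dir>/WEB-INF/etc/repository/jackrabbit/repository.xml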

2.3 Apply your specific configurations on your new installation

Apply your backed-up configuration (usually the digital-factory-config folder content is enough) to your new installation.
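For example (the backup location below is an example):

cp -R /home/backup/digital-factory-config/. <DX_HOME>/digital-factory-config/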

2.4 Deploy your templates and modules

Deploy your template set(s) and modules.

2.5 Restore the binaries stored on the filesystem

If you have chosen to store the binaries in your database, just skip this step. Copy your digital-factory-data/repository/ folder from your backup to your new installation. You will have the following structure:

repository
|_________datastore
|_________index
|_________version
|_________workspaces
| |___default
| | |____index
| | |____lock
| | |____repository.xml
| |___live
| | |____index
| | |____lock
| | |____repository.xml
|_________indexing_configuration.xml
|_________indexing_configuration_version.xml

If you have chosen an alternative location of the datastore folder during the Digital Experience Manager configuration wizard (cluster installation), please restore it at the appropriate location.

Remove the 2 “lock” files. If possible, we also recommend removing the 3 “index” folders. Those folders store the JCR indexes, which will be regenerated at first startup if missing. Regenerating them will improve performance, but this operation will take a variable amount of time, depending on the amount of data you have. If you are doing an emergency restore of a production server, you can keep the former indexes to save time.
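For example, from the repository folder of the new installation:

cd <DX_HOME>/digital-factory-data/repository
rm workspaces/default/lock workspaces/live/lock              # the 2 "lock" files
rm -r index workspaces/default/index workspaces/live/index   # the 3 "index" folders (optional)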

2.6 Safe backup restore (DX 7.1.2.6+)

Starting with DX 7.1.2.6, we've introduced special handling for the safe restore of a backed-up environment, which happens on startup of the restored DX instance.

It currently includes:

  • the purge of the JGROUPSPING database table data, which is responsible for cluster membership discovery. This prevents the restored DX instance from trying to connect to the "source" cluster nodes (usually, the "source" is a production cluster)
  • the mail service is disabled (in case it was enabled on the "source" DX instance). This prevents the restored DX instance from connecting to the mail server used by the source instance for sending e-mail messages.

In order to use the safe backup restore, create, before starting the restored DX instance, a "marker" file (just an empty file) named backup-restore under your <digital-factory-data> folder, i.e. <DX_RESTORE_DIR>/digital-factory-data/backup-restore, on each of the DX nodes you are restoring. This gives DX a hint that the safe backup restore procedure should be "triggered" during startup.
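For example, on a UNIX-like system (the restore directory is a placeholder):

touch <DX_RESTORE_DIR>/digital-factory-data/backup-restore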

2.7 Restart the Digital Experience Manager server

As the last step, restart your reinstalled Digital Experience Manager application.

3 How to handle module generation timeouts?

As mentioned in chapter “4.3.3 The front-end HTML cache layer”, you may sometimes get exceptions saying, “Module generation takes too long due to module not generated fast enough (>10000 ms).” This happens when two requests try to get the same module output at the same time. To save resources, Digital Experience Manager lets just one request render the output while the other request waits for it. The maximum wait time is configured in jahia.properties with the parameter moduleGenerationWaitTime. If rendering the module takes longer than this time, the waiting request is cancelled with the exception.
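A minimal sketch of the corresponding jahia.properties entry (the value shown matches the 10000 ms from the exception message above):

# Maximum time (in milliseconds) a request waits for another request
# that is generating the same module output
moduleGenerationWaitTime = 10000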

The reasons for this exception vary. It could indicate a lack of configured resources (number of database connections, heap memory, maximum number of file handles, etc.), bottlenecks (slow disk, locks, unnecessary synchronization, etc.), problems with modules (JSPs getting compiled, modules opening sockets and waiting for a response without a timeout, etc.) or bugs/performance issues in the code.

The best way to identify the issue is to analyze thread dumps. Along with the exception, Digital Experience Manager should have automatically created a thread dump (unless the server load is too high), which is already a good start. If the scenario is reproducible, it is also good to create multiple thread dumps at short intervals of a few seconds (see the Thread dump Management tool mentioned in chapter “6.4.1 System and Maintenance”, which is able to create multiple thread dumps).
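If you prefer the command line, a simple loop around the JDK's jstack tool can produce several thread dumps a few seconds apart (replace <pid> with the process id of your application server):

for i in 1 2 3 4 5; do
    jstack <pid> > threaddump_$i.txt
    sleep 5
done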

The thread dump may, for instance, show that JSP compilation is the cause of the problem. In this case you have to ensure that JSPs are precompiled after deployment (see the JSP Pre-Compilation tool in chapter “6.4.1 System and Maintenance”) before the server is exposed to public requests (e.g. keep it in Maintenance Mode). In the error log you should be able to see the URL of the request leading to the timeout, as well as the cache key of the module that is not being rendered quickly enough. You can also watch out for the other thread, which is rendering the same module, and see whether, for instance, it is stuck in some slow or non-responding methods, locks, etc.

You should also analyze the error log file from that time to see if there are other exceptions before or after the incident that indicate that the server is running out of resources. In such a case, you may have to utilize or configure more resources for the server.

It could also be an indication that the server is overloaded and not able to serve the number of requests. In such a case, you should consider running Digital Experience Manager in a cluster or adding more cluster nodes to handle the expected load.

4 How to clean referencesKeeper nodes?

The /referencesKeeper node is used during the import of content/sites. Whenever there is a reference property in the imported content whose value cannot be resolved immediately, because e.g. the path or UUID does not exist yet, a jnt:reference entry is created under /referencesKeeper in order to resolve the reference at a later time, when the path or UUID becomes available (e.g. after importing other related content). Once the path becomes available, the reference is set correctly and the node under referencesKeeper is removed. Digital Experience Manager cannot know whether these references will become resolvable in the future, which is why they are not deleted automatically. The downside is that this list can keep growing.

If the number of referencesKeeper nodes is growing in your environment, you need to look at the nodes and identify, from the j:node reference, the j:propertyName and the j:originalUuid, whether the reason is an unresolvable reference found in one of your import files. In that case you need to fix the repository.xml (or live-repository.xml) in the import file and delete the corresponding jnt:reference nodes manually.

Since Digital Experience Manager 6.6.2.3, and thus also in 7.0.0, we have reduced the cases where we make use of the referencesKeeper node, as we saw that on customers’ sites the number of sub-nodes could grow to hundreds of thousands, causing performance degradation on import and module deployment. We also now log a warning when the number of sub-nodes exceeds 5000. In that case it is necessary to clean the nodes manually.

To do so, go to the JCR query tool (see “6.4.5 JCR Data”), set the limit to 10000 and run the following SQL-2 query:

SELECT * FROM [jnt:reference]

You could also add a where clause if you want to delete only specific nodes that you know are unresolvable, but most of the time all of them will turn out to be unresolvable. After entering the query and the limit, activate the "Show actions" checkbox. After fetching the first 10000 results, select the "Delete ALL" link, which will remove these 10000 entries. You will have to run the query multiple times until you get rid of all entries. You should do this at off-peak times. To run it overnight, you could also raise the limit to e.g. 50000 (modify it in the URL: ...&limit=50000&offset=0&displayLimit=100) in order to remove 50000 references in one attempt.
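For example, a hypothetical where clause restricting the deletion to references for one specific property (the property value below is illustrative) could look like this:

SELECT * FROM [jnt:reference] WHERE [j:propertyName] = 'myReferenceProperty'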

5 How to configure Digital Experience Manager to run behind Apache HTTP Server (httpd)?

This chapter contains an overview of the Apache HTTP Server (aka “httpd”) configuration to serve as a front-end server for Digital Experience Manager 7.1. Please follow the instructions in the corresponding section, depending on the chosen communication type.

5.1 Apache httpd 2.2.x / 2.4.x with mod_proxy_*

This section covers the configuration where requests are proxied to Tomcat’s AJP connector (port 8009) or HTTP connector (port 8080). The mod_proxy_ajp or mod_proxy_http module is used in this case, so the following modules have to be enabled:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
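On Debian-based distributions, where Apache modules are usually enabled via a2enmod rather than explicit LoadModule directives, the equivalent would for instance be:

a2enmod proxy proxy_ajp proxy_http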

5.1.1 Using mod_proxy_ajp

The configuration via mod_proxy_ajp in this case is as follows:

<VirtualHost *:80>
    ServerName digital-experience-manager-server
    ProxyPreserveHost On
    ProxyPass / ajp://localhost:8009/ connectiontimeout=20 timeout=300 ttl=120
    ProxyPassReverse / ajp://localhost:8009/
</VirtualHost>

5.1.2 Using mod_proxy_http

In a similar way, the configuration via mod_proxy_http is as follows:

<VirtualHost *:80>
    ServerName digital-experience-manager-server
    ProxyPreserveHost On
    ProxyPass / http://localhost:8080/ connectiontimeout=20 timeout=300 ttl=120
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>

5.2 Apache httpd 2.2.x / 2.4.x with mod_jk

This section covers the configuration where requests are proxied to Tomcat’s AJP connector (port 8009). The mod_jk module is used in this case, so it has to be enabled:

LoadModule jk_module modules/mod_jk.so

The configuration looks as follows:

JkWorkersFile conf/workers.properties
<VirtualHost *:80>
    ServerName digital-experience-manager-server
    JkMount / df
    JkMount /* df
</VirtualHost>

And the workers.properties file content is:

worker.list=df
# Connect to Tomcat's AJP connector
worker.df.port=8009
worker.df.host=localhost
worker.df.type=ajp13
# Probe the backend connections (A = all ping modes)
worker.df.ping_mode=A
# Timeouts: socket connect and reply in milliseconds, pool timeout in seconds
worker.df.socket_connect_timeout=10000
worker.df.reply_timeout=300000
worker.df.connection_pool_timeout=600

6 How to add a new node to a cluster environment by cloning an existing one?

It is possible to add a new node to a cluster environment without using the installer. This can be done by cloning an existing one.

Before proceeding with such an operation, you need to ensure that the following prerequisites are met:

  • The correct version of Oracle Java is already installed
  • The default Tomcat ports are not used
  • The nodes are in the same network
  • The new node is allowed to access the DX database
  • Your license allows this new node with its IP address

Steps to follow:

  1. Completely stop a working node that is not the processing one (to prevent any modification of the indexes)
  2. Create an archive of the "DX_HOME" folder without the datastore, transfer the archive to the new server and uncompress it (see the example command after this list)
  3. Modify the file DX_HOME/digital-factory-config/jahia/jahia.node.properties:
    • Property "cluster.node.serverId": set it to a unique value (for example the FQDN of the server)
    • Property "processingServer": check that it is set to "false" if there is already a processing server in your cluster environment
  4. Mount the network shared folder related to the datastore (which corresponds to the property "jahia.jackrabbit.datastore.path" in DX_HOME/digital-factory-config/jahia/jahia.properties)
  5. Start both the stopped and the new nodes
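As an illustration of step 2, assuming DX is installed under /home/jahia/DX_HOME and the datastore resides on a shared mount excluded from the archive:

tar czf /tmp/dx-node.tar.gz --exclude='*/repository/datastore' -C /home/jahia DX_HOME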

Verifications:

  • Verify that you don't have any errors in the logs
  • Go to the URL "DX_URL/modules/tools/cluster.jsp" and verify that all the cluster nodes are OK
  • Create some content in DX and verify that it is accessible from all nodes

7 How to copy repository indexes to other cluster nodes?

Starting with Digital Factory 7.0.0.3, the process of manually synchronizing repository indexes between cluster nodes became easier.

This manual synchronization is not needed at runtime, but can be quite useful in the following cases:

  • A cluster node was down for quite a long time (“cold standby” case) and its startup should be fast, avoiding replaying the repository changelog journal (the journal records all the content modifications on other cluster nodes, which need to be replayed by this node to bring its index up-to-date).
  • The indexes of one node are physically corrupted and need to be replaced by “healthy” indexes from another node in the cluster.
  • A full repository content re-indexing was performed on the processing node (say, during the Digital Factory upgrade process) and you would like to synchronize those indexes to the other cluster members.

Please follow these steps to replicate the indexes from one node (the source) to the other (the target):

  1. Shut the source server down and wait for the shutdown to complete
  2. Shut the target server down
  3. Delete the index folders on the target server:
    <target>/digital-factory-data/repository/index
    <target>/digital-factory-data/repository/workspaces/default/index
    <target>/digital-factory-data/repository/workspaces/live/index
  4. Copy the corresponding index folders from the source server to the target. You could copy the content of <source>/digital-factory-data/repository to <target>/digital-factory-data/repository, omitting the datastore folder (see the example command after this list)
  5. Copy the file <source>/digital-factory-data/repository/revisionNode to the <target>/digital-factory-data/repository folder
  6. You can now start the source and the target nodes
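As an illustration of steps 4 and 5, using rsync with the placeholder paths from the list above:

rsync -av --exclude='datastore' <source>/digital-factory-data/repository/ <target>/digital-factory-data/repository/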