How to back up Jahia?
Backing up your system is useful in many cases, as it minimizes the risk of losing all of your data, whether on the database or the server side.
A database dump contains a record of the table structure and/or the data from a database, usually in the form of a list of SQL statements. A database dump is useful for backing up a database so that its contents can be restored in the event of data loss (or, in our case, for reusing an environment). To ensure that the database dump does not have inconsistencies, which may be caused by heavy write operations, we recommend either of the following options before performing the dump:
- Shut down the Jahia server and back up.
- Activate the Full Read-only mode and back up. With Full Read-only mode, your modules must have been developed to handle this mode, so that users who try to write to the JCR do not receive errors.
There are many software products (proprietary or open source) that can perform a database dump for all types of databases. Here, we use MySQL as an example:
mysqldump -urootUser -p digitalExperienceManager7 > digital_experience_manager_7_v1.sql
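If your tables use InnoDB, a consistent dump can also be taken without stopping writes by using mysqldump's --single-transaction option; a sketch reusing the example database and user names from above:

```shell
# Consistent snapshot of InnoDB tables without stopping the server;
# database name and user are the example values used above.
mysqldump -urootUser -p --single-transaction --quick \
    digitalExperienceManager7 > digital_experience_manager_7_v1.sql
```

Note that --single-transaction only guarantees consistency for transactional storage engines; for mixed or non-InnoDB schemas, prefer one of the two options above.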
Jahia runtime data
You should back up the whole digital-factory-data folder. It includes modules, the JCR repository and other runtime data. If, during the configuration wizard, you chose filesystem-based binary storage (the default option) and changed the location of the datastore folder, you should also back up that folder.
Web applications and portlets
If you have no additional web applications (or portlets) used inside your Jahia server, you can skip this part. Any additional web applications you may have deployed are usually located in Apache Tomcat under:
You can back up all web applications or only the ones you use. If you installed third-party portlets, be sure to check their respective documentation. Depending on whether or not the webapp stores information, the way you back it up differs. If the webapp stores nothing, you can back up either the .war file you used to deploy the portlet, or the subfolder of "webapps/" in which the webapp was deployed. If the webapp stores data, you also have to back up that data.
All major configuration files are located in the digital-factory-config folder and under the <digital-factory-web-app-dir>/WEB-INF/etc/ folder. If you are on UNIX, for regular backups of your Jahia data, you can create a script file and run it through a cron job. A typical example of this script could be:
DAY=`date +%u`
/bin/tar cvfz /home/backup/tomcat_$DAY.tar.gz /home/jahia/tomcat/ # list of folders to copy
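A slightly more complete sketch of such a backup script, with a weekday-based rotation (one archive per weekday, overwritten after seven days); the SRC and DEST paths are examples to adapt to your installation:

```shell
#!/bin/sh
# Rotating daily backup: one archive per weekday, overwritten after 7 days.
# SRC and DEST are example paths; adjust them to your installation.
SRC=${SRC:-/home/jahia/tomcat}
DEST=${DEST:-/home/backup}
DAY=$(date +%u)                       # day of week, 1 (Mon) .. 7 (Sun)
if [ -d "$SRC" ]; then
    mkdir -p "$DEST"
    tar czf "$DEST/tomcat_$DAY.tar.gz" "$SRC"
fi
```

The script can then be scheduled with cron, for example via a crontab entry such as `0 2 * * * /home/jahia/backup.sh` (the script path is hypothetical).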
How to restore an environment from a backup?
Restore your database dump
Please refer to your database documentation for specific instructions on how to perform this.
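Continuing the MySQL example from the backup section, a restore sketch (database name and dump file are the example values from above; the target database must already exist):

```shell
# Load the dump produced earlier into the target database.
mysql -urootUser -p digitalExperienceManager7 < digital_experience_manager_7_v1.sql
```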
During the configuration wizard, instead of connecting to a new empty database, connect to your newly restored database. Uncheck the option to create the tables inside this database. Take care to specify the same value as you did for your former installation regarding the storage of the binaries (inside the database or on the filesystem). If you do not remember, open <digital-experience-manager-web-app-dir>/WEB-INF/etc/repository/jackrabbit/repository.xml and check the DataStore element, which could either be a DbDataStore or a FileDataStore. Do not start the application server at the end of the install process.
Apply your specific configurations on your new installation
Apply your backed-up configuration (usually the digital-factory-config folder content is enough) to your new installation.
Deploy your templates and modules
Deploy your templates sets and modules.
Restore the binaries stored on the filesystem
If you have chosen to store the binaries in your database, just skip this step. Copy your digital-factory-data/repository/ folder from your backup to your new installation. You will have the following structure:
repository
|_________datastore
|_________index
|_________version
|_________workspaces
|         |___default
|         |      |____index
|         |      |____lock
|         |      |____repository.xml
|         |___live
|                |____index
|                |____lock
|                |____repository.xml
|_________indexing_configuration.xml
|_________indexing_configuration_version.xml
If you have chosen an alternative location of the datastore folder during the Jahia configuration wizard (cluster installation), please restore it at the appropriate location.
Remove the 2 "lock" files. If possible, we also recommend removing the 3 "index" folders. These folders store the JCR indexes, which are regenerated at first startup if missing. Regenerating them improves performance, but this operation takes a variable amount of time, depending on the amount of data you have. If you are doing an emergency restore of a production server, you can keep the former indexes to save time.
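The cleanup described above can be sketched as follows; REPO is an example path pointing at the restored repository folder, with the layout shown above:

```shell
#!/bin/sh
# Remove JCR lock files and (optionally) index folders after a restore.
# REPO is an example path; point it at your restored repository folder.
REPO=${REPO:-digital-factory-data/repository}
# The two lock files must always be removed.
rm -f "$REPO/workspaces/default/lock" "$REPO/workspaces/live/lock"
# Optional: drop the indexes so they are rebuilt at first startup
# (skip this for an emergency restore to save startup time).
rm -rf "$REPO/index" "$REPO/workspaces/default/index" "$REPO/workspaces/live/index"
```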
Safe backup restore
The safe backup restore is only relevant when you are restoring a Jahia clustered backup on another infrastructure, for example when cloning a production environment to preproduction/test. It is not needed for a normal restore of a Jahia environment.
For details, see Safe environment clone (aka Safe backup restore) in this document.
Restart the Jahia server
For the last step, you must restart your reinstalled Jahia application.
How to handle module generation timeouts?
As mentioned in The front-end HTML cache layer, you may sometimes get exceptions saying "Module generation takes too long due to module not generated fast enough (>10000 ms)". This happens when two requests try to get the same module output at the same time. To save resources, Jahia lets just one request render the output and makes the other request wait for it. The maximum wait time is configured in jahia.properties with the moduleGenerationWaitTime parameter. If rendering the module takes longer than this time, the waiting request is canceled with the exception.
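If module rendering legitimately takes longer on your system, the wait time can be raised in jahia.properties. The value in the sketch below mirrors the 10000 ms from the exception message; verify the current value and exact syntax in your own jahia.properties file:

```
# Maximum time (in ms) a request waits for another request
# that is rendering the same module output
moduleGenerationWaitTime = 10000
```

Raising this value only masks the symptom, however; the analysis steps below should be followed to find the actual cause.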
The reasons for this exception are various. It could either be an indication that sufficient configured resources are lacking (number of database connections, heap memory, maximum number of file handles, etc.), bottlenecks (slow disk, locks, unnecessary synchronization, etc.), problems with modules (JSPs getting compiled, modules opening sockets and waiting for response without timeout, etc.) or bugs/performance issues in the code.
The best way to identify the issue is to analyze thread dumps. Along with the exception, Jahia should have automatically created a thread dump (unless the server load is too high), which already is a good start. If the scenario is reproducible, it would also be good to create multiple thread dumps in short intervals of a few seconds (see Thread dump Management tool, mentioned in System and Maintenance, which is able to create multiple thread dumps).
The thread dump may, for instance, show that JSP compilation is the cause of the problem. In this case, you have to ensure that JSPs are precompiled after deployment (see the JSP Pre-Compilation tool in System and Maintenance) before the server is exposed to public requests (for example, keep it in Maintenance Mode). In the error log you should be able to see the URL of the request leading to the timeout, as well as the cache key of the module that is not getting rendered quickly enough. You can also watch for the other thread that is rendering the same module and see whether, for instance, it is stuck in slow or non-responding methods, locks, etc.
You should also analyze the error log file from that time to see if there are other exceptions before or after the incident that indicate that the server is running out of resources. In such a case, you may have to utilize or configure more resources for the server.
It could also be an indication that the server is overloaded and not able to serve the number of requests. In such a case, you should consider running Jahia in a cluster or adding more cluster nodes to handle the expected load.
How to clean referencesKeeper nodes?
The /referencesKeeper node is used during the import of content/sites. Whenever there is a reference property in the imported content whose value cannot be resolved immediately, for example because the path or UUID does not exist yet, a jnt:reference entry is created under /referencesKeeper to resolve the reference at a later time, when the path or UUID becomes available (for example, after importing other related content). Once the path becomes available, the reference is correctly set and the node under referencesKeeper is removed. Jahia cannot know whether the remaining references will become resolvable in the future, which is why they are not deleted automatically. The downside is that this list can keep growing.
If the number of referencesKeeper nodes keeps growing in your environment, you need to look at the nodes and identify, from the j:node reference, the j:propertyName and the j:originalUuid, whether the reason is an unresolvable reference in one of your import files. In that case, you need to fix the repository.xml (or live-repository.xml) in the import file and delete the corresponding jnt:reference nodes manually.
We log a warning when the number of sub-nodes of the referencesKeeper node exceeds 5000. In that case, it is necessary to clean the nodes manually.
For that, go to the JCR query tool (see JCR Data), set the limit to 10000 and use the following SQL-2 query:
SELECT * FROM [jnt:reference]
You can also add a WHERE clause if you want to delete only specific nodes that you know are unresolvable, but most of the time all of them turn out to be unresolvable. After entering the query and the limit, activate the "Show actions" checkbox. After fetching the first 10000 results, click the "Delete ALL" link, which removes all these 10000 entries. You will have to run the query multiple times until you get rid of all entries. You should do this at off-peak times. To run it overnight, you can also raise the limit, for example to 50000 (modify it in the URL: ...&limit=50000&offset=0&displayLimit=100), to remove 50000 references in one attempt.
How to configure Jahia to run behind Apache HTTP Server (httpd)
This chapter contains an overview of the Apache HTTP Server (aka "httpd") configuration to serve as a front-end server for Jahia. Please follow the instructions of the corresponding section, depending on the chosen communication type.
Apache httpd 2.2.x / 2.4.x with mod_proxy_*
This section covers the configuration where requests are proxied to Tomcat's AJP connector (port 8009) or HTTP connector (port 8080). The mod_proxy_ajp or mod_proxy_http module is used in this case, so the following modules have to be enabled:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
The configuration via mod_proxy_ajp in this case is as follows:
<VirtualHost *:80>
    ServerName digital-experience-manager-server
    ProxyPreserveHost On
    ProxyPass / ajp://localhost:8009/ connectiontimeout=20 timeout=300 ttl=120
    ProxyPassReverse / ajp://localhost:8009/
</VirtualHost>
In a similar way, the configuration via mod_proxy_http is as follows:
<VirtualHost *:80>
    ServerName digital-experience-manager-server
    ProxyPreserveHost On
    ProxyPass / http://localhost:8080/ connectiontimeout=20 timeout=300 ttl=120
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
Apache httpd 2.2.x / 2.4.x with mod_jk
This section covers the configuration where requests are proxied to Tomcat's AJP connector (port 8009). The mod_jk module is used in this case, so it has to be enabled:
LoadModule jk_module modules/mod_jk.so
The configuration looks as follows:
JkWorkersFile conf/workers.properties

<VirtualHost *:80>
    ServerName digital-experience-manager-server
    ProxyPreserveHost On
    JkMount / df
    JkMount /* df
</VirtualHost>
And the workers.properties file content is:
worker.list=df
worker.df.port=8009
worker.df.host=localhost
worker.df.type=ajp13
worker.df.ping_mode=A
worker.df.socket_connect_timeout=10000
worker.df.reply_timeout=300000
worker.df.connection_pool_timeout=600
How to add a new node to a cluster environment by cloning an existing one?
It is possible to add a new node to a cluster environment without using the installer. This can be done by cloning an existing one.
Before proceeding with such an operation, you need to ensure that the following prerequisites are met:
- The correct version of Oracle Java is already installed
- The default Tomcat ports are not already in use
- The nodes are in the same network
- The new node is allowed to access the Jahia database
- Your license allows this new node with its IP address
Steps to follow:
- Completely stop a working node that is not the processing one (to prevent any modification of the indexes)
- Create an archive of the "DX_HOME" folder without the datastore, transfer the archive to the new server, and uncompress it
- Modify the following files:
- Property "cluster.node.serverId": set it to a unique value (for example the FQDN of the server)
- Property "processingServer": check that it is set to "false" if there is already a processing server in your cluster environment.
- Mount the network shared folder for the datastore (which corresponds to the property "jahia.jackrabbit.datastore.path" in DX_HOME/digital-factory-config/jahia/jahia.properties)
- Start both the stopped and new nodes
- Verify that you don't have any error in the logs
- Go to the URL "DX_URL/modules/tools/cluster.jsp" and verify that all the cluster nodes are ok
- Create content in Jahia and verify that it is accessible from all nodes
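The archiving and transfer steps above can be sketched like this; the DX_HOME path and the "newnode" host name are examples:

```shell
#!/bin/sh
# Archive DX_HOME, excluding the shared datastore, and copy it to the new node.
# DX_HOME and the 'newnode' host are placeholders; adjust to your environment.
DX_HOME=${DX_HOME:-/home/jahia/DX}
ARCHIVE=${ARCHIVE:-/tmp/dx-clone.tar.gz}
if [ -d "$DX_HOME" ]; then
    tar czf "$ARCHIVE" --exclude='digital-factory-data/repository/datastore' \
        -C "$DX_HOME" .
    scp "$ARCHIVE" newnode:/tmp/    # 'newnode' is a placeholder host name
fi
```

The --exclude pattern keeps the (network-shared) datastore out of the archive, since the new node mounts it separately, as described in the steps above.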
How to copy repository indexes to other cluster nodes?
This manual synchronization is not needed at runtime, but can be quite useful in the following cases:
- A cluster node was down for quite a long time ("cold standby" case) and its startup should be made fast by avoiding replaying the repository changelog journal (the journal records all content modifications on other cluster nodes, which need to be replayed by this one to bring the index up to date).
- Indexes of one node are physically corrupted and need to be replaced by “healthy” indexes from another node in the cluster.
- A full repository content re-indexing was performed on the processing node (say, during the Digital Factory upgrade process) and you would like to synchronize those indexes to other cluster members.
Please follow these steps to replicate the indexes from one node (the source) to another (the target):
- Shut down the source server and wait for the shutdown to complete
- Shut down the target server
- Delete the indexes folders on the target server:
- Copy the corresponding index folders from the source server to the target. You can copy the content of <source>/digital-factory-data/repository to <target>/digital-factory-data/repository, omitting the datastore folder.
- Copy the file <source>/digital-factory-data/repository/revisionNode to the <target>/digital-factory-data/repository folder.
- You can now start the source and target nodes.
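The copy steps can be sketched with rsync, assuming both repository folders are reachable from one machine (for example over a shared mount); the SRC and TGT paths are examples:

```shell
#!/bin/sh
# Replicate JCR index folders from source to target, skipping the datastore.
# SRC and TGT are example paths; adjust them to your environment.
SRC=${SRC:-/opt/source/digital-factory-data/repository}
TGT=${TGT:-/opt/target/digital-factory-data/repository}
if [ -d "$SRC" ]; then
    rsync -a --exclude 'datastore/' "$SRC"/ "$TGT"/
fi
```

Because rsync -a copies the whole repository folder apart from the excluded datastore, the revisionNode file is transferred as part of the same run.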
Connector settings, especially the maxThreads and acceptCount values, should be adjusted to achieve high performance and scalability in production.
For production systems, the memory options should also be adjusted to achieve high performance and scalability.
Jahia startup options
There are several "actions" a Jahia server can be instructed to perform at startup. The instructions are given by creating so-called marker files on the file system (a marker file can be empty; only its name matters), which are detected by Jahia on startup (not during runtime), and the corresponding actions are performed. Marker files are "one time" instructions: a marker file is deleted after it is detected by Jahia on startup, so that the actions are performed once and not on subsequent restarts.
The following sections describe the available markers, the corresponding actions, and possible use cases for them.
Indexing startup options
The following marker files instruct Jahia to perform the described actions on startup. The locations may vary in your Jahia environment, but the markers should be located in the JCR repository home folder, which is configured via the jahia.jackrabbit.home property in your jahia.properties file and by default is located at <digital-factory-data>/repository.
- <jahia.jackrabbit.home>/reindex: instructs Jahia to perform a full JCR content repository re-indexing on the next startup.
- <jahia.jackrabbit.home>/index-fix: tells Jahia to perform a JCR content index consistency check and repair on the next startup.
- <jahia.jackrabbit.home>/index-check: a consistency check (no repair, no changes) is performed for the JCR content indexes on the next startup.
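For example, to trigger a full re-indexing on the next startup, create the empty marker file while the server is stopped; the repository home path below is an example, so use your configured jahia.jackrabbit.home:

```shell
#!/bin/sh
# Create the 'reindex' marker; Jahia deletes it after acting on it at startup.
# JR_HOME is an example path; point it at your configured jahia.jackrabbit.home.
JR_HOME=${JR_HOME:-./digital-factory-data/repository}
mkdir -p "$JR_HOME"        # for illustration only; the folder exists on a real install
touch "$JR_HOME/reindex"
```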
Environment and cluster startup options
There are several markers for startup options, which should help you in case of, for example, cloning of production environment (with later restore at another location, say pre-production one) or if you are using the Rolling upgrade Jahia feature.
Disable mail service option
The following marker file, when created, forces Jahia to disable mail service (if it was active):
This option is mainly used as part of another instruction in case of a safe clone of a Jahia environment into another location.
Reset cluster discovery info option
The following marker file, when created, instructs the server to clean the cluster membership information on startup, which makes Jahia re-discover the cluster members. In effect, it does the following:
- Purges the JGROUPSPING database table data, which is responsible for cluster membership discovery. This prevents the restored Jahia instance from trying to connect to the "source" cluster nodes (in the case of a clone of an environment, these are usually the "source" production cluster nodes)
- Deletes the discovery.config file (located under the <digital-factory-data>/bundles-deployed/... folder), where the clustering component (Hazelcast) stores the membership discovery information
This option is mainly used as part of another instruction in case of a safe clone of a Jahia environment into another location or rolling upgrade procedure.
The marker file for this option is:
Use cases for Jahia startup options
The following sections show the usage of startup options for various scenarios.
Safe environment clone (aka Safe backup restore)
This procedure is mainly used when you would like to create a copy (snapshot) of your production clustered Jahia environment and restore it at another location (say, a pre-production or test server).
The following marker is used in this case: <DX_RESTORE_DIR>/digital-factory-data/safe-env-clone. It instructs Jahia to reset the cluster discovery info and disable the mail service (see the sections above for a description of those actions).
The procedure for the Jahia Rolling upgrade feature requires a reset of the cluster discovery information for a cluster node during the upgrade procedure.
A dedicated marker file, <digital-factory-data>/rolling-upgrade, is supported since DX 18.104.22.168 and 7.3.0, and it performs that action. See Reset cluster discovery info for the details (in DX 22.214.171.124 and 7.3.0, reset-discovery-info is an equivalent of the