Backing up your system minimizes the risk of losing your data, whether it resides on the database side or on the server side.
A database dump contains a record of the table structure and/or the data from a database, usually in the form of a list of SQL statements. A dump is useful for backing up a database so that its contents can be restored in the event of data loss (or, in our case, for reusing an environment). It can be performed at any time (even while the Digital Experience Manager server is running), but it is usually preferable to shut down your Digital Experience Manager before dumping your database.
There are many software products (proprietary or Open Source) that can perform a database dump for all types of databases. Here, we will use the example of MySQL:
mysqldump -urootUser -p digitalExperienceManager7 > digital_experience_manager_7_v1.sql
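The dump can later be restored with the mysql client; a sketch, where the database name, user, and file name follow the example above:

```shell
# Load the dump back into a database of the same name;
# names are taken from the mysqldump example above.
mysql -urootUser -p digitalExperienceManager7 < digital_experience_manager_7_v1.sql
```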
You should back up the whole digital-factory-data folder. It includes modules, the JCR repository, and other runtime data. If during the configuration wizard you chose filesystem-based binary storage (the default option) and changed the location of the datastore folder, you should also back up that folder.
If you have no additional Web applications (or portlets) used inside your Digital Experience Manager server, you can skip this part. Any additional Web applications you may have deployed are usually located on Apache Tomcat under:
<tomcat-home>/webapps
You can back up all web applications or only the ones you use. If you installed third-party portlets, be sure to check their respective documentation. The way you back up a webapp differs depending on whether it stores data. If the webapp stores nothing, you can back up either the .war file you used to deploy the portlet or the subfolder of “webapps/” in which the webapp has been deployed. If the webapp stores data, you will also have to back that data up.
All major configuration files are located in the digital-factory-config folder and also under the <digital-factory-web-app-dir>/WEB-INF/etc/ folder. If you are on UNIX, you can create a script file for regular backups of your Digital Experience Manager data and run it through a cron job. A typical example of such a script could be:
DAY=`date +%u`
/bin/tar cvfz /home/backup/tomcat_$DAY.tar.gz /home/jahia/tomcat/ #list of folders to copy
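Such a script can then be scheduled via cron; a hypothetical crontab entry running a backup script every night at 02:30 (the script path and log file are placeholders):

```shell
# m  h  dom mon dow  command
30 2  *   *   *   /home/jahia/bin/dx-backup.sh >> /var/log/dx-backup.log 2>&1
```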
Please refer to your database documentation for specific instructions on how to perform this.
During the configuration wizard, instead of connecting to a new empty database, connect to your newly restored database. Uncheck the option to create the tables inside this database. Take care to specify the same value as you did for your former installation regarding the storage of the binaries (inside the database or on the filesystem). If you do not remember, open <digital-experience-manager-web-app-dir>/WEB-INF/etc/repository/jackrabbit/repository.xml and check the DataStore element, which could either be a DbDataStore or a FileDataStore. Do not start the application server at the end of the install process.
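For reference, a filesystem-based binary storage configuration in Jackrabbit's repository.xml contains a FileDataStore entry similar to the following sketch (the parameter values are illustrative, not necessarily those of your installation):

```xml
<!-- repository.xml: binaries stored on the filesystem -->
<DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
  <param name="path" value="${rep.home}/datastore"/>
  <param name="minRecordLength" value="1024"/>
</DataStore>
```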
Apply your backed-up configuration (usually the digital-factory-config folder content is enough) to your new installation.
Deploy your templates set(s) and modules.
If you have chosen to store the binaries in your database, just skip this step. Copy your digital-factory-data/repository/ folder from your backup to your new installation. You will have the following structure:
repository
|____ datastore
|____ index
|____ version
|____ workspaces
|    |____ default
|    |    |____ index
|    |    |____ lock
|    |    |____ repository.xml
|    |____ live
|         |____ index
|         |____ lock
|         |____ repository.xml
|____ indexing_configuration.xml
|____ indexing_configuration_version.xml
If you have chosen an alternative location of the datastore folder during the Digital Experience Manager configuration wizard (cluster installation), please restore it at the appropriate location.
Remove the 2 “lock” files. If possible, we also recommend removing the 3 “index” folders. These folders store the JCR indexes, which are regenerated at first startup if missing. Regenerating them improves performance, but the operation takes a variable amount of time, depending on the amount of data you have. If you are doing an emergency restore of a production server, you can keep the former indexes to save time.
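Under the default layout, this cleanup can be sketched as follows (the "demo setup" lines recreate a mock repository structure so the snippet is self-contained; on a real server those folders already exist and the setup lines must be omitted):

```shell
REPO=digital-factory-data/repository   # adjust to your installation

# demo setup only: recreate a mock repository structure
mkdir -p "$REPO/index" "$REPO/workspaces/default/index" "$REPO/workspaces/live/index"
touch "$REPO/workspaces/default/lock" "$REPO/workspaces/live/lock"

# remove the 2 lock files
rm -f "$REPO/workspaces/default/lock" "$REPO/workspaces/live/lock"

# optionally remove the 3 index folders (regenerated at first startup)
rm -rf "$REPO/index" "$REPO/workspaces/default/index" "$REPO/workspaces/live/index"
```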
The safe backup restore described here is only relevant when you are restoring a DX clustered backup on another infrastructure, for example when cloning a production environment to preproduction/test. It is not needed for a normal restore of a DX environment.
For details, please refer to section "8.3.1 Safe environment clone (aka Safe backup restore)" further in this document.
For the last step, you must restart your reinstalled Digital Experience Manager application.
As mentioned in the chapter “4.3.3 The front-end HTML cache layer”, you may sometimes get exceptions saying, “Module generation takes too long due to the module not generated fast enough (>10000 ms).” This happens when two requests try to get the same module output at the same time. To save resources, Digital Experience Manager lets just one request render the output while the other request waits for it. The maximum wait time is configured in jahia.properties with the moduleGenerationWaitTime parameter. If rendering the module takes longer than this, the waiting request is canceled with the exception.
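The parameter is set in jahia.properties; a sketch, where 10000 ms is the value quoted in the error message above:

```properties
# jahia.properties: maximum time (in milliseconds) a request waits for a
# concurrent request to finish rendering the same module fragment
moduleGenerationWaitTime = 10000
```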
The reasons for this exception are various. It could be an indication of insufficient configured resources (number of database connections, heap memory, maximum number of file handles, etc.), of bottlenecks (slow disk, locks, unnecessary synchronization, etc.), of problems with modules (JSPs getting compiled, modules opening sockets and waiting for a response without a timeout, etc.), or of bugs/performance issues in the code.
The best way to identify the issue is to analyze thread dumps. Along with the exception, Digital Experience Manager should have automatically created a thread dump (unless the server load is too high), which already is a good start. If the scenario is reproducible, it would also be good to create multiple thread dumps in short intervals of a few seconds (see Thread dump Management tool mentioned in chapter “6.4.1 System and Maintenance”, which is able to create multiple thread dumps).
The thread dump may, for instance, show that JSP compilation is the cause of the problem. In this case, you have to ensure that JSPs are precompiled after deployment (see the JSP Pre-Compilation tool in chapter “6.4.1 System and Maintenance”) before the server is exposed to public requests (e.g. keep it in Maintenance Mode). In the error log, you should be able to see the URL of the request leading to the timeout, as well as the cache key of the module that is not getting rendered quickly enough. You can also look for the other thread that is rendering the same module and see whether, for instance, it is stuck in a slow or non-responding method, a lock, etc.
You should also analyze the error log file from that time to see if there are other exceptions before or after the incident that indicate that the server is running out of resources. In such a case, you may have to provide or configure more resources for the server.
It could also be an indication that the server is overloaded and unable to serve the incoming number of requests. In such a case, you should consider running Digital Experience Manager in a cluster, or adding more cluster nodes, to handle the expected load.
The /referencesKeeper node is used during the import of content/sites. Whenever there is a reference property in the imported content whose value cannot be resolved immediately (e.g. because the path or UUID does not exist yet), a jnt:reference entry is created under /referencesKeeper in order to resolve the reference at a later time, when the path or UUID becomes available (e.g. after importing other related content). Once the path becomes available, the reference is correctly set and the node is removed from referencesKeeper. Digital Experience Manager cannot know whether the remaining references will become resolvable in the future, which is why it does not delete them. On the other hand, this means the list can keep growing.
If the number of referencesKeeper nodes is growing in your environment, look at the nodes and determine, from the j:node reference, the j:propertyName and the j:originalUuid, whether the cause is an unresolvable reference found in one of your imported files. In that case, fix the repository.xml (or live-repository.xml) in the import file and delete the corresponding jnt:reference nodes manually.
Since Digital Experience Manager 6.6.2.3 (and therefore also in 7.0.0), we have reduced the cases where the referencesKeeper node is used, as we saw that on customers' sites the number of sub-nodes could grow to hundreds of thousands, causing performance degradation on import and module deployment. A warning is now also logged when the number of sub-nodes exceeds 5000. In that case, it is necessary to clean the nodes manually.
For that, go to the JCR query tool (see “6.4.5 JCR Data”), set the limit to 10000 and use the following SQL-2 query:
SELECT * FROM [jnt:reference]
You could also add a where clause if you want to delete only specific nodes that you know are unresolvable, but in most cases all of them will be unresolvable. After entering the query and the limit, activate the "Show actions" checkbox. After fetching the first 10000 results, select the "Delete ALL" link, which will remove these 10000 entries. You will have to run the query multiple times until you get rid of all entries. Do this at off-peak times. To run it overnight, you could also raise the limit to e.g. 50000 (modify it in the URL: ...&limit=50000&offset=0&displayLimit=100) to remove 50000 references in one pass.
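As a sketch, a where clause restricting the deletion to references originating from one node known to be unresolvable could look like this (the UUID value is a placeholder, not a real identifier):

```sql
SELECT * FROM [jnt:reference] WHERE [j:originalUuid] = 'placeholder-uuid'
```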
This chapter contains an overview of the Apache HTTP Server (aka “httpd”) configuration to serve as a front-end server for Digital Experience Manager 7.2. Please follow the instructions in the section corresponding to your chosen communication type.
This section is related to the configuration where the requests are proxied to Tomcat's AJP connector (port 8009) or HTTP connector (port 8080). The mod_proxy_ajp or mod_proxy_http module is used in this case, so the following modules have to be enabled:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
The configuration via mod_proxy_ajp is as follows:
<VirtualHost *:80>
ServerName digital-experience-manager-server
ProxyPreserveHost On
ProxyPass / ajp://localhost:8009/ connectiontimeout=20 timeout=300 ttl=120
ProxyPassReverse / ajp://localhost:8009/
</VirtualHost>
In a similar way, the configuration via mod_proxy_http is as follows:
<VirtualHost *:80>
ServerName digital-experience-manager-server
ProxyPreserveHost On
ProxyPass / http://localhost:8080/ connectiontimeout=20 timeout=300 ttl=120
ProxyPassReverse / http://localhost:8080/
</VirtualHost>
This section is related to the configuration where the requests are proxied to Tomcat's AJP connector (port 8009). The mod_jk module is used in this case, so it has to be enabled:
LoadModule jk_module modules/mod_jk.so
The configuration looks as follows:
JkWorkersFile conf/workers.properties
<VirtualHost *:80>
ServerName digital-experience-manager-server
JkMount / df
JkMount /* df
</VirtualHost>
And the workers.properties file content is:
worker.list=df
worker.df.port=8009
worker.df.host=localhost
worker.df.type=ajp13
worker.df.ping_mode=A
worker.df.socket_connect_timeout=10000
worker.df.reply_timeout=300000
worker.df.connection_pool_timeout=600
It is possible to add a new node to a cluster environment without using the installer, by cloning an existing node.
Before proceeding with such an operation, ensure that the following prerequisites are met:
Steps to follow:
Verifications:
Starting with Digital Factory 7.0.0.3, the process of manually synchronizing repository indexes between cluster nodes became easier.
This manual synchronization is not needed during runtime, but can be quite useful in the following cases:
Please follow these steps to replicate the indexes from one node (the source) to the other (the target):
<target>/digital-factory/data/repository/index
<target>/digital-factory/data/repository/workspaces/default/index
<target>/digital-factory/data/repository/workspaces/live/index
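Assuming both nodes are stopped, the replication can be sketched as follows (the SRC/DST roots are placeholders, and the "demo setup" line creates a mock source structure so the snippet is self-contained; omit it on a real server):

```shell
SRC=source/digital-factory/data/repository   # source node, adjust
DST=target/digital-factory/data/repository   # target node, adjust

# demo setup only: create a mock source structure and the target root
mkdir -p "$SRC/index" "$SRC/workspaces/default/index" \
         "$SRC/workspaces/live/index" "$DST"

# replace each target index folder with a copy of the source one
for d in index workspaces/default/index workspaces/live/index; do
    rm -rf "$DST/$d"                   # drop the stale target index
    mkdir -p "$(dirname "$DST/$d")"
    cp -a "$SRC/$d" "$DST/$d"          # copy the source index over
done
```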
[1] Connector settings, especially maxThreads and acceptCount values, should be adjusted accordingly to achieve high performance and scalability in a production run.
[2] For production systems, the memory options should be adjusted accordingly to achieve high performance and scalability.
There are several "actions" a DX server could be instructed to perform right on startup. The instructions are given by creating so-called marker files (it can be empty, only its name matters) on the file system, which is detected by DX on startup (not during runtime) and corresponding actions are performed. The marker files are "one-time" instructions, i.e. a marker file is deleted after it is detected by DX on startup so that the "actions" are performed once and not on consequent DX restarts.
The following sections describe the available markers, the corresponding actions, and possible use cases for them.
The following marker files instruct DX to perform the described actions on startup. The locations may vary on your DX environment, but the markers should be located in the JCR repository home folder, which is configured via jahia.jackrabbit.home in your jahia.properties file and by default is located at digital-factory-data/repository:
<jahia.jackrabbit.home>/reindex - instructs DX to perform a full JCR content repository re-indexing on the next startup.
<jahia.jackrabbit.home>/index-fix - instructs DX to perform a JCR content index consistency check and repair on the next startup.
<jahia.jackrabbit.home>/index-check - instructs DX to perform a JCR content index consistency check (no repair, no changes) on the next startup.

There are several markers for startup options which should help you in case of, for example, cloning a production environment (with a later restore at another location, say a pre-production one) or if you are using the DX Rolling upgrade feature.
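Creating a marker simply means creating an empty file. For example, to request a full re-index on the next startup (assuming the default jahia.jackrabbit.home location; the mkdir is only needed here to make the snippet self-contained):

```shell
mkdir -p digital-factory-data/repository   # demo only: ensure the folder exists
# one-time instruction: DX deletes this file after acting on it at startup
touch digital-factory-data/repository/reindex
```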
The following marker file, when created, forces DX to disable the mail service (if it was active): <digital-factory-data>/disable-mail-service
This option is mainly used as a part of another instruction in case of a safe clone of a DX environment into another location.
This marker is available since DX 7.2.3.2 / 7.3.0.
The following marker file, when created, instructs the server to clean the cluster membership information on startup, which makes DX re-discover the cluster members. In effect, it does the following:

- cleans the data (under the <digital-factory-data>/bundles-deployed/... folder) where the clustering component (Hazelcast) stores the membership discovery information

This option is mainly used as part of another instruction in case of a safe clone of a DX environment into another location, or during the rolling upgrade procedure.

The marker file for this option is: <digital-factory-data>/reset-discovery-info
This marker is available since DX 7.2.3.2 / 7.3.0.
The following sections show the usage of startup options for various scenarios.
The procedure is mainly used when you would like to create a copy (snapshot) of your production clustered DX environment and restore it at another location (say, a pre-production or test server).
The following marker is used in this case: <DX_RESTORE_DIR>/digital-factory-data/safe-env-clone. It instructs DX to reset the cluster discovery info and to disable the mail service (see the sections above for a description of those actions). This marker is available since DX 7.2.3.2 / 7.3.0. For DX versions 7.2.1.1+, the name backup-restore is used as an equivalent for this marker.
The procedure for the DX Rolling upgrade feature requires a reset of the cluster discovery information for a cluster node during the upgrade procedure.
A dedicated marker file, <digital-factory-data>/rolling-upgrade, which performs that action, is supported since DX 7.2.3.2 / 7.3.0. See section "8.2.2 Reset cluster discovery info" for details (in 7.2.3.2 / 7.3.0, reset-discovery-info is an equivalent of rolling-upgrade).