FAQs

November 14, 2023

How to handle module generation timeouts?

As mentioned in The front-end HTML cache layer, you may sometimes get exceptions saying, “Module generation takes too long due to module not generated fast enough (>10000 ms).” This happens when two requests try to get the same module output at the same time. To save resources, Jahia lets just one request render the output while the other request waits for the result. The maximum wait time is configured in jahia.properties with the parameter moduleGenerationWaitTime. If rendering the module takes longer than this time, the waiting request is canceled with the exception above.
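
For reference, a minimal sketch of the relevant jahia.properties entry (the value shown is illustrative; it corresponds to the 10000 ms mentioned in the message above):

# Maximum time (in ms) a request waits for another request that is
# rendering the same module (illustrative value)
moduleGenerationWaitTime = 10000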

The reasons for this exception vary. It could indicate insufficient configured resources (number of database connections, heap memory, maximum number of file handles, etc.), bottlenecks (slow disk, locks, unnecessary synchronization, etc.), problems with modules (JSPs being compiled on first request, modules opening sockets and waiting for a response without a timeout, etc.), or bugs and performance issues in the code.

The best way to identify the issue is to analyze thread dumps. Along with the exception, Jahia should have automatically created a thread dump (unless the server load is too high), which is a good starting point. If the scenario is reproducible, it is also good to create multiple thread dumps at short intervals of a few seconds (see the Thread dump Management tool, mentioned in System and Maintenance, which can create multiple thread dumps).
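
If you need to capture the dumps yourself, a minimal shell sketch like the following can be used, assuming the JDK's jstack tool is on the path and the Tomcat process can be identified via pgrep (the process pattern and output directory are assumptions; adjust them for your setup):

# Capture 5 thread dumps, 5 seconds apart
PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)
for i in 1 2 3 4 5; do
  jstack -l "$PID" > "/tmp/threaddump-$i-$(date +%H%M%S).txt"
  sleep 5
done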

The thread dump may, for instance, show that JSP compilation is the cause of the problem. In this case, ensure that JSPs are precompiled after deployment (see the JSP Pre-Compilation tool in System and Maintenance) before the server is exposed to public requests (for example, keep it in Maintenance Mode until then). In the error log, you should be able to see the URL of the request leading to the timeout, as well as the cache key of the module that is not being rendered quickly enough. You can also look for the other thread that is rendering the same module and see whether, for instance, it is stuck in slow or non-responding methods, locks, etc.

You should also analyze the error log file from that time to see if there are other exceptions before or after the incident that indicate the server is running out of resources. In such a case, you may have to allocate or configure more resources for the server.

It could also be an indication that the server is overloaded and unable to serve the number of incoming requests. In that case, consider running Jahia in a cluster or adding more cluster nodes to handle the expected load.

How to clean referencesKeeper nodes?

The /referencesKeeper node is used during the import of content/sites. Whenever the imported content has a reference property whose value cannot be resolved immediately, for example because the path or UUID does not exist yet, Jahia creates a jnt:reference entry under /referencesKeeper so that the reference can be resolved later, when the path or UUID becomes available (for example, after importing other related content). Once the path becomes available, the reference is set correctly and the node is removed from /referencesKeeper. Jahia cannot know whether the remaining references will become resolvable in the future, which is why they are not deleted automatically. The downside is that this list can keep growing.

If the number of referencesKeeper nodes is growing in your environment, look at the nodes and inspect the j:node reference, the j:propertyName, and the j:originalUuid to determine whether the cause is an unresolvable reference in one of your import files. In that case, fix the repository.xml (or live-repository.xml) in the import file and delete the corresponding jnt:reference nodes manually.

Jahia logs a warning when the number of sub-nodes of the /referencesKeeper node exceeds 5000. In that case, it is necessary to clean up the nodes manually.

To do so, go to the JCR query tool (see JCR Data), set the limit to 10000, and use the following SQL-2 query:

SELECT * FROM [jnt:reference]

You could also add a where clause if you want to delete only specific nodes that you know are unresolvable, although most of the time all of them turn out to be unresolvable. After entering the query and the limit, activate the "Show actions" checkbox. After fetching the first 10000 results, click the "Delete ALL" link, which removes all of these 10000 entries. You will have to run the query multiple times until all entries are gone. Do this at low-peak times. To run it overnight, you could also raise the limit, for example to 50000 (modify it in the URL: ...&limit=50000&offset=0&displayLimit=100), to remove 50000 references in one pass.
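
For example, a query restricted with a where clause could look like the following (the property name is hypothetical; substitute the j:propertyName value you identified earlier):

SELECT * FROM [jnt:reference] WHERE [j:propertyName] = 'myUnresolvedReference'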

How to configure Jahia to run behind Apache HTTP Server (httpd)?

This chapter provides an overview of the Apache HTTP Server (aka “httpd”) configuration needed to serve as a front-end server for Jahia. Follow the instructions in the corresponding section, depending on the chosen communication type.

Apache httpd 2.4.x with mod_proxy_*

This section covers the configuration where requests are proxied to Tomcat’s HTTP connector (port 8080). The mod_proxy_http module is used in this case, so the following modules have to be enabled:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
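
On Debian/Ubuntu-based systems, assuming the distribution's standard apache2 packaging, the same modules can be enabled with the a2enmod helper instead of editing the LoadModule lines directly:

sudo a2enmod proxy proxy_http proxy_wstunnel
sudo systemctl restart apache2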

Using mod_proxy_http

The configuration via mod_proxy_http is as follows, for a single node:

<VirtualHost *:80>
    ServerName digital-experience-manager-server
    ProxyPreserveHost On
    ProxyPass /modules/graphqlws  ws://localhost:8080/modules/graphqlws
    ProxyPassReverse /modules/graphqlws ws://localhost:8080/modules/graphqlws 
    ProxyPass / http://localhost:8080/ connectiontimeout=20 timeout=300 ttl=120
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>

For a cluster of two nodes:

<VirtualHost *:80>
    ServerName digital-experience-manager-server
    ProxyPreserveHost On

    <Proxy "balancer://jahiaWsCluster">
        BalancerMember "ws://IP_1:8080" connectiontimeout=20 timeout=300 ttl=120 min=10 max=100 loadfactor=1 route=jvmroute1
        BalancerMember "ws://IP_2:8080" connectiontimeout=20 timeout=300 ttl=120 min=10 max=100 loadfactor=2 route=jvmroute2
        Require all granted
        ProxySet lbmethod=byrequests stickysession=JSESSIONID|jsessionid
    </Proxy>
    ProxyPassMatch "^(/modules/graphqlws)$" balancer://jahiaWsCluster$1

    <Proxy "balancer://jahiaCluster">
        BalancerMember "http://IP_1:8080" connectiontimeout=20 timeout=300 ttl=120 min=10 max=100 loadfactor=1 route=jvmroute1
        BalancerMember "http://IP_2:8080" connectiontimeout=20 timeout=300 ttl=120 min=10 max=100 loadfactor=2 route=jvmroute2
        Require all granted
        ProxySet lbmethod=byrequests stickysession=JSESSIONID|jsessionid
    </Proxy>

    ProxyPass / balancer://jahiaCluster/ 
    ProxyPassReverse / balancer://jahiaCluster/
</VirtualHost>
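
In either setup, it is a good idea to validate the configuration before reloading httpd:

# Check the syntax, then reload without dropping in-flight connections
apachectl configtest && apachectl graceful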

How to add a new node to a cluster environment by cloning an existing one?

It is possible to add a new node to a cluster environment without using the installer, by cloning an existing node.

Before proceeding with this operation, ensure that the following prerequisites are met:

  • The correct version of Oracle Java is already installed
  • The default Tomcat ports are not used
  • The nodes are in the same network
  • The new node is allowed to access the Jahia database
  • Your license allows this new node with its IP address

Steps to follow:

  1. Completely stop a working node that is not the processing one (to prevent any modification of the indexes)
  2. Create an archive of the "JAHIA_HOME" folder without the datastore, transfer the archive to the new server, and uncompress it (see the sketch after this list)
  3. Modify the file JAHIA_HOME/digital-factory-config/jahia/jahia.node.properties:
    • Property "cluster.node.serverId": set it to a unique value (for example, the FQDN of the server)
    • Property "processingServer": check that it is set to "false" if there is already a processing server in your cluster environment.
  4. Mount the network shared folder related to the datastore (which corresponds to the property "jahia.jackrabbit.datastore.path" in JAHIA_HOME/digital-factory-config/jahia/jahia.properties)
  5. Start both the stopped node and the new node
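
A minimal shell sketch of steps 2 and 4, assuming JAHIA_HOME is /opt/jahia, that the datastore lives under digital-factory-data/repository/datastore, and that the new server is reachable over SSH (all paths and host names are illustrative):

# Step 2: archive JAHIA_HOME without the datastore, transfer and unpack it
cd /opt/jahia
tar --exclude='digital-factory-data/repository/datastore' -czf /tmp/jahia-node.tar.gz .
scp /tmp/jahia-node.tar.gz new-node:/tmp/
ssh new-node 'mkdir -p /opt/jahia && tar -xzf /tmp/jahia-node.tar.gz -C /opt/jahia'

# Step 4: mount the shared datastore folder on the new node (NFS assumed)
ssh new-node 'mount -t nfs storage-host:/exports/jahia-datastore /opt/jahia/digital-factory-data/repository/datastore'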

Verifications:

  • Verify that there are no errors in the logs
  • Go to the URL "JAHIA_URL/modules/tools/cluster.jsp" and verify that all cluster nodes are OK
  • Create content in Jahia and verify that it is accessible from every node

How to copy repository indexes to other cluster nodes?

This manual synchronization is not needed at runtime, but can be quite useful in the following cases:

  • A cluster node was down for quite a long time (the “cold standby” case) and its startup should be sped up by avoiding the replay of the repository changelog journal (the journal records all content modifications made on other cluster nodes, which this node needs to replay to bring its index up to date).
  • Indexes of one node are physically corrupted and need to be replaced by “healthy” indexes from another node in the cluster.
  • A full repository content re-indexing was performed on the processing node (say, during the Digital Factory upgrade process) and you would like to synchronize those indexes to the other cluster members.

Follow these steps to replicate the indexes from one node (the source) to another (the target); a shell sketch follows the list:

  1. Shut the source server down and wait for the shutdown to complete
  2. Shut the target server down
  3. Delete the indexes folders on the target server:
    <target>/digital-factory/data/repository/index
    <target>/digital-factory/data/repository/workspaces/default/index
    <target>/digital-factory/data/repository/workspaces/live/index
  4. Copy the corresponding index folders from the source server to the target. You could copy the content of <source>/digital-factory/data/repository to <target>/digital-factory/data/repository, omitting the datastore folder.
  5. Copy the file <source>/digital-factory/data/repository/revisionNode to the <target>/digital-factory/data/repository folder.
  6. You can now start the source and target nodes.
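
A minimal sketch of steps 3 to 5 using rsync, assuming the source repository folder is reachable over SSH and both installations use the same layout (paths and host name are illustrative):

SOURCE=source-node:/opt/jahia/digital-factory/data/repository
TARGET=/opt/jahia/digital-factory/data/repository

# Step 3: delete the index folders on the target
rm -rf "$TARGET/index" "$TARGET/workspaces/default/index" "$TARGET/workspaces/live/index"

# Steps 4 and 5: copy the repository content (indexes and revisionNode)
# from the source, omitting the datastore folder
rsync -a --exclude 'datastore' "$SOURCE/" "$TARGET/"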

Jahia startup options

There are several "actions" a Jahia server could be instructed to perform right on startup. The instructions are given by creating so-called marker files (it can be empty, only its name matters) on the file system, which are detected by Jahia on startup (not during runtime) and corresponding actions are performed. The marker files are "one time" instructions, a marker file is deleted after it is detected by Jahia on startup, so that the "actions" are performed once and not on consequent Jahia restarts.

The following sections describe the available markers, their corresponding actions, and possible use cases.

Indexing startup options

The following marker files instruct Jahia to perform the described actions on startup (see the example after this list). The locations may vary in your Jahia environment, but the markers should be located in the JCR repository home folder, which is configured via jahia.jackrabbit.home in your jahia.properties file and by default is located at digital-factory-data/repository:

  • <jahia.jackrabbit.home>/reindex - instructs Jahia to perform the full JCR content repository re-indexing on next startup.
  • <jahia.jackrabbit.home>/index-fix - tells Jahia to perform the JCR content indexes consistency check and repair on next startup.
  • <jahia.jackrabbit.home>/index-check - a consistency check (no repair, no changes) will be performed for JCR content indexes on next Jahia startup.
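
For example, to request a full re-indexing on the next startup with the default repository home (the relative path assumes you are in the Jahia installation folder):

touch digital-factory-data/repository/reindex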

Environment and cluster startup options

Several startup markers can help in cases such as cloning a production environment (with a later restore at another location, say a pre-production one) or using the Jahia Rolling upgrade feature.

Disable mail service option

The following marker file, when created, forces Jahia to disable the mail service (if it was active): <digital-factory-data>/disable-mail-service

This option is mainly used as part of another instruction in case of a safe clone of a Jahia environment into another location.

Reset cluster discovery info

The following marker file, when created, instructs the server to clean the cluster membership information on startup, which makes Jahia re-discover the cluster members. In effect, it does the following:

  • purges the data in the JGROUPSPING database table, which is responsible for cluster membership discovery. This prevents the restored Jahia instance from trying to connect to the "source" cluster nodes (in the case of an environment clone, usually the "source" production cluster)
  • deletes the discovery.config file (located under the <digital-factory-data>/bundles-deployed/... folder), where the clustering component (Hazelcast) stores the membership discovery information

This option is mainly used as part of another instruction in case of a safe clone of a Jahia environment into another location or rolling upgrade procedure.

The marker file for this option is: <digital-factory-data>/reset-discovery-info

Use cases for Jahia startup options

The following sections show the usage of startup options for various scenarios.

Safe environment clone (aka Safe backup restore)

This procedure is mainly used when you would like to create a copy (snapshot) of your production clustered Jahia environment and restore it at another location (say, a pre-production or test server).

The following marker is used in this case: <DX_RESTORE_DIR>/digital-factory-data/safe-env-clone. It instructs Jahia to reset the cluster discovery info and disable the mail service (see the sections above for a description of those actions).

This marker must be set on each cluster node. 
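
For example, before starting each restored node (a minimal sketch; DX_RESTORE_DIR is a placeholder for the root of the restored instance):

touch "$DX_RESTORE_DIR/digital-factory-data/safe-env-clone"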

Rolling upgrade

The procedure for the Jahia Rolling upgrade feature requires resetting the cluster discovery information for a cluster node during the upgrade.

A dedicated marker file, <digital-factory-data>/rolling-upgrade, performs that action. See Reset cluster discovery info for details.