Upgrading to jCustomer 3.x

November 19, 2025

While jCustomer 2.x was compatible with Elasticsearch 7.x, jCustomer 3.0.0 introduces support for Elasticsearch 9.x (and drops support for Elasticsearch 7.x).

This document details the migration steps from jCustomer 2.x to jCustomer 3.x and the associated Elasticsearch migration.

Two migration strategies are available:

  • Using Elasticsearch remote reindex API.
  • Using Elasticsearch snapshot-based migration.

We recommend using the Elasticsearch remote reindex API strategy.

If you are currently running jCustomer in Jahia Cloud, the migration will be handled by our teams.

Elasticsearch remote reindex API strategy

You can use the remote reindex API to upgrade directly from Elasticsearch 7 to Elasticsearch 9. This approach runs both clusters in parallel and uses Elasticsearch's remote reindex feature.

This upgrade relies on a script built specifically with jCustomer in mind; if you are sharing the Elasticsearch instance with other projects, it might need to be adjusted.

The script is available in the Unomi GitHub repository (migration_es7-es9.sh) and handles:

  • Regular indices and rollover indices with their aliases
  • ILM policies migration
  • Data reindexing from ES7 to ES9
  • Validation and comparison reporting

Prerequisites

  • bash shell
  • jq command-line JSON processor
  • curl for HTTP requests
  • Access to both ES7 (source) and ES9 (destination) clusters
  • ES9 must have reindex.remote.whitelist configured (see configuration below)

Install jq if not already installed:

# macOS
brew install jq

# Linux
apt-get install jq
# or
yum install jq

Elasticsearch 9 Remote Reindex Configuration

Before running the script, you must configure the remote reindex whitelist on your ES9 cluster. Add this to your elasticsearch.yml configuration file:

reindex.remote.whitelist: "your-es7-host:9200"
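
If you run Elasticsearch 9 with the official Docker image, the same setting can be passed as an environment variable instead of editing elasticsearch.yml (a minimal sketch; adjust the image tag, ports and host to your environment):

docker run -d --name es9 \
  -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "reindex.remote.whitelist=your-es7-host:9200" \
  docker.elastic.co/elasticsearch/elasticsearch:9.1.3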

Script Configuration

The script uses environment variables for configuration. Export variables before running the script:

export ES7_HOST="http://your-es7-host:9200"
export ES7_USER="elastic"
export ES7_HOST_FROM_ES9="http://your-es7-host-viewed-from-es9:9200"
export ES7_PASSWORD="your-es7-password"

export ES9_HOST="http://your-es9-host:9200"
export ES9_USER="elastic"
export ES9_PASSWORD="your-es9-password"

export INDEX_PREFIX="context-"
export BATCH_SIZE="1000"

Configuration Variables:

| Variable          | Description                                       | Default                |
|:------------------|:--------------------------------------------------|:-----------------------|
| ES7_HOST          | Elasticsearch 7 URL                               | http://localhost:9200  |
| ES7_HOST_FROM_ES9 | Elasticsearch 7 URL visible from Elasticsearch 9  | value of ES7_HOST      |
| ES7_USER          | ES7 username                                      | elastic                |
| ES7_PASSWORD      | ES7 password                                      | password               |
| ES9_HOST          | Elasticsearch 9 URL                               | http://localhost:9201  |
| ES9_USER          | ES9 username                                      | elastic                |
| ES9_PASSWORD      | ES9 password                                      | password               |
| INDEX_PREFIX      | Prefix for index names                            | context-               |
| BATCH_SIZE        | Reindex batch size                                | 1000                   |

Execution

Make the script executable and run it:

chmod +x migration_es7-es9.sh
./migration_es7-es9.sh

What the Script Does

  • Discovers indices matching the configured patterns on ES7
  • Collects source statistics (document count, size) for each index
  • Migrates ILM policies from ES7 to ES9 if they exist
  • Creates indices on ES9 with the same settings and mappings
  • Recreates aliases with proper write index flags for rollover indices
  • Reindexes data from ES7 to ES9 using the remote reindex API (see the sketch after this list)
  • Collects destination statistics after migration
  • Displays comparison report showing document counts and any mismatches
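
For reference, the core operation the script performs for each index is a remote reindex call similar to the following (a simplified sketch; the actual script also recreates settings, mappings, aliases and ILM policies):

curl -X POST "$ES9_HOST/_reindex?wait_for_completion=true" \
  -u "$ES9_USER:$ES9_PASSWORD" \
  -H 'Content-Type: application/json' \
  -d '{
    "source": {
      "remote": {
        "host": "'"$ES7_HOST_FROM_ES9"'",
        "username": "'"$ES7_USER"'",
        "password": "'"$ES7_PASSWORD"'"
      },
      "index": "context-profile",
      "size": 1000
    },
    "dest": { "index": "context-profile" }
  }'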

Output

The script provides detailed logging with timestamps and a final comparison report:

==========================================
MIGRATION COMPARISON REPORT
==========================================
Index                                    |    Source Docs |      Dest Docs |     Difference |     Status
-----------------------------------------+----------------+----------------+----------------+-----------
context-profile                          |          15420 |          15420 |             +0 |      ✓ OK
context-session-000001                   |           3420 |           3420 |             +0 |      ✓ OK
==========================================
✓ All indices migrated successfully!
==========================================

Elasticsearch snapshot-based migration

Although more complex, this migration strategy might be suited for environments sharing the Elasticsearch instance between multiple use-cases (other than jCustomer).

About Elasticsearch migrations

⚠️ IMPORTANT ⚠️: Make sure to review and understand elements present in this documentation before proceeding with migrating your environment.

A large portion of the migration is directly associated with upgrading Elasticsearch from v7 to v8, and then v8 to v9, an operation likely to be very closely tied to your current setup.

Before proceeding, make sure to review and understand the official Elasticsearch upgrade documentation.

The procedure detailed below focuses on a "standard" path; although likely to apply in most situations, it will need to be adjusted based on your environment and does not aim to replace the official Elasticsearch upgrade documentation.

Migration overview

The recommended approach for the migration is to adopt a Blue-green migration strategy, in which the migration is performed in a clone of your production environment. Once the migration is confirmed to be successful, traffic is redirected to the cloned environment and the previous production environment is decommissioned.

The migration steps are as follows:

  1. Create a clone of your production (Elasticsearch + Jahia + jCustomer)
  2. Upgrade Elasticsearch v7 to v8, then v8 to v9
  3. Upgrade jCustomer to 3.0.0
  4. Validate the migration
  5. Redirect traffic to your cloned environment.
  6. Migration is complete, your cloned environment is now in production.
  7. Decommission the "old" production environment.

This strategy allows for minimum downtime, but it also means that any data captured between the creation of the clone and the end of the migration (i.e. the data still going to your original environment) will be lost.

With such a migration strategy, there is no need to perform a backup: two environments will run in parallel until the very end, with the ability to roll back if needed.

Conventions

This migration procedure provides a set of operations to be executed; make sure to follow them precisely.

All values between < > must be replaced with the real values for your environment (for example: <elasticsearch-url>, <elastic-user>, ...).

Most HTTP calls below use curl, but you can use the querying tool of your choice (curl, Postman, Insomnia, ...).

Pre-conditions

Use latest versions of Jahia products

Before starting the migration, make sure to update your environment to:

  • Jahia 8.2.2.0 (or above)
  • jExperience 3.7.0 (or above, but compatible with jCustomer 2.x)
  • jCustomer 2.6.2 (or any more recent 2.x version)
  • Elasticsearch 7.17+

Upgrade from jCustomer 1.x is NOT supported; you MUST first upgrade to jCustomer 2.x.

jExperience 3.7.0 is compatible with both jCustomer 2.x and jCustomer 3.x, facilitating the migration process.

In a future release (exact timing currently unknown), compatibility with jCustomer 2.x will be dropped. In such a case, make sure to upgrade to the latest version of jExperience compatible with jCustomer 2.x before the migration, and then, once the migration to jCustomer 3.x is complete, upgrade to the latest version of jExperience compatible with jCustomer 3.x.

Check the upgrade assistant status

Using the Elasticsearch upgrade assistant, make sure there are no errors that would prevent a successful migration.

Use the following call to check the overall upgradability status (replace <kibana-url>, <kibana-user> and <kibana-password>):

curl -X GET "<kibana-url>/api/upgrade_assistant/status" \
  -u "<kibana-user>:<kibana-password>" \
  -H 'kbn-xsrf: true' \
  -H 'Accept: application/json'

Check if there are particular errors on the existing indices:

curl -X GET "<elasticsearch-url>/_migration/deprecations" \
  -u "<elastic-user>:<elastic-password>" \
  -H 'Accept: application/json' | jq .

Errors at this stage are unlikely to be related to jCustomer itself; make sure to fix them before proceeding with the migration.

Green cluster state

Do not start the migration until your cluster is reporting a GREEN health status.

Collect and validate key metrics

The migration will modify Elasticsearch indices and documents as data is transferred, but at the end of the migration you should end up with the same data you started with.

Begin by collecting metrics by performing the operations detailed in this section; you will then be asked to perform the same operations:

  • After Elasticsearch 8 migration
  • After Elasticsearch 9 migration

If all of the checks below are successful, you can consider the Elasticsearch migration is complete.

Depending on the stage at which you are performing the check, you might need to provide authentication, using -u "<elastic-user>:<elastic-password>" in the curl command.

Check that a live site is submitting events

Before beginning the migration and at the very end of the migration, perform the following quick sanity check:

  • Using a web browser, open up a live site currently using jExperience
  • In the browser console, verify the call to context.json returns HTTP code 200
  • Check if there are errors in the jCustomer logs at that point.

Note: This operation is only possible when running Elasticsearch 7 or Elasticsearch 9.
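
If you prefer a command-line check, you can replay the context.json request observed in the browser (a sketch; <context-json-url> is the URL copied from the browser's network tab):

# Expect 200 on the output line
curl -s -o /dev/null -w "%{http_code}\n" '<context-json-url>'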

Check that the cluster is Green

Execute the following call:

curl '<elasticsearch-url>/_cluster/health?pretty'

Expect status = green.

Note that if you are running a single-node Elasticsearch cluster (for example in development), the status will remain YELLOW.

Check indices and document counts

Execute the following call:

curl '<elasticsearch-url>/_cat/indices/<index-prefix>*?v'

After a migration, make sure all pre-migration indices are still present and, using the docs.count column, verify that the number of documents matches the metrics collected before starting the migration.
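
To make the before/after comparison easier, you can capture the document counts to a file before the migration and diff them afterwards (a sketch):

# Capture index names and document counts, sorted by index name
curl -s '<elasticsearch-url>/_cat/indices/<index-prefix>*?h=index,docs.count&s=index' > counts_before.txt

# After the migration, capture again and compare
curl -s '<elasticsearch-url>/_cat/indices/<index-prefix>*?h=index,docs.count&s=index' > counts_after.txt
diff counts_before.txt counts_after.txt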

Check aliases

Execute the following call:

curl '<elasticsearch-url>/_cat/aliases/<index-prefix>*?v'

After a migration, event and session indices must have the same logical aliases as before.

Mapping sanity

Execute the following call:

curl '<elasticsearch-url>/<index-name>/_mapping'

After an upgrade, you should perform a quick sanity check on the indices mappings. Mappings are partially generated dynamically based on your data, so it is impossible to detail their exact content (or to build a tool to automatically validate them).

At a minimum, check that the indices contain the "dynamic_templates" properties defined here: https://github.com/apache/unomi/tree/master/persistence-elasticsearch/core/src/main/resources/META-INF/cxs/mappings

These properties are added automatically and their presence is a good sign to indicate that all is working as expected.
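
For example, a quick way to verify that dynamic templates are present on a given index (a sketch using jq; the exact template names depend on your jCustomer version):

# Prints the dynamic_templates of the index, or null if missing
curl -s '<elasticsearch-url>/<index-name>/_mapping' | jq '.[].mappings.dynamic_templates'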

Migration to Elasticsearch 8

The first migration step consists of upgrading Elasticsearch from 7.17.x to 8.19+ (migrate to the most recent Elasticsearch 8.x version available).

Create a Snapshot on Elasticsearch 7.17

Register the snapshot repository (path must exist and be writable; use an absolute path if running outside Docker):

curl -X PUT '<elasticsearch-url>/_snapshot/snapshots_repository' \
  -u "<elastic-user>:<elastic-password>" \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "fs",
    "settings": {
      "location": "snapshots"
    }
  }'
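
Optionally, you can ask Elasticsearch to verify that the repository is readable and writable by the cluster before creating the snapshot (a quick sketch):

curl -X POST '<elasticsearch-url>/_snapshot/snapshots_repository/_verify' \
  -u "<elastic-user>:<elastic-password>"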

Create the snapshot (sample name: es7_migration_2025):

curl -X PUT '<elasticsearch-url>/_snapshot/snapshots_repository/es7_migration_2025?wait_for_completion=true' \
  -u "<elastic-user>:<elastic-password>"
  -H 'Content-Type: application/json' \

Verify the creation of the snapshot:

curl '<elasticsearch-url>/_snapshot/snapshots_repository/es7_migration_2025' \
  -u "<elastic-user>:<elastic-password>"
  -H 'Content-Type: application/json' \

You should expect a response looking like this:

{
    "snapshots": [
        {
            "snapshot": "<name-of-the-snapshot>",
            "uuid": "<dynamic-id>",
            "repository": "snapshots_repository",
            "version_id": <version-id>, // 7172900 is 7.17.29 version
            "version": "<compatibility-version>", 
            "indices": [
                "<index-prefix>-event-000001",
                "<index-prefix>-session-000001",
                "<index-prefix>-systemItems",
                "<index-prefix>-personaSession",
                "<index-prefix>-profileAlias",
                "<index-prefix>-profile",
                ...
            ],
            ...
            "state": "SUCCESS",
            <snapshot-information>
            ...
        }
    ],
    "total": 1,
    "remaining": 0
}

Finally, create an archive (such as zip) of the snapshots_repository folder.
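
For example (a sketch, assuming the repository folder is located at <absolute-path-to>/snapshots_repository; adjust the paths to your setup):

cd <absolute-path-to>
zip -r snapshots_repository.zip snapshots_repository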

Install and configure Elasticsearch 8.19+

Install an Elasticsearch 8.19+ instance from the official past releases page: https://www.elastic.co/downloads/past-releases/elasticsearch-8-19-3

In elasticsearch.yml change the following values:

path:
  repo:
    - <absolute-path-to>/snapshots_repository

xpack.security.enabled: false
xpack.security.http.ssl.enabled: false

IMPORTANT: This is only for the migration, do NOT disable security in production

Load/migrate data into Elasticsearch 8.19+

Next, unzip the snapshots_repository folder saved earlier into Elasticsearch's root folder.
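
For example (a sketch, assuming the archive created earlier is named snapshots_repository.zip and Elasticsearch 8 is installed in <es8-root-folder>):

unzip snapshots_repository.zip -d <es8-root-folder>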

Once done, pre-register the repository:

curl -X PUT '<elasticsearch-url>/_snapshot/snapshots_repository' \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "fs",
    "settings": {
      "location": "snapshots"
    }
  }'

Once done, inspect the snapshot:

curl '<elasticsearch-url>/_snapshot/snapshots_repository/es7_migration_2025' \
  -H 'Content-Type: application/json'

You should expect a response looking like this:

{
    "snapshots": [
        {
            "snapshot": "<name-of-the-snapshot>",
            "uuid": "<dynamic-id>",
            "repository": "snapshots_repository",
            "version_id": <version-id>, // 7172900 is 7.17.29 version
            "version": "<compatibility-version>", 
            "indices": [
                "<index-prefix>-event-000001",
                "<index-prefix>-session-000001",
                "<index-prefix>-systemItems",
                "<index-prefix>-personaSession",
                "<index-prefix>-profileAlias",
                "<index-prefix>-profile",
                ...
            ],
            ...
            "state": "SUCCESS",
            <snapshot-information>
            ...
        }
    ],
    "total": 1,
    "remaining": 0
}

Next, restore the indices. To avoid restoring additional data, you can provide the index prefix (<index-prefix>) you were using for jCustomer:

curl -X POST '<elasticsearch-url>/_snapshot/snapshots_repository/es7_migration_2025/_restore?wait_for_completion=true' \
  -H 'Content-Type: application/json' \
  -d '{
    "indices": "<index-prefix>-*",
    "ignore_unavailable": true,
    "include_global_state": false,
    "index_settings": {
      "index.blocks.write": false
    }
  }'

Once done, execute the following script to upgrade internal structures / mappings and reindex the data.

Create a file/script called reindex_preserve_mapping.sh with the following content:

#!/bin/bash

#set -e

if [ -z "$ES_URL" ]; then
  echo "ERROR: Environment variable ES_URL is not defined."
  echo "Example: export ES_URL=http://localhost:9200"
  exit 1
fi

if [ -z "$ES_PREFIX" ]; then
  echo "ERROR: Environment variable ES_PREFIX is not defined."
  echo "Example: export ES_PREFIX=context-"
  exit 1
fi

echo "Checking connection to Elasticsearch..."
RESPONSE=$(curl -s "$ES_URL" )
echo "Elasticsearch response: $RESPONSE"
if echo "$RESPONSE" | jq -e 'has("error")' | grep -q true; then
  echo "ERROR: Unable to connect to Elasticsearch"
  exit 1
fi
echo "Connection to Elasticsearch OK."

# Function to save aliases of an index
save_aliases() {
  local index_name=$1
  local aliases_response=$(curl -s -X GET "$ES_URL/$index_name/_alias")

  # Extract aliases for this specific index
  local index_aliases=$(echo "$aliases_response" | jq -r ".[\"$index_name\"].aliases // {} | keys[]" 2>/dev/null || echo "")

  if [ -n "$index_aliases" ] && [ "$index_aliases" != "null" ]; then
    echo "$index_aliases"
  else
    echo ""
  fi
}

# Function to restore aliases
restore_aliases() {
  local index_name=$1
  local aliases_list=$2

  if [ -n "$aliases_list" ]; then
    echo "==> Restoring aliases for $index_name"
    for alias_name in $aliases_list; do
      echo "  -> Creating alias: $alias_name"
      create_alias_result=$(curl -s -w "\n%{http_code}" -X PUT "$ES_URL/$index_name/_alias/$alias_name")
      code=$(echo "$create_alias_result" | tail -n1)
      if [ "$code" -ne 200 ] && [ "$code" -ne 201 ]; then
        echo "❌ ERROR creating alias $alias_name: $(echo "$create_alias_result" | sed '$d')"
      else
        echo "✅ Alias $alias_name created successfully"
      fi
    done
  fi
}

# Get list of indices starting with the prefix
indices=($(curl -s "$ES_URL/_cat/indices?h=index" | grep "^$ES_PREFIX"))
if [ ${#indices[@]} -eq 0 ] || [ -z "${indices[0]}" ]; then
  echo "No indices found with prefix '$ES_PREFIX'"
  exit 0
fi

for index in "${indices[@]}"; do
  echo "==> Checking existence of index $index"
  exists=$(curl -s -o /dev/null -w "%{http_code}" "$ES_URL/$index")
  if [ "$exists" -ne 200 ]; then
    echo "❌ Index $index does not exist, skipping to next one."
    continue
  fi

  echo "==> Saving mapping and settings for $index"
  mapping=$(curl -s -X GET "$ES_URL/$index/_mapping")
  settings=$(curl -s -X GET "$ES_URL/$index/_settings" | \
    jq ".[\"$index\"].settings | del(.index.uuid, .index.provided_name, .index.creation_date, .index.version, .index.frozen, .index.search, .index.routing, .index.blocks, .index.creation_date_string)")

  # Save aliases before deletion
  echo "==> Saving aliases for $index"
  saved_aliases=$(save_aliases "$index")
  if [ -n "$saved_aliases" ]; then
    echo "  -> Aliases found: $saved_aliases"
  else
    echo "  -> No aliases found"
  fi

  echo "==> Getting document count for $index"
  doc_count=$(curl -s "$ES_URL/$index/_count" | jq -r .count)

  tmp_index="${index}_tmp_reindex"

  if [ "$doc_count" -eq 0 ]; then
    echo "💡 $index is empty, will just recreate it identically"
    echo "==> Deleting $index"
    del1=$(curl -s -w "\n%{http_code}" -X DELETE "$ES_URL/$index")
    code=$(echo "$del1" | tail -n1)
    if [ "$code" -ne 200 ]; then
      echo "❌ ERROR deleting $index: $del1"
      exit 1
    fi

    echo "==> Recreating $index with original mapping/settings"
    create_final=$(curl -s -w "\n%{http_code}" -X PUT "$ES_URL/$index" -H 'Content-Type: application/json' -d "{
      \"settings\": $settings,
      \"mappings\": $(echo $mapping | jq ".\"$index\".mappings")
    }")
    body=$(echo "$create_final" | sed '$d')
    code=$(echo "$create_final" | tail -n1)
    if [ "$code" -ne 200 ] && [ "$code" -ne 201 ]; then
      echo "❌ ERROR creating $index: $body"
      exit 1
    fi

    # Restore aliases
    restore_aliases "$index" "$saved_aliases"

    echo "✅ Empty index $index recreated successfully"
  else
    echo "==> Creating temporary index $tmp_index"
    create_tmp=$(curl -s -w "\n%{http_code}" -X PUT "$ES_URL/$tmp_index" -H 'Content-Type: application/json' -d "{
      \"settings\": $settings,
      \"mappings\": $(echo $mapping | jq ".\"$index\".mappings")
    }")
    body=$(echo "$create_tmp" | sed '$d')
    code=$(echo "$create_tmp" | tail -n1)
    if [ "$code" -ne 200 ] && [ "$code" -ne 201 ]; then
      echo "❌ ERROR creating $tmp_index: $body"
      exit 1
    fi

    echo "==> Reindexing $index => $tmp_index"
    reindex1=$(curl -s -w "\n%{http_code}" -X POST "$ES_URL/_reindex" -H 'Content-Type: application/json' -d "{
      \"source\": { \"index\": \"$index\" },
      \"dest\": { \"index\": \"$tmp_index\" }
    }")
    body=$(echo "$reindex1" | sed '$d')
    code=$(echo "$reindex1" | tail -n1)
    if [ "$code" -ne 200 ] && [ "$code" -ne 201 ]; then
      echo "❌ ERROR reindexing: $body"
      exit 1
    fi

    echo "==> Waiting for reindex to complete..."
    sleep 2

    echo "==> Checking document count in $tmp_index"
    new_doc_count=$(curl -s "$ES_URL/$tmp_index/_count" | jq -r .count)
    if [ "$new_doc_count" -ne "$doc_count" ]; then
      echo "❌ ERROR: Different document count ($doc_count vs $new_doc_count)"
      exit 1
    fi

    echo "==> Deleting original index $index"
    del2=$(curl -s -w "\n%{http_code}" -X DELETE "$ES_URL/$index")
    code=$(echo "$del2" | tail -n1)
    if [ "$code" -ne 200 ]; then
      echo "❌ ERROR deleting $index: $del2"
      exit 1
    fi

    echo "==> Reindexing $tmp_index => $index"
    reindex2=$(curl -s -w "\n%{http_code}" -X POST "$ES_URL/_reindex" -H 'Content-Type: application/json' -d "{
      \"source\": { \"index\": \"$tmp_index\" },
      \"dest\": { \"index\": \"$index\" }
    }")
    body=$(echo "$reindex2" | sed '$d')
    code=$(echo "$reindex2" | tail -n1)
    if [ "$code" -ne 200 ] && [ "$code" -ne 201 ]; then
      echo "❌ ERROR final reindex: $body"
      exit 1
    fi

    echo "==> Waiting for final reindex to complete..."
    sleep 2

    echo "==> Deleting temporary index $tmp_index"
    del3=$(curl -s -w "\n%{http_code}" -X DELETE "$ES_URL/$tmp_index")
    code=$(echo "$del3" | tail -n1)
    if [ "$code" -ne 200 ]; then
      echo "❌ ERROR deleting $tmp_index: $del3"
    fi

    # Restore aliases after recreation
    restore_aliases "$index" "$saved_aliases"

    echo "✅ $index reindexed successfully ($doc_count documents)"
  fi

  echo ""
done

echo "🎉 Reindexing completed for all indices with prefix '$ES_PREFIX'"

Then execute it using the following commands:

export ES_URL=<elasticsearch-url>
export ES_PREFIX=<elasticsearch-prefix>
bash reindex_preserve_mapping.sh

Validate the migration

Now is the time to collect metrics detailed in the "Collect and validate key metrics" section.

Make sure the recorded values are identical before/after the migration.

If this is the case, you can proceed to the next steps.

Create a Snapshot on Elasticsearch 8.19+

We will now repeat the step we performed earlier, but for this new version of Elasticsearch.

Create the snapshot (sample name: es8_migration_2025):

curl -X PUT '<elasticsearch-url>/_snapshot/snapshots_repository/es8_migration_2025?wait_for_completion=true' \
  -u "<elastic-user>:<elastic-password>"
  -H 'Content-Type: application/json' \

Verify the creation of the snapshot:

curl '<elasticsearch-url>/_snapshot/snapshots_repository/es8_migration_2025' \
  -u "<elastic-user>:<elastic-password>"
  -H 'Content-Type: application/json' \

You should expect a response looking like this:

{
    "snapshots": [
        {
            "snapshot": "<name-of-the-snapshot>",
            "uuid": "<dynamic-id>",
            "repository": "snapshots_repository",
            "version_id": <version-id>, // 8190300 is 8.18.03 version
            "version": "<compatibility-version>", 
            "indices": [
                "<index-prefix>-event-000001",
                "<index-prefix>-session-000001",
                "<index-prefix>-systemItems",
                "<index-prefix>-personaSession",
                "<index-prefix>-profileAlias",
                "<index-prefix>-profile",
                ...
            ],
            ...
            "state": "SUCCESS",
            <snapshot-information>
            ...
        }
    ],
    "total": 1,
    "remaining": 0
}

Finally, create an archive (such as zip) of the snapshots_repository folder.

Migration to Elasticsearch 9

Install and configure Elasticsearch 9.1.3+

Install the latest version of Elasticsearch 9.x (9.1.3 or more recent); it can be downloaded from the official past releases page: https://www.elastic.co/downloads/past-releases/elasticsearch-9-1-3

In elasticsearch.yml change the following values:

path:
  repo:
    - <absolute-path-to>/snapshots_repository

xpack.security.enabled: false
xpack.security.http.ssl.enabled: false

IMPORTANT: This is only for the migration, do NOT disable security in production

Load/migrate data into Elasticsearch 9.1.3+

Next, unzip the snapshots_repository folder saved earlier into Elasticsearch's root folder.

Once done, pre-register the repository:

curl -X PUT '<elasticsearch-url>/_snapshot/snapshots_repository' \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "fs",
    "settings": {
      "location": "snapshots"
    }
  }'

Once done, inspect the snapshot:

curl '<elasticsearch-url>/_snapshot/snapshots_repository/es8_migration_2025' \
  -H 'Content-Type: application/json'

You should expect a response looking like this:

{
    "snapshots": [
        {
            "snapshot": "<name-of-the-snapshot>",
            "uuid": "<dynamic-id>",
            "repository": "snapshots_repository",
            "version_id": <version-id>, // 8190300 is 8.18.03 version
            "version": "<compatibility-version>", 
            "indices": [
                "<index-prefix>-event-000001",
                "<index-prefix>-session-000001",
                "<index-prefix>-systemItems",
                "<index-prefix>-personaSession",
                "<index-prefix>-profileAlias",
                "<index-prefix>-profile",
                ...
            ],
            ...
            "state": "SUCCESS",
            <snapshot-information>
            ...
        }
    ],
    "total": 1,
    "remaining": 0
}

Next, restore the indices. To avoid restoring additional data, you can provide the index prefix (<index-prefix>) you were using for jCustomer:

curl -X POST '<elasticsearch-url>/_snapshot/snapshots_repository/es8_migration_2025/_restore?wait_for_completion=true' \
  -H 'Content-Type: application/json' \
  -d '{
    "indices": "<index-prefix>-*",
    "ignore_unavailable": true,
    "include_global_state": false,
    "index_settings": {
      "index.blocks.write": false
    }
  }'

Once done, execute the following script to upgrade internal structures / mappings and reindex the data.

This is the same script as the one used for the Elasticsearch 8 migration:

export ES_URL=<elasticsearch-url>
export ES_PREFIX=<elasticsearch-prefix>
bash reindex_preserve_mapping.sh

Validate the migration

Now is the time to collect metrics detailed in the "Collect and validate key metrics" section.

Make sure the recorded values are identical before/after the migration.

If this is the case, you can proceed to the next steps.

Post-migration hardening

Once the migration is successful, make sure to re-enable security settings in Elasticsearch, in particular:

  • Re-enable xpack.security.enabled and TLS as per production standards.
    • After enabling, verify authentication/authorization is required to access Elasticsearch.
  • Create users/roles/API keys required by jCustomer.
  • Restrict snapshot repository paths appropriately.

These steps are not specifically related to the migration.
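
As a reminder, re-enabling the settings that were explicitly disabled for the migration amounts to reverting the elasticsearch.yml changes made earlier (a minimal sketch; a production setup will typically require additional TLS and authentication configuration):

xpack.security.enabled: true
xpack.security.http.ssl.enabled: true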

Aside from settings and permissions, make sure to remove obsolete archived snapshot directories. You can also delete snapshots from the repository using the following command:

curl -X DELETE '<elasticsearch-url>/_snapshot/snapshots_repository/es8_migration_2025'

Upgrade jCustomer

Now that your migration to Elasticsearch 9 is complete, you need to upgrade to jCustomer 3.0.0+.

JDK 17

While jCustomer 2.x supported JDK 11, jCustomer 3.x requires OpenJDK/Oracle JDK 17.

If you are not running the jCustomer Docker images (which ship with OpenJDK 17), make sure to upgrade your system to JDK 17.
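
You can quickly confirm the JDK version on the host running jCustomer:

# Should report a 17.x version
java -version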

Deprecated properties

A few properties have been deprecated; you can remove them from the jCustomer configuration in the custom.system.properties file.

org.apache.unomi.elasticsearch.monthlyIndex.nbShards
org.apache.unomi.elasticsearch.monthlyIndex.nbReplicas
org.apache.unomi.elasticsearch.monthlyIndex.indexMappingTotalFieldsLimit
org.apache.unomi.elasticsearch.monthlyIndex.indexMaxDocValueFieldsSearch
org.apache.unomi.elasticsearch.monthlyIndex.itemsMonthlyIndexedOverride

These properties have been replaced by rollover-based indexing configurations (see the dedicated section in the configuration documentation); you might need to adjust these according to your previous configuration.

In jCustomer 2.x the monthly index settings were only used if rollover settings were absent.

Update value formats

The format of some Elasticsearch-related properties has been updated; the new values no longer specify the unit.

Before:

org.apache.unomi.elasticsearch.bulkProcessor.bulkSize=5MB
org.apache.unomi.elasticsearch.bulkProcessor.flushInterval=<yourvalue>s

After:

org.apache.unomi.elasticsearch.bulkProcessor.bulkSize=5
org.apache.unomi.elasticsearch.bulkProcessor.flushInterval=<yourvalue>

Adjust Elasticsearch compatibility range

jCustomer 3.0.0 requires Elasticsearch 9+; update the following values:

minimalElasticsearchVersion=9.0.3
maximalElasticsearchVersion=10.0.0

Make sure the Elasticsearch instance jCustomer is pointing to was indeed updated to a version matching the range above.
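
A quick way to confirm the version of the Elasticsearch instance jCustomer points to (a sketch using jq):

# Should print a 9.x version number
curl -s '<elasticsearch-url>' | jq -r '.version.number'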

Validate the new environment

The migration is almost complete; you should now validate that your new jCustomer environment is operating properly.

You should have done this as part of the pre-conditions, but if not, make sure to update jExperience to 3.7.0.

Begin by making sure there are no errors in the jCustomer logs, in particular errors related to the connection to Elasticsearch.

Finally, make sure your environment is working properly by visiting a live site and:

  • Open up the browser console and make sure the call to context.json returns HTTP code 200
  • Check jCustomer logs to ensure there are no new errors.

Redirect production traffic to the new environment

If you did not encounter issues in the previous steps, you can now redirect your production traffic to the new environment.

Keep the old environment available as a backup for a couple of days, as it will be your only means of performing a rollback should you notice a major issue later on.

Migration is now complete 🎉