Migrate from Replicated to another runtime
This topic describes how to migrate Terraform Enterprise instances from Replicated to another runtime environment.
Overview
The target runtime must run the same version of Terraform Enterprise as your Replicated installation. You cannot combine an application upgrade with a migration, so upgrade your current installation of Terraform Enterprise to the latest version before proceeding with your migration. The same general procedure applies when migrating from Replicated to any other supported runtime:
- Back up your Terraform Enterprise data tier, which includes the PostgreSQL database and object storage.
- Upgrade your existing Replicated installation to v202309-1 or later.
- Deploy the same version of Terraform Enterprise to your new runtime.
- Migrate your data tier to your new installation.
The migration paths are specific to the operational mode of your existing Terraform Enterprise installation and the runtime environment you want to migrate to.
Docker:
- Refer to the Migrate Terraform Enterprise from Replicated to Docker Engine video for a comprehensive walkthrough that describes how to migrate to a Docker runtime.
- Migrating using mounted disk operational mode to Docker runtime
- Migrating using external services operational mode to Docker runtime
Kubernetes
Podman
- The procedure for migrating to Podman is the same for disk and external operational modes.
Nomad
Prerequisites
Back up data tier
We always recommend backing up your data tier before conducting maintenance or upgrade operations. The backup method will depend on your existing installation.
- For mounted disk installations, refer to Backup a Mounted Disk Deployment in the Terraform Enterprise backup tutorial for recommended patterns.
- For external services or active-active installations, refer to Object Store and Database in the Terraform Enterprise backup tutorial for recommended patterns.
In both cases, we recommend backing up the environment variables of the Terraform Enterprise Docker container so that you can refer to them when configuring your target runtime.
$ docker exec terraform-enterprise env > env.txt
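We also recommend exporting the Replicated application configuration, because later steps source values such as the encryption password and mounted disk path from this backup.
$ replicatedctl app-config export > replicated-app-config.backup.json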
Upgrade your existing Terraform Enterprise installation
- Upgrade your Replicated-hosted Terraform Enterprise installation to v202309-1 or later.
- Run the following commands to validate that the upgrade completed successfully:
- Validate the release version.
$ replicatedctl app inspect
- Check that Replicated has started.
$ replicatedctl app status
- Check that the Docker containers are up.
$ sudo docker ps
- Do a health check.
$ tfe-admin health-check
- Do a terraform plan and a terraform apply. See the example after this list.
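A minimal CLI smoke test from a workstation configured to use the instance might look like the following. This is a sketch; it assumes an existing workspace and valid credentials for the instance.
$ terraform login <TFE_HOSTNAME>
$ terraform init
$ terraform plan
$ terraform apply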
Mounted disk to Docker
Before proceeding with this migration guide, make sure you meet all the prerequisites and that a Flexible Deployment Options license file has been provided by your HashiCorp business partner. Do not proceed with this guide if any of the prerequisites are not fulfilled.
If at any point you need to revert your settings, see the rollback steps.
Migration steps
Step 1: Backup your data tier
We always recommend backing up your data tier before conducting any maintenance or migration. Refer to Back up data tier in the prerequisites section for detailed guidance on the backup process.
Step 2: Upgrade Terraform Enterprise
Upgrade your Replicated-hosted Terraform Enterprise installation to v202309-1 or later. Refer to Upgrading for instructions.
Step 3: Verify Docker Engine version
Docker should already be installed for you because Replicated is installed on this host. You may still need to install Docker Compose if you have not already. Refer to Docker Engine Requirements for more information.
If you cannot upgrade Docker Engine or install a supported Docker configuration, install Terraform Enterprise on a different runtime and migrate to it instead.
Step 4: Generate Docker Compose configuration
Docker Compose configurations generated on releases older than v202410-1 may contain formatting errors. Terraform Enterprise may format passwords or secrets that contain special characters incorrectly if your configuration contains double quotation marks instead of single quotation marks.
To address this issue, you can manually replace double quotation marks with single quotation marks around secrets in your compose configuration.
We fixed this issue in v202410-1. Refer to the v202410-1 release notes for additional information.
To convert your existing configuration into a Docker Compose format on your current Terraform Enterprise installation (Replicated), follow these steps:
- Create a /etc/terraform-enterprise directory.
- Generate the Docker Compose configuration and save it to /etc/terraform-enterprise/docker-compose.yml using the following command:
$ sudo docker exec terraform-enterprise tfectl app config --format docker > /etc/terraform-enterprise/docker-compose.yml
Your saved output should resemble the configuration file in the example disk mode configuration. Incorporate the saved output values into this configuration file to ensure correctness.
Terraform Enterprise may generate a configuration that contains errors. Manually review the configuration to verify that it is complete and accurate.
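One way to catch formatting problems early (assuming Docker Compose v2 is available as docker compose) is to render the generated file before using it:
$ docker compose -f /etc/terraform-enterprise/docker-compose.yml config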
Step 5: Prepare the host and install
To make migration smoother and faster, we recommend using the same host as your current Replicated instance.
Note: If you want to use a separate host for your new Docker-based Terraform Enterprise, we can provide alternative steps. Contact HashiCorp support for assistance.
Review the configuration file from the previous step. Update any values as needed before moving on to the next step. Pay special attention to placeholder values enclosed in <>, such as image and TFE_LICENSE, and replace them with your actual values. Alternatively, you can use the mounted disk example as a starting point and adjust it to fit your environment.
Note that the volumes type fields are set to bind for the tfe service. You can source many of your required values from the Replicated application configuration backup you created earlier. For a comprehensive list of configuration settings, refer to the Configuration Reference.
Log in to the Terraform Enterprise container image registry.
$ cat <PATH_TO_HASHICORP_LICENSE_FILE> | docker login --username terraform images.releases.hashicorp.com --password-stdin
Pull the Terraform Enterprise image from the registry.
$ docker pull images.releases.hashicorp.com/hashicorp/terraform-enterprise:<vYYYYMM-#>
Create a new systemd service for Terraform Enterprise by creating a /etc/systemd/system/terraform-enterprise.service file with the contents from the Docker installation guide; a minimal sketch follows this list. Update this systemd unit file if any of the following are true:
- The name or the path of the docker-compose.yml file has changed.
- The path to the Docker binary and the command to invoke docker compose is different because Docker Engine 1.13.1 is installed.
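The following is a minimal sketch of such a unit file, not the canonical contents from the Docker installation guide; it assumes Docker Compose v2 invoked as docker compose and a compose file at /etc/terraform-enterprise/docker-compose.yml.
$ sudo tee /etc/systemd/system/terraform-enterprise.service > /dev/null <<'EOF'
[Unit]
Description=Terraform Enterprise (Docker Compose)
Requires=docker.service
After=docker.service network.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/etc/terraform-enterprise
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
EOF
After creating or editing the unit file, run sudo systemctl daemon-reload so that systemd picks up the changes.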
Step 6: Stop Replicated and migrate
Next, you can stop Replicated and migrate your Terraform Enterprise installation to Docker.
SSH into your Terraform Enterprise (Replicated) instance.
Stop Terraform Enterprise (Replicated). Ensure the application has fully stopped before proceeding.
$ replicatedctl app stop
Back up your data. If you have already backed up your data, proceed to the next step. If not, or if you want another backup for safekeeping, do the following.
Retrieve your mounted disk path.
$ replicatedctl app-config export --template '{{ .disk_path.Value }}' | tr -d '\r'
We will refer to this path as ${DISK_PATH} going forward. Next, archive your mounted disk data.
$ tar -zcvf data.tar.gz -C ${DISK_PATH} aux postgres
Copy your data.tar.gz archive to a safe place.
Start (and enable on start up) the Docker Compose based Terraform Enterprise.
$ systemctl enable --now terraform-enterprise
Check the status of your service with systemctl status terraform-enterprise, or use docker ps to find your container's name and then run docker logs [name]. You can also run curl https://[hostname]/_health_check to check the health check endpoint. Terraform Enterprise should now be running using Docker Compose, so your Replicated services can be shut down and disabled.
Shut down Replicated.
$ systemctl disable --now replicated replicated-ui replicated-operator
Next, stop and remove the unnecessary Replicated containers.
$ docker stop replicated-premkit
$ docker stop replicated-statsd
$ docker rm -f replicated replicated-ui replicated-operator replicated-premkit replicated-statsd retraced-api retraced-processor retraced-cron retraced-nsqd retraced-postgres
Note: Some of the docker stop commands may return “Container not found” errors because not every Replicated install has every container.
Step 7: Validate migration success
Finally, test that your new Terraform Enterprise installation works properly. If you have an existing suite of release acceptance tests, you can use those instead of doing the following steps. We recommend testing capabilities that you use in production, as you would for a Terraform Enterprise upgrade. For example, if you use Sentinel or run tasks in production, we recommend testing a run that includes these integrations in a lower environment before deploying to production.
- Execute a plan and apply it from the CLI, testing several subsystems and ensuring that proxies are correctly configured, certificates are properly configured, and the instance can download Terraform binaries and execute runs.
- Execute a plan and apply it from VCS, testing that webhooks are working and certificates are in place on both sides.
- Publish a new module to the private module registry.
- Execute a plan and apply it with a module or provider from the private registry to ensure the registry is functioning.
- (Optional) Execute a plan and apply it with Sentinel and cost estimation, ensuring run tasks and cost estimation work.
- (Optional) Execute a plan and apply it on a workspace that uses an agent pool, testing that external agents can connect and run jobs successfully.
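As a quick end-to-end check (assuming your container is named terraform-enterprise), you can also run the container's built-in health check and query the health check endpoint:
$ docker exec terraform-enterprise tfe-health-check-status
$ curl -k https://[hostname]/_health_check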
Mounted Disk rollback steps
In the unlikely event you encounter issues and need to roll back, you can revert back to Terraform Enterprise (Replicated) using the following commands.
- Stop and disable Terraform Enterprise (Docker).
$ systemctl disable --now terraform-enterprise
- Start and enable Replicated.
$ systemctl enable --now replicated replicated-ui replicated-operator
- Start Terraform Enterprise (Replicated).
$ replicatedctl app start
External services to Docker
Before proceeding with this migration guide, make sure you meet all the prerequisites and that a Flexible Deployment Options license file has been provided by your HashiCorp business partner. Do not proceed with this guide if any of the prerequisites are not fulfilled.
When Terraform Enterprise is operating in active-active mode, you can scale directly up to your target number of nodes after the migration is complete. You do not need to scale to one node before scaling to all nodes.
If at any point you need to revert your settings, see the rollback steps.
Migration steps
Step 1: Backup your data tier
We always recommend backing up your data tier before conducting any maintenance or migration. Refer to Back up data tier in the prerequisites section for detailed guidance on the backup process.
Step 2: Upgrade Terraform Enterprise
Upgrade your Replicated-hosted Terraform Enterprise installation to v202309-1 or later. Refer to Upgrading for instructions.
Step 3: Verify Docker Engine version
Docker should already be installed for you because Replicated is installed on this host. You may still need to install Docker Compose if you have not already. See the Docker Engine Requirements for more information.
If you cannot upgrade Docker Engine or install a supported Docker configuration, install Terraform Enterprise on a different runtime and migrate to it instead.
Step 4: Generate Docker Compose configuration
Docker Compose configurations generated on releases older than v202410-1 may contain formatting errors. Terraform Enterprise may format passwords or secrets that contain special characters incorrectly if your configuration contains double quotation marks instead of single quotation marks.
To address this issue, you can manually replace double quotation marks with single quotation marks around secrets in your compose configuration.
We fixed this issue in v202410-1. Refer to the v202410-1 release notes for additional information.
To convert your existing configuration into a Docker Compose format on your current Terraform Enterprise installation (Replicated), follow these steps:
Create a directory at /etc/terraform-enterprise.
Generate the Docker Compose configuration and save it to /etc/terraform-enterprise/docker-compose.yml using the following command:
$ sudo docker exec terraform-enterprise tfectl app config --format docker > /etc/terraform-enterprise/docker-compose.yml
Your saved output should resemble the configuration file in our external services example. Incorporate the saved output values into this configuration file to ensure correctness.
Terraform Enterprise may generate a configuration that contains errors. Manually review the configuration to verify that it is complete and accurate.
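As a quick sanity check, you can confirm that the generated file carries your external services settings. The exact variable names depend on your storage type; refer to the Configuration Reference.
$ grep -E 'TFE_DATABASE_|TFE_OBJECT_STORAGE_' /etc/terraform-enterprise/docker-compose.yml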
Step 5: Prepare the host and install
To make migration smoother and faster, we recommend using the same host as your current Replicated instance.
Note: If you want to use a separate host for your new Docker-based Terraform Enterprise, we can provide alternative steps. Contact HashiCorp support for assistance.
Complete the instructions for creating and applying TLS certificates. Refer to Create TLS certificates and Set up installation folders and files for instructions.
Review the configuration file from the previous step. Update any values as needed before moving on to the next step. Pay special attention to placeholder values enclosed in <>, such as image and TFE_LICENSE, and replace them with your actual values. Alternatively, you can use the external services installation instructions for Docker deployments as a starting point and adjust them to fit your environment.
Update, verify, and remove any configuration variables that do not match your current Terraform Enterprise deployment. For a comprehensive list of available configuration settings, refer to the Configuration Reference.
To quickly identify many of the required configuration values, inspect the existing Terraform Enterprise application using the replicatedctl app-config export command.
Log into the registry and then use the docker command to pull the terraform-enterprise image version.
$ cat <PATH_TO_HASHICORP_LICENSE_FILE> | docker login --username terraform images.releases.hashicorp.com --password-stdin
When prompted for a password, use the contents of your HashiCorp license file as your password.
$ docker pull images.releases.hashicorp.com/hashicorp/terraform-enterprise:<vYYYYMM-#>
Optionally create a new systemd service for Terraform Enterprise. Refer to Manage the Docker service for instructions.
Note: If you want to use a separate host for your new Docker-based Terraform Enterprise deployment, contact HashiCorp support for assistance.
Step 6: Stop Replicated and migrate
Next, you can stop Replicated and migrate your Terraform Enterprise installation to Docker.
SSH into your Terraform Enterprise (Replicated) instance.
Stop Terraform Enterprise (Replicated). Ensure the application has fully stopped before proceeding.
$ replicatedctl app stop
Start (and enable on start up) the Docker Compose based Terraform Enterprise.
$ systemctl enable --now terraform-enterprise
Check the status of your service with systemctl status terraform-enterprise, or use docker ps to find your container's name and then run docker logs [name]. You can also run curl https://[hostname]/_health_check to check the health check endpoint. Terraform Enterprise should now be running using Docker Compose, so your Replicated services can be shut down and disabled.
Shut down Replicated.
$ systemctl disable --now replicated replicated-ui replicated-operator
Next, stop and remove the unnecessary Replicated containers.
$ docker stop replicated-premkit
$ docker stop replicated-statsd
$ docker rm -f replicated replicated-ui replicated-operator replicated-premkit replicated-statsd retraced-api retraced-processor retraced-cron retraced-nsqd retraced-postgres
Note: Some of the docker stop commands may return “Container not found” errors because not every Replicated install has every container.
Step 7: Validate migration success
Finally, test that your new Terraform Enterprise installation works properly. If you have an existing suite of release acceptance tests, execute those instead of doing the following steps. We recommend testing capabilities that you use in production, as you would for a Terraform Enterprise upgrade. For example, if you use Sentinel or run tasks in production, we recommend testing a run that includes these integrations in a lower environment before deploying to production.
- Execute a plan and apply it from the CLI, testing several subsystems and ensuring that proxies are correctly configured, certificates are properly configured, and the instance can download Terraform binaries and execute runs.
- Execute a plan and apply it from VCS, testing that webhooks are working and certificates are in place on both sides.
- Publish a new module to the private module registry.
- Execute a plan and apply it with a module or provider from the private registry to ensure the registry is functioning.
- (Optional) Execute a plan and apply it with Sentinel and cost estimation, ensuring run tasks and cost estimation work.
- (Optional) Execute a plan and apply it on a workspace that uses an agent pool, testing that external agents can connect and run jobs successfully.
External Services rollback steps
In the unlikely event you encounter issues and need to roll back, you can revert back to Terraform Enterprise (Replicated) using the following commands.
- Stop and disable Terraform Enterprise (Docker).
$ systemctl disable --now terraform-enterprise
- Start and enable Replicated.
$ systemctl enable --now replicated replicated-ui replicated-operator
- Start Terraform Enterprise (Replicated).
$ replicatedctl app start
Mounted disk to Cloud-managed Kubernetes
You must provide an external PostgreSQL database server, external object storage, and external Redis storage. Refer to the prerequisites for deploying to Kubernetes for additional information.
If you currently use the mounted disk operational mode for Terraform Enterprise on Replicated, you do not meet the above requirements. You must first migrate to external services mode, and then follow the external services to Kubernetes migration guide as well as deploy an external Redis server. At a high level, this process involves:
- Backing up your data.
- Restoring your data to external services.
- Testing that the external services migration succeeded.
- Deploying an external Redis server.
- Following the guide for External Services to Kubernetes migration.
Contact your HashiCorp account representative or HashiCorp support if you have additional questions.
External services or Active/Active to Cloud-managed Kubernetes
Redis is a required service for running Terraform Enterprise in Kubernetes. If your deployment is operating in external mode, you must deploy an external Redis server.
If you are currently operating Terraform Enterprise in active-active mode, then you already have all required service dependencies for migrating to Kubernetes. Refer to the prerequisites for deploying to Kubernetes for additional information.
Before proceeding with this migration guide, make sure you meet all the prerequisites and that a Flexible Deployment Options license file has been provided by your HashiCorp business partner. Do not proceed with this guide if any of the prerequisites are not fulfilled.
When Terraform Enterprise is operating in active-active mode, you can scale directly up to your target number of nodes after the migration is complete. You do not need to scale to one node before scaling to all nodes.
If at any point you need to revert your settings, see the rollback steps.
Migration steps
Step 1: Backup your data tier
We always recommend backing up your data tier before conducting any maintenance or migration. Refer to Back up data tier in the prerequisites section for detailed guidance on the backup process.
Step 2: Upgrade Terraform Enterprise
Upgrade your Replicated-hosted Terraform Enterprise installation to v202309-1 or later. Refer to Upgrading for instructions.
Step 3: Prepare the custom Helm Values file for Terraform Enterprise
On Terraform Enterprise (Replicated), view existing configuration:
$ replicatedctl app-config export
Create a custom Helm Values file, for example overrides.yaml, to override the default values in the Terraform Enterprise Helm chart.
In the env.secrets and env.variables sections of the overrides values file, use the external services credentials from the Replicated installation for the Kubernetes installation. Specifically:
- The TFE_OBJECT_STORAGE_TYPE and TFE_OBJECT_STORAGE_* variables should specify the object storage type and the container or bucket credentials from your Replicated installation.
- The TFE_DATABASE_* variables should specify database credentials from the Replicated installation.
- The TFE_REDIS_* values on the Helm chart should specify the same credentials from the external Redis in your Replicated installation.
- If there is an external Vault, the TFE_VAULT_* values on the Helm chart should specify the same credentials from the external Vault in your Replicated installation.
- The TFE_ENCRYPTION_PASSWORD value should match the Replicated installation value. You can get this from your Replicated instance over SSH by running the following command:
$ replicatedctl app-config export --template '{{ .enc_password.Value }}'
Refer to the Replicated to flexible deployments configuration mapping for more information on how the Replicated configuration maps to the variables and secrets on Terraform Enterprise Helm chart.
Refer to the example Kubernetes configuration for additional reference information about cloud-specific override values for the Helm deployment.
Step 4: Migrate to Kubernetes
Stop your Replicated installation by executing the following command:
$ replicatedctl app stop
Wait for the application to stop:
$ replicatedctl app status
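Then install Terraform Enterprise into your cluster using the Helm chart and the overrides file from the previous step. The following is a sketch that assumes the HashiCorp Helm repository and a release named terraform-enterprise, matching the rollback steps below; refer to the Kubernetes installation documentation for complete instructions.
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm install terraform-enterprise hashicorp/terraform-enterprise \
    --namespace terraform-enterprise --create-namespace \
    --values overrides.yaml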
Step 5: Validate migration success
Finally, test that your new Terraform Enterprise installation works properly. If you have an existing suite of release acceptance tests, you can use those instead of doing the following steps. We recommend testing capabilities that you use in production, as you would for a Terraform Enterprise upgrade. For example, if you use Sentinel or run tasks in production, we recommend testing a run that includes these integrations in a lower environment before deploying to production. You should be able to log in to your new Terraform Enterprise installation with the credentials previously used for your Replicated installation.
- Execute a plan and apply it from the CLI, testing several subsystems and ensuring that proxies are correctly configured, certificates are properly configured, and the instance can download Terraform binaries and execute runs.
- Execute a plan and apply it from VCS, testing that webhooks are working and certificates are in place on both sides.
- Publish a new module to the private module registry.
- Execute a plan and apply it with a module or provider from the private registry to ensure the registry is functioning.
- (Optional) Execute a plan and apply it with Sentinel and cost estimation, ensuring run tasks and cost estimation work.
- (Optional) Execute a plan and apply it on a workspace that uses an agent pool, testing that external agents can connect and run jobs successfully.
Kubernetes rollback steps
In the unlikely event you encounter issues that cannot be worked around, you can roll back to Terraform Enterprise (Replicated).
If it is possible to exec into the pods, run the node drain command to stop Terraform Enterprise from executing further instructions.
$ tfectl node drain --all
Uninstall the deployment.
$ helm uninstall terraform-enterprise
Restart Terraform Enterprise on Replicated using the same external services.
$ replicatedctl app start
Migrate to Podman
Before proceeding with this migration guide, make sure you meet all the prerequisites and that a Flexible Deployment Options license file has been provided by your HashiCorp business partner. Do not proceed with this guide if any of the prerequisites are not fulfilled.
Complete the following steps to migrate from Replicated to Podman.
The minimum Terraform Enterprise version necessary for Podman is v202404-1.
When Terraform Enterprise is operating in active-active mode, you can scale directly up to your target number of nodes after the migration is complete. You do not need to scale to one node before scaling to all nodes.
Step 1: Prepare the host
We recommend deploying Terraform Enterprise to the same host as your current Replicated instance.
Contact HashiCorp support for assistance migrating your Terraform Enterprise installation to a separate host.
We recommend reusing your Replicated certificate to minimize the migration's effect on your other stack components.
Create a directory with the following:
- TLS certificate (cert.pem)
- TLS private key (key.pem)
- CA certificates bundle (bundle.pem)
If you do not have a CA certificates bundle, place your TLS certificate (cert.pem) inside bundle.pem instead. Add your certificates to a folder on your host.
If you cannot access your certificate, key, or bundle file, you can retrieve them from the Replicated Terraform Enterprise container. Run the following command to list the certificate paths in the container:
$ docker exec terraform-enterprise tfectl app config --unredacted | jq '{cert: .tls.cert_file, key: .tls.key_file, bundle: .tls.ca_bundle_file}'
Depending on your setup, the file paths may differ from the following example output:
{
  "cert": "/etc/ssl/private/terraform-enterprise/cert.pem",
  "key": "/etc/ssl/private/terraform-enterprise/key.pem",
  "bundle": "/etc/ssl/private/terraform-enterprise/bundle.pem"
}
You can then copy the files from the container into the host.
$ docker cp terraform-enterprise:/etc/ssl/private/terraform-enterprise/cert.pem <PATH_TO_CERTS_ON_HOST>/cert.pem
$ docker cp terraform-enterprise:/etc/ssl/private/terraform-enterprise/key.pem <PATH_TO_CERTS_ON_HOST>/key.pem
$ docker cp terraform-enterprise:/etc/ssl/private/terraform-enterprise/bundle.pem <PATH_TO_CERTS_ON_HOST>/bundle.pem
Next, back up your Replicated configuration. Your Replicated configuration contains necessary information, such as the <TFE_ENCRYPTION_PASSWORD> as enc_password and the <MOUNTED_DISK_PATH> as disk_path.
$ replicatedctl app-config export > replicated-app-config.backup.json
Create a YAML file based on the template for your current operational mode:
- Mounted Disk operational mode Kubernetes YAML example.
- External operational mode Kubernetes YAML example.
- Active/Active operational mode Kubernetes YAML example.
Replace the values enclosed in <> with your installation's values. For example, set TFE_HOSTNAME to the DNS hostname you use to access Terraform Enterprise.
Step 2: Stop Terraform Enterprise and remove Replicated
Replicated runs using docker, while Podman uses podman-docker. Installing Podman removes docker, which is why we recommend backing up your data before stopping your Terraform Enterprise instance. Ensure you have backed up your data and Replicated configuration before proceeding.
Stop Terraform Enterprise (Replicated). Ensure the application has fully stopped before proceeding.
$ replicatedctl app stop
Shut down Replicated.
$ systemctl disable --now replicated replicated-ui replicated-operator
Next, stop and clean up your unnecessary Replicated containers.
$ docker stop replicated-premkit
$ docker stop replicated-statsd
$ docker rm -f replicated replicated-ui replicated-operator replicated-premkit replicated-statsd retraced-api retraced-processor retraced-cron retraced-nsqd retraced-postgres
Step 3: Install Podman
Verify that you have met the prerequisites for deploying to Podman before installing Terraform Enterprise on Podman. Follow the Podman installation guide.
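For example, on a RHEL-based host the installation might look like the following. This is a sketch that mirrors the packages removed in the rollback steps later in this section; package names can vary by distribution and version.
$ sudo dnf module install -y container-tools
$ sudo dnf install -y podman-docker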
Step 4: Download and install image
Log into the Terraform Enterprise container image registry using terraform as the username and your HashiCorp Terraform Enterprise license as the password:
$ echo "<HASHICORP_LICENSE>" | podman login --username terraform images.releases.hashicorp.com --password-stdin
Pull the Terraform Enterprise image from the registry.
$ podman pull images.releases.hashicorp.com/hashicorp/terraform-enterprise:<vYYYYMM-#>
Step 5: Start a Terraform Enterprise pod
Create a Terraform Enterprise pod by running the following command:
$ podman kube play <path_to_YAML_file>
In a separate terminal session, you can monitor the logs by running the following command:
$ podman logs -f <container_name>
Monitor the health of the application until it starts reporting healthy with the following command:
$ podman exec <container_name> tfe-health-check-status
Step 6: Validate migration success
Complete the following steps to verify that your new Terraform Enterprise installation works as expected. Alternatively, you can execute your existing suite of release acceptance tests. We recommend testing capabilities that you use in production, as you would for a Terraform Enterprise upgrade. For example, if you use Sentinel or run tasks in production, we recommend testing a run that includes these integrations in a lower environment before deploying to production.
- Execute a plan and apply it from the CLI to test several subsystems. This step ensures that proxies are correctly configured, certificates are properly configured, and that the instance can download Terraform binaries and execute runs.
- Execute a plan and apply it from version control to test that webhooks are working and certificates are in place on both sides.
- Publish a new module to the private module registry.
- Execute a plan and apply it with a module or provider from the private registry to ensure the registry is functioning.
- (Optional) Execute a plan and apply it with Sentinel and cost estimation policies enabled. This step ensures that run tasks and cost estimation function as expected.
- (Optional) Execute a plan and apply it on a workspace that uses an agent pool to verify that external agents can connect and run jobs successfully.
Podman rollback steps
Complete the following steps to revert to a Replicated deployment.
Stop Terraform Enterprise on Podman.
$ podman kube down <path_to_YAML_file>
Remove Podman.
$ dnf module remove -y container-tools
$ dnf remove -y podman-docker
Install Terraform Enterprise on Replicated.
If available, you can reuse the instance initialization script to reinstall Terraform Enterprise on Replicated. Otherwise, refer to the Replicated installation guide.
Mounted disk to Nomad
You must provide an external PostgreSQL database server, external object storage, and external Redis storage. Refer to the prerequisites for deploying to Nomad for additional information.
You must complete additional steps to migrate a Terraform Enterprise deployment on Replicated in disk mode:
- Migrate your Replicated deployment to external mode.
- Verify that the migration succeeded.
- Deploy an external Redis server.
You can then complete the steps for migrating to Nomad in external mode:
- Back up your data. Refer to Backup a Mounted Disk Deployment for instructions.
- Restore your data to external services. Refer to the Terraform Enterprise recovery and restore - recommended pattern tutorial for instructions.
- Verify that the external services migration succeeded.
- Complete the steps for migrating to Nomad in external mode. Refer to External Services to Nomad migration for instructions.
Contact your HashiCorp account representative or HashiCorp support if you have additional questions.
External services or Active/Active to Nomad
Redis is required to run Terraform Enterprise in Nomad. If you are migrating a Replicated deployment in external operational mode, you need to deploy an external Redis server.
If you are migrating a Replicated deployment in active-active operational mode, you should already have all the required service dependencies. Refer to the prerequisites for deploying to Nomad for additional information.
Before proceeding, verify that you meet the prerequisites and that you have a Terraform Enterprise license file. Do not continue if any of the prerequisites are not fulfilled.
If you need to revert at any point, refer to Nomad rollback steps for instructions.
Migration steps
Complete the following steps to perform the migration.
Step 1: Backup your data tier
Back up your data tier before conducting any maintenance or migration. Refer to Back up data tier for instructions.
Step 2: Upgrade to a compatible version of Terraform Enterprise
The existing version of Terraform Enterprise must be able to run on non-Replicated runtimes. Refer to Upgrade existing Terraform Enterprise installation to a compatible version for upgrade instructions.
Step 3: Prepare the Nomad job file for Terraform Enterprise
Run the following command to view existing configuration:
$ replicatedctl app-config export
Create a Nomad job file for Terraform Enterprise. Refer to Configure Terraform Enterprise Nomad job specification for additional information.
In the Terraform Enterprise Nomad job specification, specify the following values:
- The TFE_OBJECT_STORAGE_TYPE and TFE_OBJECT_STORAGE_* variables should specify the object storage type and the container or bucket credentials from your Replicated installation.
- The TFE_DATABASE_* variables should specify database credentials from the Replicated installation.
- The TFE_REDIS_* values should specify the same credentials from the external Redis in your Replicated installation.
- If Terraform Enterprise is connected to an external Vault server, the TFE_VAULT_* values should specify the same credentials from the external Vault in your Replicated installation.
- The TFE_ENCRYPTION_PASSWORD value should match the Replicated installation value. You can get this from your Replicated instance by connecting to the instance over SSH and running the following command:
$ replicatedctl app-config export --template '{{ .enc_password.Value }}'
Refer to the Replicated to flexible deployments configuration mapping for details about the configurations.
Step 4: Migrate to Nomad
When Terraform Enterprise is operating in active-active mode, you can scale directly up to your target number of nodes after the migration is complete. You do not need to scale to one node before scaling to all nodes.
Stop your Replicated installation by executing the following command:
$ replicatedctl app stop
Wait for the application to stop:
$ replicatedctl app status
Install Terraform Enterprise on Nomad. Refer to Deploy Terraform Enterprise to Nomad for instructions.
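Submitting the job typically looks like the following. This is a sketch; the namespace and job file path are placeholders for values from your environment.
$ nomad job run -namespace=<namespace> <path to Terraform Enterprise job file>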
Step 5: Validate migration success
Verify that your new Terraform Enterprise installation works properly. If you have an existing suite of release acceptance tests, you can use them instead of completing the following steps. We recommend testing capabilities that you use in production, as you would for a Terraform Enterprise upgrade. For example, if you use Sentinel or run tasks in production, we recommend testing a run that includes these integrations in a lower environment before deploying to production. You should be able to log in to your new Terraform Enterprise installation with the credentials previously used for your Replicated installation.
- Execute a plan and apply it from the CLI, testing several subsystems and ensuring that proxies are correctly configured, certificates are properly configured, and the instance can download Terraform binaries and execute runs.
- Execute a plan and apply it from VCS, testing that webhooks are working and certificates are in place on both sides.
- Publish a new module to the private module registry.
- Execute a plan and apply it with a module or provider from the private registry to ensure the registry is functioning.
- (Optional) Execute a plan and apply it with Sentinel and cost estimation, ensuring run tasks and cost estimation work.
- (Optional) Execute a plan and apply it on a workspace that uses an agent pool, testing that external agents can connect and run jobs successfully.
Nomad rollback steps
Complete the following steps if an unresolvable issue emerges:
If you are able to connect to a job allocation using the nomad alloc exec command, run the node drain command to stop Terraform Enterprise from executing further instructions.
$ tfectl node drain --all
Stop the Terraform Enterprise job and purge it.
$ nomad job stop -purge -namespace=$namespace <terraform enterprise job name>
Optionally, clean up Nomad variables, ACLs, and namespaces.
$ nomad var purge -namespace=$namespace <path to Nomad variables used by Terraform Enterprise job>
$ nomad acl policy delete -namespace=$namespace <policy name>
$ nomad namespace delete -force $namespace
Restart Terraform Enterprise on Replicated using the same external services.
$ replicatedctl app start
Troubleshooting
Refer to the following documentation to ensure you have uninterrupted visibility into the health of the application:
Common Issues
Below is a list of common migration issues and their symptoms.
Self-signed certificate CA not in CA bundle
Symptoms:
- Plans fail
- Errors in /var/log/terraform-enterprise/task-worker.log and /var/log/terraform-enterprise/atlas.log, particularly when making calls to Archivist, where the certificate is from an unknown issuer.
Fix:
- Bring the additional certificates from the full chain certificate into the CA bundle. See the example after this list.
- Note that this action is partially automated when deploying to Replicated. For other runtimes, you may need to manually concatenate the certificates from the full chain certificate into the CA bundle for the instance to talk to itself.
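For example, appending the intermediate and root certificates from your full chain to the bundle might look like the following. The file names are hypothetical placeholders; use the certificate files from your environment.
$ cat intermediate.pem root-ca.pem >> bundle.pem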
Required CA not in CA bundle
Symptom:
- Setting up VCS fails with unknown certificate issuer error
Fix:
- Include the CA in the CA Bundle
Internal calls to instance or AWS Metadata Endpoint unnecessarily proxied
Symptom:
- Plans may fail, and logs may fail to load (but not always)
- The proxy directs traffic unexpectedly
Fix:
- When deployed to Replicated, Terraform Enterprise builds much of the default no_proxy or NO_PROXY address list, but you are responsible for managing the list when deploying to the other supported runtimes. In addition to manually adding the entries from your Replicated Additional No Proxy List configuration, add the following entries to the no_proxy or NO_PROXY address list:
  - localhost
  - 127.0.0.1
  - 169.254.169.254
  - FQDN of the instance
  - Rest of the no_proxy list
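For example, the resulting value might look like the following. This is a sketch; replace the hostname placeholder and merge in your existing entries.
NO_PROXY=localhost,127.0.0.1,169.254.169.254,<TFE_HOSTNAME>,<existing no_proxy entries>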