
Maintaining Secure Factory Service

Upgrading Secure Factory Service

To install new Secure Factory Service Docker images, use the upgrade command:

$ ./sfn upgrade

The upgrade command does nothing if there are no new Docker images.

You can use the status command to compare the new Secure Factory Service Docker images with the running versions.
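
As a sketch, a minimal upgrade workflow might look like this, assuming the status command reports both the new and the running image versions as described above:

$ ./sfn status    # compare available image versions with the running services
$ ./sfn upgrade   # install the new Docker images, if any
$ ./sfn status    # confirm the running versions are now up to date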

Removing Secure Factory Service

To remove a Secure Factory Service installation, use the uninstall command:

$ ./sfn uninstall

If Secure Factory Service is running, the uninstall command forces it to stop before removing the installation.

Uninstalling Secure Factory Service deletes all configuration and operational data. When you uninstall and reinstall Secure Factory Service on one node only, the node retrieves the deleted data when it synchronizes with the cluster. If you uninstall Secure Factory Service from all three nodes, you can restore the data using a database backup.
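
As a sketch, reinstalling on a single node might look like this, assuming you still have the cluster configuration backup file and its password (see Backing up and restoring cluster configuration below):

$ ./sfn uninstall                                      # forces a stop if the node is running
$ ./sfn setup --cluster-import <cluster backup file>   # re-create the node from the backup
$ ./sfn start                                          # the node re-synchronizes with the cluster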

When you uninstall Secure Factory Service, the database backup files and service logs remain in the Secure Factory home directory.

To remove the entire Secure Factory home directory, including backup and log files, use the --clean flag:

$ ./sfn uninstall --clean

Renewing server certificates

To renew all Secure Factory Service and database TLS server certificates on a node, use the renew_certificates command:

$ ./sfn renew_certificates
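
To verify the renewal, you can inspect the server certificate's validity dates with openssl; the node address and service port below are placeholders for your deployment:

$ echo | openssl s_client -connect <node address>:<service port> 2>/dev/null | openssl x509 -noout -dates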

Backing up and restoring nodes and HSMs

We recommend backing up Secure Factory Service to an external storage device after every configuration update.

Backing up and restoring Secure Factory Service involves the following tasks; a combined backup sketch follows this list:

  1. Backing up and restoring cluster configuration.
  2. Backing up and restoring the database.
  3. Backing up and restoring the HSMs.
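
As an illustration, the first two tasks can be scripted together. This is a sketch only: the working directory, file names, and external-storage destination are assumptions, and the HSMs are backed up separately with your HSM vendor's tools:

#!/bin/bash
# Sketch: back up the cluster configuration and the database in one pass.
set -e
cd <extraction path>/prod                          # directory that contains the sfn tool (assumed)
./sfn export --output cluster_backup_$(date +%F)   # prints a one-time file password; record it securely
./sfn db backup --name db_backup_$(date +%F)       # written to the database/backup folder of the home directory
# Copy the backup artifacts to external storage (destination is a placeholder):
# cp cluster_backup_* /mnt/external_backup/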

Backing up and restoring cluster configuration

To create a backup of the cluster configuration file, run the export command in a terminal on the working node:

$ ./sfn export --output <cluster backup file name>

Example printout:

==================================================
Secure Factory Service
==================================================
Cluster resources compressed and encrypted under: /service-deployment/prod/cluster_backup_2020-02-02
File password: WkgbynIoYn9htw
NOTE: Password is mandatory for setting up secondary nodes.

By default, the output filename is cluster_export_YYYY-MM-DD.bin, and the file is generated in the script folder. You can use the --output argument to specify a different filename and path.

The Secure Factory Node (sfn) CLI tool generates a unique, random password for the new backup file. You need the password for the restore procedure. Without the password, you cannot restore the cluster configuration using this file.
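
Because the sfn tool prints the password only once, you may want to capture it when scripting the export. This sketch assumes the printout format shown above, with the password on a File password: line; storing the password in a plain file is for illustration only, so move it to a secure location afterwards:

$ ./sfn export --output cluster_backup_$(date +%F) | tee export.log
$ grep 'File password:' export.log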

To restore the cluster configuration from a backup file:

  1. Extract the contents of the tar.gz archive:

    tar -xvzf secure_factory_<Secure Factory version>.tar.gz
    
  2. Run the setup --cluster-import command to set up the cluster node based on the backup file.

    $ ./sfn setup --cluster-import <path to cluster configuration backup file>
    

    The Secure Factory Node (sfn) CLI tool prompts you to enter the file password.

  3. Run the start command and wait until services are ready:

    $ ./sfn start
    

    You can check the services' status with the status command.

    The node is installed and ready when the status of all services is healthy.
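
    If you are scripting the restore, you can poll the status command until the node is ready. This sketch assumes the status output contains the word healthy for each ready service; adjust the check to your actual output format:

    $ until ./sfn status | grep -q healthy; do sleep 10; done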

Backing up and restoring the database

The database backup includes certificates, factory setup configuration, and workstation configuration.

To back up the Secure Factory Service database, run the db backup command:

$ ./sfn db backup --name <backup file name>

Example printout:

==================================================
Secure Factory Node (Version 1.0.0-1)
==================================================
Starting MongoDB Database backup...
Backup created: /usr/local/arm/secure_factory/database/backup/db_backup_20200210-032047.gz

The Secure Factory Node (sfn) CLI tool creates the backup file in the /database/backup folder of the home directory.
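
For routine protection, you might schedule the backup with cron; the schedule, working directory, and backup name below are assumptions:

# Crontab entry: run a database backup every day at 02:00.
# In a crontab, percent signs must be escaped with a backslash.
0 2 * * * cd <extraction path>/prod && ./sfn db backup --name nightly_$(date +\%Y\%m\%d)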

To restore the database from a backup file, use the db restore command:

$ ./sfn db restore <backup file name>

Example printout:

==================================================
Secure Factory Node (Version 1.0.0-1)
==================================================
WARNING: Restoring a backup deletes the current data in the database and replaces it with the content from the selected backup.
Any content that has been created since the last backup will be lost forever.
Enter 'Y' or 'y' to proceed
y
Create backup restore point: restore_point_20200217-013355
Starting MongoDB Database restore process
Restore database from backup file: db_backup_20200217-013349

Note: Secure Factory creates a restore point backup as a precautionary measure.

Use the db list command to list all available backup files:

$ ./sfn db list

Example printout:

==================================================
Secure Factory Node (Version 1.0.0-1)
==================================================
Backup folder: /usr/local/arm/secure_factory/database/backup/
Total number of backups: 4
 - db_backup_20200209-224118
 - db_backup_20200209-223241
 - db_backup_20200209-220547
 - db_backup_20200217-013349

To delete a stored backup, use the db delete command:

  • Pass the --name / -n argument to delete a specific backup file:

    $ ./sfn db delete -n <backup file name>
    
  • Pass the --period / -p argument to delete all backups older than a specified number of days:

    $ ./sfn db delete -p <number of days>
    

    For example:

    $ ./sfn db delete -p 30
    
  • Pass the --all flag to delete all stored backups:

    $ ./sfn db delete --all
    

Tip: Use the db list command to list all available backup files.

Backing up and restoring the HSMs

We recommend that you back up all partitions on your HSM after the initial installation and whenever you make changes.

For a standard factory, you will have a single partition named SECURE_FACTORY. For a factory that holds your root of factories key, you will have an additional partition named ROOT_OF_FACTORIES.

Important: We recommend taking all backup measures to facilitate disaster recovery; however, backing up your root of factories CA certificate is especially critical because it enables deployed devices and devices you manufacture after disaster recovery to trust one another.

For detailed instructions on how to back up an HSM, see the Backup and Restore HSMs and Partitions section in the Gemalto SafeNet Luna Network HSM 7.3 product documentation.

Replacing nodes

To replace a node:

  1. If the node you want to replace is running, run the stop command on it:

    $ ./sfn stop
    
  2. On one of the two remaining running nodes:

    1. Run the db status command and check that there is a MongoDB cluster member with State: Primary:

      $ ./sfn db status
      
    2. In the <extraction path>/prod/config.properties file, change the IP address/name of HOST1, HOST2 or HOST3.
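
      For example, assuming one entry per node in the form shown below (the key format is illustrative and may differ in your release), change only the entry for the replaced node:

      # <extraction path>/prod/config.properties (illustrative excerpt)
      HOST1=192.168.1.101          # unchanged
      HOST2=192.168.1.102          # unchanged
      HOST3=new-node.example.com   # the replacement node's IP address or name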

    3. Run the setup command and pass the --cluster-update flag:

      $ ./sfn setup --cluster-update
      
    4. Run the export command to export the common cluster configuration file:

      $ ./sfn export
      

      This creates an encrypted output file and prints the password that decrypts it:

      ==================================================
      Secure Factory Service
      ==================================================
      Cluster resources compressed and encrypted under: /service-deployment/prod/cluster_export_2019-12-18.bin
      File password: WkgbynIoYn9htw
      NOTE: Password is mandatory for setting up secondary nodes.
      

      By default, the output filename is cluster_export_YYYY-MM-DD.bin, and the file is generated in the script folder. You can use the --output argument to specify a different filename and path.

    5. Copy the cluster_export_YYYY-MM-DD.bin file to the new node. You need the password to decrypt the output file on the new node.
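
      For example, you can copy the file with scp; the user, address, and destination path are placeholders:

      $ scp cluster_export_<YYYY-MM-DD>.bin <user>@<new node address>:<extraction path>/prod/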

  3. On the new node:

    1. Extract the contents of the tar.gz archive:

      tar -C <extraction path> -xvzf secure_factory_<Secure Factory version>.tar.gz
      

      Where <extraction path> is an existing directory for which you have administrator permissions.

    2. Set an installation path by defining the FACTORY_HOME_DIR environment variable:

      export FACTORY_HOME_DIR=<installation path>
      

      Where <installation path> must be different from the <extraction path> to which you extracted the tar.gz file.

      Note: Add export FACTORY_HOME_DIR=<installation path> to your .bashrc file so that the environment variable remains available after you sign out.

      If you do not set an installation path, the script installs Secure Factory Service in the /usr/local/arm/secure_factory path by default. In most systems, using this path requires root access rights or explicitly granting the user read and write access rights.
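
      If you do use the default path, one way to grant your user access is with a sudo-capable account, for example:

      $ sudo mkdir -p /usr/local/arm/secure_factory
      $ sudo chown -R $USER /usr/local/arm/secure_factory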

    3. Run the start command:

      $ ./sfn start
      
  4. On the remaining running node (neither the new node nor the node from which you exported the cluster configuration file):

    1. Run the stop command:

      $ ./sfn stop
      
    2. Extract the contents of the tar.gz archive:

      tar -C <extraction path> -xvzf secure_factory_<Secure Factory version>.tar.gz
      

      Where <extraction path> is an existing directory for which you have administrator permissions.

    3. Set an installation path by defining the FACTORY_HOME_DIR environment variable:

      export FACTORY_HOME_DIR=<installation path>
      

      Where <installation path> must be different from the <extraction path> to which you extracted the tar.gz file.

      Note: Add export FACTORY_HOME_DIR=<installation path> to your .bashrc file so that the environment variable remains available after you sign out.

      If you do not set an installation path, the script installs Secure Factory Service in the /usr/local/arm/secure_factory path by default. In most systems, using this path requires root access rights or explicitly granting the user read and write access rights.

    4. Run the start command:

      $ ./sfn start
      
  5. On the old node (the node you replaced), run the uninstall command and pass the --clean flag:

    $ ./sfn uninstall --clean
    

Changing a node's IP address or name

To change a node's name:

  1. Change the node's name.

  2. Run the restart command on the node:

    $ ./sfn restart
    

To change a node's IP address:

  1. On the node whose IP address you want to change:

    1. Run the stop command:

      $ ./sfn stop
      
    2. Change the node's IP address.

  2. On one of the two remaining running nodes:

    1. Run the db status command and check that there is a MongoDB cluster member with State: Primary:

      $ ./sfn db status
      
    2. In the <extraction path>/prod/config.properties file, modify the IP address of the node you updated.

    3. Run the setup command and pass the --cluster-update flag:

      $ ./sfn setup --cluster-update
      
    4. Run the export command to export the common cluster configuration file:

      $ ./sfn export
      

      This creates an encrypted output file and prints the password that decrypts it:

      ==================================================
      Secure Factory Service
      ==================================================
      Cluster resources compressed and encrypted under: /service-deployment/prod/cluster_export_2019-12-18.bin
      File password: WkgbynIoYn9htw
      NOTE: Password is mandatory for setting up secondary nodes.
      

      By default, the output filename is cluster_export_YYYY-MM-DD.bin, and the file is generated in the script folder. You can use the --output argument to specify a different filename and path.

    5. Copy the cluster_export_YYYY-MM-DD.bin file to the node with the new IP address. You need the password to decrypt the output file on the new node.

  3. On the node with the new IP address:

    1. Extract the contents of the tar.gz archive:

      tar -C <extraction path> -xvzf secure_factory_<Secure Factory version>.tar.gz
      

      Where <extraction path> is an existing directory for which you have administrator permissions.

    2. Set an installation path by defining the FACTORY_HOME_DIR environment variable:

      export FACTORY_HOME_DIR=<installation path>
      

      Where <installation path> must be different from the <extraction path> to which you extracted the tar.gz file.

      Note: Add export FACTORY_HOME_DIR=<installation path> to your .bashrc file so that the environment variable remains available after you sign out.

      If you do not set an installation path, the script installs Secure Factory Service in the /usr/local/arm/secure_factory path by default. In most systems, using this path requires root access rights or explicitly granting the user read and write access rights.

    3. Run the start command:

      $ ./sfn start
      
  4. On the remaining running node (neither the node you updated nor the node from which you exported the cluster configuration file):

    1. Run the stop command:

      $ ./sfn stop
      
    2. Extract the contents of the tar.gz archive:

      tar -C <extraction path> -xvzf secure_factory_<Secure Factory version>.tar.gz
      

      Where <extraction path> is an existing directory for which you have administrator permissions.

    3. Set an installation path by defining the FACTORY_HOME_DIR environment variable:

      export FACTORY_HOME_DIR=<installation path>
      

      Where <installation path> must be different from the <extraction path> to which you extracted the tar.gz file.

      Note: Add export FACTORY_HOME_DIR=<installation path> to your .bashrc file so that the environment variable remains available after you sign out.

      If you do not set an installation path, the script installs Secure Factory Service in the /usr/local/arm/secure_factory path by default. In most systems, using this path requires root access rights or explicitly granting the user read and write access rights.

    4. Run the start command:

      $ ./sfn start