Tayyab0101 Lv2  Posted 2024-May-08 17:35
  
Moving a node between clusters is not quite the same as moving it within a cluster. You have to take a backup to start with.
jerome_itable Lv3  Posted 2024-May-09 13:04
  
You're right: removing a node that still has data stored on it isn't directly supported in Sangfor HCI due to data integrity concerns. However, there are workarounds to achieve your goal of moving a node to the DR cluster with minimal downtime. Here's a recommended approach with best practices:

Preparation:

    Data Synchronization: Ensure your DR cluster is fully synchronized with the DC cluster. This minimizes data inconsistencies after the node move.
    Workload Analysis: Identify the virtual machines (VMs) residing on the node you plan to move. Analyze their resource requirements and dependencies to plan the migration (a rough inventory sketch follows this list).
    Live Migration Testing (Optional): If your Sangfor HCI supports live migration, consider testing it beforehand with a low-priority VM. This helps identify any potential migration issues.
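
For the workload analysis step, something like the sketch below can help you total up what has to move. It is only a rough sketch, not anything Sangfor-specific: it assumes you have exported the VM list from the management console to a CSV with hypothetical columns (vm_name, host, vcpu, memory_gb, priority), so adjust the names to whatever your export actually contains.

    # workload_inventory.py - rough sketch, not a Sangfor tool.
    # Assumes a VM list exported from the HCI console to CSV with the
    # hypothetical columns: vm_name, host, vcpu, memory_gb, priority.
    import csv
    from collections import defaultdict

    NODE_TO_MOVE = "node-03"  # hypothetical name of the node being moved

    def load_vms(path):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def summarize(vms, node):
        # Total the resources that must be migrated off the node.
        on_node = [vm for vm in vms if vm["host"] == node]
        total_vcpu = sum(int(vm["vcpu"]) for vm in on_node)
        total_mem = sum(float(vm["memory_gb"]) for vm in on_node)
        by_priority = defaultdict(list)
        for vm in on_node:
            by_priority[vm["priority"]].append(vm["vm_name"])
        return on_node, total_vcpu, total_mem, by_priority

    if __name__ == "__main__":
        vms = load_vms("vm_export.csv")
        on_node, vcpu, mem, by_prio = summarize(vms, NODE_TO_MOVE)
        print(f"{len(on_node)} VMs on {NODE_TO_MOVE}: {vcpu} vCPU, {mem:.0f} GB RAM to relocate")
        for prio, names in sorted(by_prio.items()):
            print(f"  priority {prio}: {', '.join(names)}")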

Node Move Process:

    Stop VMs: Power off or migrate VMs currently running on the designated node to be moved. Ensure no critical workloads are running on it.
    Data Offload (Optional): If the node holds crucial data you want to keep readily accessible on the DC cluster, consider using Sangfor storage migration tools to offload that data to remaining DC nodes before proceeding.
    Node Removal: In the Sangfor HCI management console, check if there's an option to "detach" or "evict" the node. This process might vary depending on the Sangfor HCI version (a quick pre-removal check sketch follows this list).
        If there's no detach option and the disks aren't part of any virtual storage pool, you might be able to directly delete the node. Double-check the documentation for your specific version.
        Caution: If the node disks are part of the virtual storage pool, deleting it might lead to data loss.
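
Before detaching or evicting the node, it is worth double-checking that nothing is still placed on it. The small check below reuses the same hypothetical CSV export as above (re-export the VM list after you've powered off or migrated the workloads); again, it is just a sketch, not a Sangfor utility.

    # pre_removal_check.py - sketch: confirm no VM is still placed on the node
    # before you detach/evict it in the console. Uses the same hypothetical
    # CSV columns as the inventory sketch above.
    import csv
    import sys

    NODE_TO_MOVE = "node-03"  # hypothetical node name

    with open("vm_export_after_migration.csv", newline="") as f:
        leftovers = [row["vm_name"] for row in csv.DictReader(f)
                     if row["host"] == NODE_TO_MOVE]

    if leftovers:
        print(f"Do NOT remove {NODE_TO_MOVE} yet, still hosting: {', '.join(leftovers)}")
        sys.exit(1)
    print(f"{NODE_TO_MOVE} is empty; safe to proceed with detach/evict in the console.")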

DR Cluster Integration:

    Add Node: In the DR cluster management console, follow the Sangfor HCI guide to add the new node. This typically involves providing node details and network configuration.
    Data Resynchronization: After adding the node to the DR cluster, data resynchronization will commence to ensure consistency across all nodes. The duration depends on data volume and available network throughput (a back-of-envelope estimate follows this list).
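
To set expectations for that resynchronization window, a back-of-envelope estimate from data volume and usable network bandwidth is often enough. The numbers below are made up; real resync speed also depends on disk I/O, replica count, and cluster load, so treat the result as an optimistic lower bound.

    # resync_estimate.py - back-of-envelope estimate only.
    data_to_sync_tb = 8            # hypothetical: data to be rebuilt on the new node
    usable_throughput_gbps = 4     # hypothetical: usable storage-network bandwidth, Gbit/s

    bytes_per_second = usable_throughput_gbps * 1e9 / 8
    seconds = data_to_sync_tb * 1e12 / bytes_per_second
    print(f"Estimated resync time: {seconds / 3600:.1f} hours")  # ~4.4 hours with these numbers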

Minimizing Downtime:

    Live Migration (Optional): If live migration is supported and tested successfully, use it to migrate VMs to other DC nodes before powering off the designated node. This minimizes downtime for those VMs.
    Phased Migration: Consider migrating VMs in batches to limit the impact on overall service availability (a simple batch-planning sketch follows this list).
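
For the phased migration, you can plan the batches up front so each wave of live migrations stays small. The sketch below groups the VMs on the node into batches whose combined memory stays under a per-batch budget; it reuses the hypothetical CSV export from the preparation step, and both the budget and the priority ordering are assumptions you should adapt.

    # batch_plan.py - sketch: split the VMs on the node into migration waves
    # whose combined memory fits a per-batch budget. Hypothetical CSV columns
    # as in the earlier sketches.
    import csv

    NODE_TO_MOVE = "node-03"       # hypothetical node name
    BATCH_MEMORY_BUDGET_GB = 64    # hypothetical: RAM to migrate per wave

    def plan_batches(vms, budget_gb):
        batches, current, used = [], [], 0.0
        # Move the least critical VMs first; adjust the sort key to however
        # your export encodes priority.
        for vm in sorted(vms, key=lambda v: v["priority"]):
            mem = float(vm["memory_gb"])
            if current and used + mem > budget_gb:
                batches.append(current)
                current, used = [], 0.0
            current.append(vm["vm_name"])
            used += mem
        if current:
            batches.append(current)
        return batches

    if __name__ == "__main__":
        with open("vm_export.csv", newline="") as f:
            vms = [row for row in csv.DictReader(f) if row["host"] == NODE_TO_MOVE]
        for i, batch in enumerate(plan_batches(vms, BATCH_MEMORY_BUDGET_GB), start=1):
            print(f"Batch {i}: {', '.join(batch)}")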
