# 9  Additional Backup Topics

## 9.1 N2W in a VPC Environment <a href="#id-9-1-n-2-ws-in-a-vpc-environment" id="id-9-1-n-2-ws-in-a-vpc-environment"></a>

The N2W server runs in a VPC, except in legacy environments that use EC2-Classic. For N2W to work correctly, it needs outbound connectivity to the Internet. To use AWS endpoints instead, see [AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html).

* You will need to provide such connectivity using one of the following methods:
  * Attaching an Elastic IP.
  * Using a dynamic public IP, which is not recommended unless there is a dynamic DNS in place.
  * Enabling a NAT configuration.
  * Using a proxy.
* You will need HTTPS access to manage N2W, and possibly SSH as well, so some *inbound* access will need to be enabled.
* If you run Linux backup scripts on the N2W server, it will also need network access to the backed-up instances.
* If N2W backup agents need to connect, they will need HTTPS access to it as well.
* If backup scripts are enabled for a Linux backed-up instance, that instance will need to accept an *inbound* connection from the N2W server.
* If a Thin Backup Agent is used on a Windows backed-up instance, the agent will need *outbound* connectivity to the N2W server.

## 9.2 Backup when an Instance is Stopped <a href="#id-9-2-backup-when-an-instance-is-stopped" id="id-9-2-backup-when-an-instance-is-stopped"></a>

N2W continues to back up instances even if they are stopped. This may have important implications:

* If the policy has backup scripts and they try to connect to the instance, they will fail, and the backup will have **Backup Partially Successful** status.
* If the policy has no backup scripts and VSS is not configured, or if the policy’s options indicate that **Backup Partially Successful** is considered successful (section [4.2.2](https://docs.n2ws.com/user-guide/4-defining-backup-policies#4-2-2-adding-backup-targets)), the backup can continue running, and automatic retention will delete older backups. Every new backup will be considered a valid backup generation.
* New snapshots will soon consume almost no additional storage space, since the volumes are not changing and EBS snapshots are incremental.
* Assuming the instance was shut down in an orderly manner and did not crash, backups will be consistent by definition.

{% hint style="info" %}
N2W recommends that if you know an instance will be stopped for a while, you disable its policy by selecting the policy name and changing **Status** to **Disabled**.
{% endhint %}

Alternatively, you can make sure the policy is not entirely successful while the instance is stopped by using backup scripts, and keep the default stricter option that treats a script failure as a policy failure. This ensures that the older generations of the policy, created before the instance was stopped, are not deleted.

{% hint style="warning" %}
If you disable a policy, be aware that it will not perform backups until it is enabled again. If you disable it while an instance is stopped, make sure you enable it again when you need backups to resume.
{% endhint %}

## 9.3 The Freezer <a href="#id-9-3-the-freezer" id="id-9-3-the-freezer"></a>

Backups belonging to a policy eventually get deleted. Every policy keeps a defined number of backup generations, and the retention management process automatically deletes older backups.

To keep a backup indefinitely and make sure it is not deleted, move it to the Freezer. Elements in the Freezer are not deleted by the automatic **Cleanup** process. There can be several reasons to freeze a backup:

* An important backup of an instance you already recovered from, so you can recover the same instance again if needed.
* A backup of interest, such as the first backup after a major change in the system or after an important update.
* You want to delete a policy and keep only one or two of its backups for future needs.

**To move a backup to the Freezer:**

{% hint style="danger" %}
Once a backup is moved to the freezer, you will not be able to move it back.
{% endhint %}

1. In the left panel, select the **Backup Monitor** tab.
2. Select the backup and then select <img src="https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/S08GLmQuhWmKQRRbuFyp/freezer%20plain%20icon.png" alt="" data-size="line">**Move to Freezer**.
3. Type a unique name and an optional description for identification and as keywords for searching and filtering later.

After a backup is in the Freezer:

* Frozen backups are identified by the frozen symbol <img src="https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/S08GLmQuhWmKQRRbuFyp/freezer%20plain%20icon.png" alt="" data-size="line"> in the **Lifecycle Status** column of the **Backup Monitor** tab.
* A frozen backup is deleted only if you delete it explicitly using <img src="https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/OkeJ0ZFYTopb2Nm4kutj/delete%20icon.png" alt="" data-size="line"> **Delete Frozen Item**.
* If you delete the whole policy, frozen backups from that policy remain.
* You recover from a frozen backup the same way as from a regular backup.
* You can search and filter frozen backups using the name or description as keywords. To change the name or description, select <img src="https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/Q5aOzuQ1b5mnQgt6kktm/edit%20icon.png" alt="" data-size="line"> **Edit Frozen Item**.

While in the **Backup Monitor**, you can show backup records in the Freezer by turning the <img src="https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/i0LK50i59N3dVZqsussv/freezer%20toggle%20icon.png" alt="" data-size="line"> toggle key on and off, and backup records not in the Freezer by turning the <img src="https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/pcBNuAd32yPsnZsKXCiK/freezer%20icon.png" alt="" data-size="line"> toggle key on and off, in the **Show** <img src="https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/5wLitIummeT7R55rbJwD/freezer%20toggle%20key%20icon.png" alt="" data-size="line"> area on the far right of the filters line.

## 9.4 Running Automatic Cleanup <a href="#id-9-4-running-automatic-cleanup" id="id-9-4-running-automatic-cleanup"></a>

Automatic Cleanup allows you to manage the frequency of the cleanup process, as well as the:

* Number of days to keep backup records, even if the backup is deleted.
* Number of days after which to rotate single AMIs.

{% hint style="info" %}
Keeping backups for long periods of time can cause the N2W database to grow and therefore affect the size you need to allocate for the CPM data volume. N2W Software estimates that every GiB accommodates the backup of 10 instances when every record is kept for around 30 days. If you want to keep records for 90 days, triple the estimate: for 10 instances allocate 3 GiB, for 20 instances 6 GiB, etc.
{% endhint %}
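The sizing guideline above can be sketched as a quick calculation. This is a hypothetical helper illustrating the estimate, not an official N2W tool:

```python
def cpm_data_volume_gib(instances: int, retention_days: int = 30) -> float:
    """Estimate the CPM data volume size in GiB.

    Based on the guideline above: roughly 1 GiB per 10 instances
    when records are kept for about 30 days, scaled linearly with
    the retention period.
    """
    base_gib_per_instance = 1.0 / 10   # 1 GiB per 10 instances...
    base_retention_days = 30           # ...at ~30 days of retention
    return instances * base_gib_per_instance * (retention_days / base_retention_days)

print(cpm_data_volume_gib(10, 90))  # 3.0 GiB
print(cpm_data_volume_gib(20, 90))  # 6.0 GiB
```

Use the result as a lower bound when allocating the CPM data volume; round up generously, since the volume is harder to grow later than to over-provision now.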

**To manage the number of generations saved:**

1. In the toolbar, select <img src="https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/HQNp8r5Q6ZPQAjRJy1vm/Server%20settings%20icon.png" alt="" data-size="line"> **Server Settings**.
2. In the **General Settings** tab, select **Cleanup**.
3. In the **Cleanup Interval** list, select the number of hours between cleanup runs. Select **Cleanup Now** to start a cleanup immediately.
4. In each list, select the number of days to:
   1. Rotate Single AMIs
   2. Keep Deleted Records
   3. Keep User Audit logs
   4. Keep Resource Control Records
5. To keep retry backup records for reporting, select **Keep Retry Backup Records**.

{% hint style="info" %}
The number of days is counted from when the backup was created, not from when it was deleted. If you want to make sure every backup record is saved for 90 days after creation, even if it was already deleted, select 90.
{% endhint %}

The S3 Cleanup runs independently according to the retention period configured for the policy in the backup copy settings. See section [21.1](https://docs.n2ws.com/user-guide/broken-reference). The last S3 Cleanup log, however, is available in the **Cleanup** tab.

## 9.5 Backing up Independent Volumes <a href="#id-9-5-backing-up-independent-volumes" id="id-9-5-backing-up-independent-volumes"></a>

Backing up independent volumes in a policy is performed regardless of the volume's attachment state. A volume can be attached to any instance or not attached at all, and the policy will still back it up. Backup scripts can determine which instance is the active node of a cluster and perform application quiescence through it.

## 9.6 Excluding Volumes from Backup <a href="#id-9-6-excluding-volumes-from-backup" id="id-9-6-excluding-volumes-from-backup"></a>

{% hint style="info" %}
If you enable the **Exclude volumes** option in the **Tag Scan** tab of the **General Settings:**

* The **Exclude volumes** option overrides the exclusion of volumes performed through the UI.
* Tagged instances are not included in the **Exclude volumes** option and are excluded from backup only when tagged with **'#exclude'** for the policy.
  {% endhint %}

Following are the ways to exclude volumes from backup:

![](https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/0w1YMM5pQmoywoQWlEXy/9-6%20Excl%20Vol%20fr%20Scan-cropped.png)

* Enabling the **Exclude volumes** option in **General Settings**:
  * In the toolbar, select <img src="https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/HQNp8r5Q6ZPQAjRJy1vm/Server%20settings%20icon.png" alt="" data-size="line"> **Server Settings** > **General Settings**.
  * In the **Tag Scan** tab, select **Exclude volumes**, and then select **Scan Now**.
* Disabling a scheduled backup time. See section [4.1.4](https://docs.n2ws.com/user-guide/4-defining-backup-policies#4-1-4-disabled-times).
* Excluding a volume from a policy configuration in the UI. See section [4.2.3](https://docs.n2ws.com/user-guide/4-defining-backup-policies#4-2-3-instance-configuration).
* Using an ‘#exclude’ tag for the policy. See section [14.1.6](https://docs.n2ws.com/user-guide/14-tag-based-backup-management#14-1-6-excluding-volumes-from-backup).

## 9.7 Regions Disabled by Default <a href="#id-9-7-regions-disabled-by-default" id="id-9-7-regions-disabled-by-default"></a>

To perform certain actions in the Asia Pacific (Hong Kong) and Middle East (Bahrain) AWS Regions, managing the AWS Security Token Service (STS) is required, as session tokens from the global endpoint ([https://sts.amazonaws.com](https://sts.amazonaws.com/)) are valid only in AWS Regions that are enabled by default.

For AWS Regions not enabled by default, users must configure their AWS Account settings.

**To configure AWS Account settings to enable Session Tokens for all regions:**

1. Go to your AWS console and sign in at <https://console.aws.amazon.com/iam>​
2. In the navigation pane, select [**Account settings**](https://console.aws.amazon.com/iam/home#/account_settings)**.**
3. In the ‘Security Token Service (STS)’ section, select **Change Global endpoint**.
4. In the **Change region compatibility of session tokens for global endpoint** dialog box, select **Valid in all AWS Regions**.

{% hint style="info" %}
Session tokens that are valid in all AWS regions are larger. If you store session tokens, these larger tokens might affect your system.
{% endhint %}

For more information on how to manage your STS, see <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html>

## 9.8 Synchronizing S3 Buckets <a href="#id-9-8-synchronizing-s3-buckets" id="id-9-8-synchronizing-s3-buckets"></a>

You can automatically synchronize S3 buckets using the N2W S3 Bucket Sync feature. When the policy backup runs, N2W will copy the source bucket to the destination bucket, without creating a backup. The buckets are selected and configured in **Backup Targets** of the **Policies** tab.

{% hint style="warning" %}

* Bucket versioning is *not* supported. The latest version is automatically selected.
* If the source S3 bucket object is of the storage class Glacier or Deep Archive, it is not possible to synchronize the bucket. It is necessary to retrieve and restore the object before synchronizing the bucket.
  {% endhint %}

{% hint style="danger" %}

There is a time limitation when syncing between two S3 buckets. N2W will continue performing the synchronization as long as the **Maximum session duration** for the AWS Role is not exceeded. In the AWS IAM Console, go to the **Roles Summary** of the CPM instance and select **Edit** to configure this parameter.
{% endhint %}

**To synchronize S3 buckets**:

![](https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/W0mBMwVQZ9mflUMuBX3b/9-8%20Add%20S3%20Sync%20Targ-cropped.png)

1. In the **Policies** tab, select a policy and then select the **Backup Targets** tab.
2. In the **Add Backup Targets** menu, select **S3 Bucket Sync**. The Add S3 Bucket Sync screen opens.
3. Choose one or more buckets, and select **Add selected**. Selected buckets are removed from the table.
4. In the **Backup Targets** tab, for each newly added S3 bucket, select the bucket, and then select <img src="https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/bdc8yYXivvOVPuEqXLBQ/Configure%20icon.png" alt="" data-size="line"> **Configure**. The Policy S3 Bucket Sync Configuration screen opens.
5. In the Sync Source section, you can enter a **Source Prefix (Path)** and select whether to **Keep Source Prefix at Destination**. This option combines the source prefix with the destination prefix. For example, if the source prefix is ‘/a/b’ and the destination prefix is ‘/c/d’, the objects are synchronized to ‘a/b/c/d’.
6. In the Sync Destination section, configure the following, and then select **Apply**:
   * **Region** – Select the destination region to copy to.
   * **Account** – Select the destination account to copy to.
   * **S3 Bucket** – Select the destination bucket. The account for the destination bucket may be different than the account for the source bucket. See [9.8.1](#id-9.8.1-cross-account-s3-bucket-sync) for cross-account S3 bucket sync.
   * **Destination Prefix (Path)** – Enter the destination prefix, if any. If a prefix is entered, the dynamic message under the box will display the destination prefix. If **Keep Source Prefix at Destination** was selected, the prefix will be the concatenation of the source and destination prefixes. For example, source prefix ‘abc’ and destination ‘xyz’ will result in a destination prefix of ‘abc/xyz’.
   * **Storage Class** – Select the S3 Storage Class or S3 Reduced Redundancy Storage:
     * **Standard** – For low latency and high throughput.
     * **Reduced Redundancy** – Enables customers to store non-critical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage.
     * **Standard IA** – For data that is accessed less frequently but requires rapid access. Ideal for long-term storage.
   * **Delete Extra** – Select to delete files that exist in the destination but not in the source during synchronization.
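The prefix combination described in steps 5 and 6 can be sketched as follows. This is a hypothetical helper illustrating the documented behavior, not N2W's actual implementation:

```python
def combined_destination_prefix(source_prefix: str, dest_prefix: str,
                                keep_source_prefix: bool) -> str:
    """Return the effective destination prefix for an S3 Bucket Sync.

    With 'Keep Source Prefix at Destination' enabled, the source
    prefix is concatenated before the destination prefix.
    """
    parts = [dest_prefix.strip("/")]
    if keep_source_prefix:
        parts.insert(0, source_prefix.strip("/"))
    # Drop empty segments so a missing prefix does not produce '//'.
    return "/".join(p for p in parts if p)

print(combined_destination_prefix("/a/b", "/c/d", True))   # a/b/c/d
print(combined_destination_prefix("abc", "xyz", True))     # abc/xyz
print(combined_destination_prefix("abc", "xyz", False))    # xyz
```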

![](https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/35lPGz5du9QZm2bYxLJq/9-8%20Sync%20S3%20Bckts-cropped.png)

{% hint style="info" %}
If you change the Storage Class of an S3 Bucket Sync target in the **Policies** tab, the Storage Class of an existing destination bucket will not automatically update during the next S3Sync run. To update it, in the **Policies** tab, select the S3 Bucket Sync object, and then select <img src="https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/bdc8yYXivvOVPuEqXLBQ/Configure%20icon.png" alt="" data-size="line"> **Configure**.
{% endhint %}

After the Policy has run, view the backup log to see the S3Sync details.

![](https://gblobscdn.gitbook.com/assets%2F-MCmcYDqe7zxX8UChJRp%2F-MHCo8_u5eSrjQUJ8RIJ%2F-MHCtcjm_c3t7wjWw9H-%2FS3%20Bucket%20Sync%20Log.png?alt=media\&token=89bd42d7-53c3-48a8-8c35-e024dc4048d1)

### 9.8.1 Cross-Account S3 Bucket Sync

The destination S3 bucket policy must have two entries under **Resource**, allowing the source account access to both the bucket and the objects in the bucket.

{% hint style="danger" %}
If either entry under **Resource** is missing, the S3 Sync will fail.
{% endhint %}

For details, see: <https://n2ws.zendesk.com/hc/en-us/articles/28878273520797--AWS-S3SYNC-An-error-occurred-AccessDenied-when-calling-the-ListObjectsV2-operation-Access-Denied>
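For illustration, assuming a destination bucket `<BUCKET_NAME>` and a source account `<AWS_ACCOUNT>` (placeholders as in the policy example in section 9.9.2), a minimal destination bucket policy containing both **Resource** entries might look like the following sketch; adjust the actions to what your synchronization requires:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AWS_ACCOUNT>:root"
            },
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<BUCKET_NAME>",
                "arn:aws:s3:::<BUCKET_NAME>/*"
            ]
        }
    ]
}
```

Note that `arn:aws:s3:::<BUCKET_NAME>` covers bucket-level actions such as `s3:ListBucket`, while `arn:aws:s3:::<BUCKET_NAME>/*` covers object-level actions; omitting either is the failure case described above.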

{% hint style="info" %}
For cross-account S3 Bucket Sync:

* If using a custom KMS key, allow the same key in the destination bucket policy.
* Cross-account S3 Bucket Sync is executed with the account of the policy, not the account of the destination S3 bucket, and requires cross-account access permissions to objects that are stored in the destination S3 bucket. For further information, see: <https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3/>
* Allow access for the source account in the destination bucket by adding it to **Access Control List** in the AWS S3 console. To find the **Canonical ID**, in the AWS **Account** menu, go to **My security credentials** and scroll to **Account identifiers**.​
  {% endhint %}

## 9.9 Backing up SAP HANA Databases

SAP HANA Database is an in-memory relational database that can run on an AWS EC2 instance.

N2W creates and stores both an EC2 instance and SAP HANA database snapshots as part of a policy backup. SAP HANA snapshots are stored to an AWS Storage Repository. Backups are always full to enable fast restores.

{% hint style="warning" %}
Currently, AWS supports only one backup of an SAP HANA database in parallel. For the time being, N2W allows adding an SAP HANA database to only one policy.
{% endhint %}

### 9.9.1 Prerequisites for Creating an SAP HANA Policy

Complete the following prerequisites before creating SAP HANA policies:

**AWS Command Line Interface (AWS CLI) Installation**

The latest version of the AWS Command Line Interface (AWS CLI) must be installed and functional on the target instance.

* To install or update, see <https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html>

**SSM Agent Installation**

SAP HANA policies require the installation of an SSM agent on the EC2 instance *before* it is added to an N2W policy. AWS Backint agent configuration is performed by N2W at the time of policy configuration if the target instance is running. SAP HANA backup commands are sent to the EC2 instance via the SSM agent.

* To install and run an SSM agent, see <https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-sles.html>
* To check the status of the SSM Agent, see <https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-status-and-restart.html>

{% hint style="info" %}

* The Backint configuration and the backup will **fail** if the target EC2 instance is stopped.
* N2W to SAP HANA EC2 instance communication is completed via an SSM agent. Therefore, assign an SSM IAM role with proper permissions to both N2W and the EC2 instance running SAP HANA. See section ‎[6.2.2](https://docs.n2ws.com/user-guide/6-windows-instances-backup#6.2.2-defining-and-attaching-an-iam-instance-profile-for-ssm).
  {% endhint %}

### 9.9.2 Supporting Cross-Account Backups

To support cross-account backups, configure the S3 bucket policy with the following permissions:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AWS_ACCOUNT>:root"
            },
            "Action": [
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetBucketAcl",
                "s3:GetBucketPolicyStatus",
                "s3:ListBucket",
                "s3:PutObjectTagging"
            ],
            "Resource": [
                "arn:aws:s3:::<BUCKET_NAME>",
                "arn:aws:s3:::<BUCKET_NAME>/*"
            ]
        }
    ]
}
```

### 9.9.3 Creating an SAP HANA Policy

{% hint style="warning" %}
An SAP HANA database may be added to only one policy. See the note in section [9.9](#id-9.9-backing-up-sap-hana-databases).
{% endhint %}

Before creating the policy, retrieve the Instance ID and Instance Number from the SYS.M\_SYSTEM\_OVERVIEW table in the SAP HANA SYSTEMDB database:

![](https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/kcUKTN3YtYk0N5FJpJo2/9-9%20SAP%20systemdb%20overview%20tbl%20cropped.png)

You can also check the following path: `/hana/shared/<SID>/HDB<instance>/`. In this case, the path is `/hana/shared/HXE/HDB90/`.
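As a sketch, the SID and instance number can be read back from such a path. This is a hypothetical helper assuming the `/hana/shared/<SID>/HDB<instance>/` layout shown above:

```python
import re

def parse_hana_path(path: str) -> tuple[str, str]:
    """Extract (SID, instance number) from a /hana/shared/<SID>/HDB<nn>/ path."""
    m = re.match(r"^/hana/shared/([^/]+)/HDB(\d+)/?$", path)
    if not m:
        raise ValueError(f"not a recognized SAP HANA shared path: {path}")
    return m.group(1), m.group(2)

print(parse_hana_path("/hana/shared/HXE/HDB90/"))  # ('HXE', '90')
```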

**To back up an SAP HANA database:**

{% hint style="warning" %}
Ensure that the target EC2 instance is running at the time of backup.
{% endhint %}

1\. In the **Policy** tab, add the EC2 instance to the selected policy.

2\. In the **Backup Targets** tab, select the instance, and then select ![](https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/bdc8yYXivvOVPuEqXLBQ/Configure%20icon.png) **Configure**.

![](https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/9DNzkIVpqELt8oHg2vh9/9-9%20SAP%20config.png)

3\. Select **Enable SAP HANA Backup** and complete the configuration:

![](https://content.gitbook.com/content/5oB64hgFIX2jdQ2O72cF/blobs/35UXu4sVz8GGL72PFefJ/9-9%20SAP%20HANA%20config-cropped.png)

* **SAP HANA SYSTEMDB User** – SAP HANA System DB username.
* **Password** – SAP HANA System DB password.
* **SAP HANA SID** – SAP HANA System ID as shown in the SYS.M\_SYSTEM\_OVERVIEW table of the SYSTEMDB database.
* **SAP HANA Instance Number** – SAP HANA Instance Number as shown in the SYS.M\_SYSTEM\_OVERVIEW table of the SYSTEMDB database.
* **SAP HANA S3** (Bucket) – S3 bucket repository for backup.
* **S3 KMS Key ARN** – S3 KMS key attached to the selected bucket.

4\. Select **Apply**, and then select **Save**.

## 9.10 Additional Permissions for RDS Custom Backup and Restore

Additional permissions are required for RDS Custom backup and restore operations:

* `s3:CreateBucket`
* `s3:PutBucketPolicy`
* `s3:PutBucketObjectLockConfiguration`
* `s3:PutBucketVersioning`
* `cloudtrail:CreateTrail`
* `cloudtrail:StartLogging`
* `kms:Decrypt`
* `kms:GenerateDataKey`

For full details about setting up your RDS Custom database for a SQL Server environment, see <https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-setup-sqlserver.html#custom-setup-sqlserver.iam-user>
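As an illustrative sketch (not an official N2W policy document), the permissions listed above could be granted with an IAM policy statement such as the following; the `Sid` is arbitrary, and in practice you should scope `Resource` to your specific buckets, trails, and KMS keys rather than using `*`:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "N2WRdsCustomExtras",
            "Effect": "Allow",
            "Action": [
                "s3:CreateBucket",
                "s3:PutBucketPolicy",
                "s3:PutBucketObjectLockConfiguration",
                "s3:PutBucketVersioning",
                "cloudtrail:CreateTrail",
                "cloudtrail:StartLogging",
                "kms:Decrypt",
                "kms:GenerateDataKey"
            ],
            "Resource": "*"
        }
    ]
}
```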

