In addition to creating and managing EBS snapshots, N2WS can store backups in Simple Storage Service (S3) and S3 Glacier, allowing you to lower backup costs when storing backups for a prolonged period. N2WS allows you to create a lifecycle policy, in which older snapshots are automatically moved from high-cost to low-cost storage tiers. A typical lifecycle policy would consist of the following sequence:
Store daily EBS snapshots for 30 days.
Store one out of seven (weekly) snapshots in S3 for 3 months.
Finally, store a monthly snapshot in S3 Glacier for 7 years, as required by regulations.
Storing snapshots in S3 is not supported for periods of less than 1 week.
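The tiering sequence above can be sketched as a small decision function. This is illustrative only — it assumes the example retention periods above (30 days, 3 months, 7 years) and is not N2WS's actual scheduling logic:

```python
from datetime import timedelta

def storage_tier(age: timedelta, is_weekly: bool, is_monthly: bool) -> str:
    """Return the tier a snapshot of the given age would live in
    under the example lifecycle policy described above."""
    if age <= timedelta(days=30):
        return "EBS"          # daily EBS snapshots kept for 30 days
    if is_weekly and age <= timedelta(days=90):
        return "S3"           # weekly snapshots kept in S3 for 3 months
    if is_monthly and age <= timedelta(days=7 * 365):
        return "Glacier"      # monthly snapshots archived for 7 years
    return "expired"          # past all retention windows
```

For example, a 60-day-old weekly snapshot resolves to "S3", while a 200-day-old monthly snapshot resolves to "Glacier".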
Configuring a lifecycle management policy in N2WS consists of the following sequence:
Defining how many EBS snapshots to keep.
Enabling and configuring Backup to S3.
Optionally, enabling and configuring Archive to S3 Glacier.
Optionally, enabling and configuring Copy RDS to S3.
N2WS currently supports copying backups of instances, independent volumes, and RDS databases to S3.
Using the N2WS Copy to S3 feature, you can:
Define multiple folders, known as repositories, within a single S3 bucket
Define the frequency with which N2WS backups are moved to a Repository in S3, similar to DR backup. For example, copy every third generation of an N2WS backup to S3.
Define backup retention based on time and/or number of generations per Policy.
N2WS stores backups in S3 as block-level incremental backups.
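The idea behind block-level incremental storage can be illustrated with a short sketch. This is conceptual only — the block size, hashing scheme, and storage format here are assumptions for illustration, not the N2WS implementation:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size; the real chunking is internal to N2WS

def changed_blocks(previous: bytes, current: bytes):
    """Yield (block_index, block_data) for blocks that differ from the
    previous backup. Only these blocks would need to be uploaded;
    unchanged blocks are referenced from the earlier copy."""
    for i in range(0, len(current), BLOCK_SIZE):
        cur = current[i:i + BLOCK_SIZE]
        prev = previous[i:i + BLOCK_SIZE]
        # Compare block fingerprints rather than raw data
        if hashlib.sha256(cur).digest() != hashlib.sha256(prev).digest():
            yield i // BLOCK_SIZE, cur
```

If only the second 4 KB block of a volume changed since the last copy, only that one block is transferred to S3.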
AWS encryption must be enabled at the bucket level. Bucket versioning must be disabled, unless Immutable Backups is enabled. See section 21.2.1.
Bucket settings are only verified when a Repository is created in the bucket. Avoid changing the bucket settings after Repository creation, as this may cause unpredictable behavior.
Only one S3 operation is allowed for a policy at a time – Copy, Recovery, Archive, or retention Cleanup.
For instance, an S3 Copy or S3 Recovery is not allowed when the S3 backup retention Cleanup is executing. If a new backup is created while a copy of a previous backup is still running, the new backup will not be copied to S3.
Likewise, only one Archive or Cleanup operation can run for a policy at a time. If a new backup is created while another backup is being archived, or while cleanup is running for the policy, no Archive or Cleanup will be performed for that policy after the copy operation completes.
If the S3 Cleanup process is running at the time of an S3 Copy or Recovery, you can abort the Cleanup process to allow the Copy or Recovery process to continue. See section 21.6.3.
S3 buckets used by Copy to S3 should not be used by other applications.
Before continuing, consider the following:
Copy to S3 currently supports only backups of instances, independent volumes, and RDS databases. Other services, such as DynamoDB, are not supported.
Most N2WS operations related to the S3 repository (e.g., writing objects to S3, clean up, restoring, etc.) are performed by launching N2WS worker instances in AWS. The worker instances are terminated when their tasks are completed.
Copy to S3 is supported for weekly and monthly backup frequencies only. Daily backup copies to S3 are not supported.
Copy to S3 is not supported for other AWS resources that N2WS supports, such as Aurora.
Snapshots consisting of ‘AMI-only’ cannot be copied to an S3 repository.
Due to AWS service restrictions in some regions, the root volume of instances purchased from Amazon Marketplace, such as instances with a product code, may be excluded from Copy to S3. The data volumes of such instances, if they exist, will be copied.
Backup records that were copied to S3 cannot be moved to the Freezer.
Users cannot delete specific snapshots from an S3 repository. S3 snapshots are deleted according to the retention policy. In addition, users can delete all S3 snapshots of a specific policy, account, or an entire repository. See sections 21.2.2 and 21.5.4.
A separate N2WS server, for example, one with a different “CPM Cloud Protection Manager Data” volume, cannot reconnect to an existing S3 repository.
To use the Copy to S3 functionality, the cpmdata policy must be enabled. See N2WS User Guide for details on enabling the cpmdata policy.
Due to the incremental nature of the snapshots, only one backup of a policy can be copied to S3 at any given time. Additional executions of Copy to S3 backups will not occur if the previous execution is still running. Restore from S3 is always possible unless the backup itself is being cleaned up.
AWS accounts have a default limit to the number of instances that can be launched. Copy to S3 launches extra instances as part of its operation and may fail if the AWS quota is reached. See AWS for details.
Copy and Restore of volumes to/from regions different from where the S3 bucket resides may incur long delays and additional bandwidth charges.
Instance names may not contain slashes (/) or backslashes (\) or the copy will fail.
The S3 Sync operation may time out and fail if the copy operation takes more than 12 hours.
21.1.2 Cost Considerations
N2W Software offers the following recommendations to help N2WS customers lower transfer fees and storage costs:
When an ‘N2WSWorker’ instance uses a public IP (or a NAT/IGW within a VPC) to access an S3 bucket in the same region/account, network transfer fees are incurred.
Using a VPC endpoint instead will enable instances to use their private IP to communicate with resources of other services within the AWS network, such as S3, without the cost of network transfer fees.
For further information on how to configure N2WS with a VPC endpoint, see Appendix A.
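As a rough illustration, an S3 gateway endpoint is described by parameters of the following shape. The function and IDs below are hypothetical — see Appendix A for the actual N2WS configuration steps:

```python
def s3_gateway_endpoint_spec(region: str, vpc_id: str, route_table_ids):
    """Build the parameter set describing an S3 gateway endpoint for a VPC.

    With such an endpoint in the worker's VPC, instances reach S3 over
    their private IPs on the AWS network, avoiding transfer fees.
    All identifiers here are illustrative placeholders.
    """
    return {
        "VpcEndpointType": "Gateway",                 # S3 supports gateway endpoints
        "VpcId": vpc_id,                              # the worker's VPC
        "ServiceName": f"com.amazonaws.{region}.s3",  # regional S3 service name
        "RouteTableIds": list(route_table_ids),       # route tables to update
    }
```

For example, `s3_gateway_endpoint_spec("us-east-1", "vpc-0abc", ["rtb-1"])` yields the service name `com.amazonaws.us-east-1.s3`.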
21.1.3 Overview of S3 and N2WS
The Copy to S3 feature is similar in many ways to the N2WS Disaster Recovery (DR) feature. When Copy to S3 is enabled for a policy, copying EBS snapshot data to S3 begins at the completion of the EBS backup, similar to the way DR works. Copy to S3 can be used simultaneously with the DR feature.
21.1.4 Storing RDS Databases in S3
N2WS can store certain RDS databases in an S3 Repository. This capability relies on the AWS ‘Export Snapshot’ capability, which converts the data stored in the database to Parquet format and stores the results in an S3 bucket. In addition to the data export, N2WS stores the database schema, as well as data related to the database’s users. This combined data set allows complete recovery of both the database structure and data.
21.1.5 Workflow for Using S3 with N2WS
Define an S3 Repository.
Define a Policy with a Schedule, as usual.
Configure the policy to include Copy to S3 by selecting the Lifecycle Management (Snapshot / S3 / Glacier) tab. Turn on the Backup to S3 toggle and complete the parameters.
If you are going to back up and restore S3 instances and volumes across accounts and regions, prepare a Worker Configuration using the Worker Configuration tab. See section 22.
Use the Backup Monitor and Recovery Monitor, with some additional controls, to manage S3 snapshots as usual.
21.1.6 Workflow for Copying RDS to S3
In AWS, create an Export Role with required permissions. See section 21.4.1.
In N2WS, define an S3 Repository. See section 21.2.1.
Define a Policy with a Schedule, as usual, and enable Export RDS.
Prepare a Worker Configuration using the Worker Configuration tab. See section 22.
21.2 The S3 Repository
21.2.1 Immutable Backups
S3 Repositories offer an option to protect the data stored in the bucket against deletion or alteration using S3 Object Locks. Usage of Immutable Backup protection will slightly increase total cost, because there is an additional cost associated with putting and removing the locks and with the handling of object versions.
21.2.1.1 Prerequisites for Enabling Immutable Backups
To use the Immutable Backup option, an S3 bucket must be created with the following requirements:
The S3 bucket containing the repository must be created with the Object Lock option enabled.
Versioning must be enabled.
The bucket must be encrypted.
AWS does not support enabling this option for an existing bucket, so it is not possible to enable Immutable Backup for existing repositories.
21.2.2 Configuring an S3 Repository
The cpmdata policy must exist before configuring an S3 Repository.
There can be multiple repositories in a single AWS S3 bucket.
In N2WS, select the Storage Repositories tab.
From the New menu, select S3 Repository.
In the New S3 Repository screen, complete the following fields:
Name - Type a unique name for the new repository, which will also be used as a folder name in the AWS bucket. Only alphanumeric characters and the underscore are allowed.
Description - Optional brief description of the contents of the repository.
User – Select the user in the list.
Account - Select the account that has access to the S3 bucket.
AWS Region - Select the region in which the S3 bucket is located.
S3 Bucket Name - Type the name of the S3 bucket that exists in this region.
Immutable Backup - Select to enable data protection by S3 Object Locks.
When complete, select Save.
AWS encryption must have been enabled for the bucket. Versioning must be disabled if Immutable Backup is not enabled.
21.2.3 Deleting an S3 Repository
You can delete all snapshots copied to a specific S3 repository.
Deleting a repository is not possible when the repository is used by a policy. You must change any policy using the repository to a different repository before the repository can be deleted.
Select the Storage Repositories tab.
Use the Cloud buttons to display the AWS S3 Repositories.
Select a repository.
Deleting a large number of objects from an S3 bucket may take up to several hours, especially if Immutable Backup is enabled. A notification alert is created when the deletions have completed.
21.3 The S3 Policy
To keep transfer fee costs down when using Copy to S3, create an S3 endpoint in the worker's VPC.
21.3.1 Configuring a Policy for Backup to S3
Configuring a Policy for Copy to S3 backups includes definitions for the following:
Name of the S3 Repository defined in N2WS.
Interval of AWS snapshots to copy.
Snapshot retention policy. Selecting the Delete instance snapshots from EBS after attempting to store to S3 option minimizes the time that N2WS holds any backup data in the EBS snapshots service. N2WS achieves that by deleting any EBS snapshot immediately after copying it to S3.
It is possible to retain a backup based on both time and number of generations copied. If both Time Retention (Keep backups in S3 for at least x time) and Generation Retention (Keep backups in S3 for at least x generations) are enabled, both constraints must be met before old snapshots are deleted or moved to Glacier, if enabled.
For example, when the automatic cleanup runs:
If Time Retention is enabled for 7 days and Generation Retention is disabled, S3 snapshots older than 7 days are deleted or archived.
If Run ASAP is executed 10 times in one day, none of the snapshots would be deleted until they are more than 7 days old.
If Generation Retention is enabled for 4 and Time Retention is disabled, the 4 most recent S3 snapshots are saved.
If Time Retention is enabled for 7 days and Generation Retention is enabled for 4 generations, a single S3 snapshot would be deleted, or archived, after 7 days if the number of generations had reached 5.
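The combined retention rule can be sketched as follows. This is an illustrative model of the cleanup decision described above, not N2WS's actual code:

```python
from datetime import datetime, timedelta

def backups_to_remove(backup_times, now, keep_days=None, keep_generations=None):
    """Return backups eligible for deletion (or archiving) at cleanup time.

    Models the rule above: when both Time Retention and Generation
    Retention are enabled, a backup is removed only when BOTH
    constraints allow it.
    """
    if keep_days is None and keep_generations is None:
        return []  # no retention rule enabled; nothing to remove
    newest_first = sorted(backup_times, reverse=True)
    removable = []
    for rank, created in enumerate(newest_first):  # rank 0 = most recent
        old_enough = keep_days is None or now - created > timedelta(days=keep_days)
        surplus = keep_generations is None or rank >= keep_generations
        if old_enough and surplus:
            removable.append(created)
    return removable
```

With Time Retention of 7 days and Generation Retention of 4, an 8-day-old backup is removed only once it is the fifth-newest generation, matching the example above.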
If Delete instance snapshots from EBS after attempting to store to S3 is enabled in Lifecycle Management, snapshots are deleted regardless of whether the Copy to S3 operation succeeded or failed.
In the left panel, select the Policies tab.
Select a Policy, and then select
Select the Lifecycle Management tab.
Select the number of (Native) Backup Generations to keep in the list.
Complete the following fields:
Backup to S3 – By default, Backup to S3 is disabled. Turn the toggle on to enable.
Store EBS snapshots in S3 based on the following settings:
Delete instance snapshots from EBS after attempting to store to S3 – If selected, N2WS will automatically set the Backup to S3 every n (EBS) Backup Snapshot Generations to 1 and will delete snapshots from EBS after performing the Copy to S3 operation.
Backup to S3 every n (EBS) Backup Snapshot Generations – Select the maximum number of backup snapshot generations to keep. This number is automatically set to 1 if you opted to Delete instance snapshots from EBS after storing in S3.
In the Keep backups in S3 for at least lists, select the duration and/or number of backup generations to keep.
In the Storage settings section, choose the following parameters:
Select the Target Repository in the S3 bucket to move the backup to, or select New to define a new repository. If you define a new repository, select Refresh before selecting.
Choose an S3 Storage Class that meets your needs:
Standard (Frequent Access) - For frequently accessed data and backups.
Infrequent Access - For data that is accessed less frequently.
Intelligent Tiering - Automatic cost optimization for S3 copy. Intelligent Tiering incorporates the Standard (Frequent Access) and Infrequent Access tiers. It monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the Infrequent Access tier. If the data is subsequently accessed, it is automatically moved back to the Frequent Access tier.
See information on S3 Storage Class charges below.
If Archive to Glacier is enabled, select the Archive Storage class.
Storage Class charges:
S3 Infrequent Access and Intelligent Tiering have minimum storage duration charges.
21.3.2 Recovering an S3 Backup
You can recover an S3 backup to the same or different regions and accounts.
If you Recover Volumes Only, you can:
Select volumes and Explore folders and files for recovery.
Explore fails on non-supported file systems. See section 13.1.
Define Attach Behavior
Define the AWS Credentials for access
Configure a Worker in the Worker Configuration tab.
Clone a VPC
If you recover an S3 Instance, you can specify the recovery encryption key:
If Use Default Volume Encryption Keys is enabled, the recovered volumes will have the default key of each encrypted volume.
If Use Default Volume Encryption Keys is disabled, all encrypted volumes will be recovered with the same key that was selected in the Encryption Key list.
‘Marked for deletion’ snapshots can no longer be recovered.
To recover an S3 backup:
In the Backup Monitor tab, select a relevant backup that has a Lifecycle Status of 'Stored in S3', and then select Recover.
In the Restore from drop-down list of the Recover screen, select the name of the S3 Repository to recover from. If you have multiple N2WS accounts defined, you can choose a different target account to recover to.
In the Restore to Region drop-down list, select the Region to restore the S3 copy to. The source Region of the S3 copy is displayed in the Region column.
Continue with the regular recovery procedure for the resource:
To follow the progress of the recovery, select Open Recovery Monitor in the ‘Recovery started’ message at the top right corner, or select the Recovery Monitor tab.
To abort a recovery in progress, in the Recovery Monitor, select the recovery item and then select Abort Recover from S3.
21.3.3 Forcing a Single Full Copy
By default, Copy to S3 is performed incrementally for data modified since the previous snapshot was stored. However, you can force a copy of the full data for a single iteration to your S3 Repository. While configuring the Backup Targets for a policy with Copy to S3, select Force a single full Copy. See section 4.2.3.
This option is only available for Copy to S3.
21.3.4 Changing the S3 Retention Rules for a Policy
You can set different retention rules in each Policy.
To update the S3 retention rules for a policy:
In the Policies column, select the target policy.
Select the Lifecycle Management tab.
Update the Keep backups in S3 for at least lists for time and generations, as described in section 21.3, and select Save.
21.4 The Export RDS to S3 Policy
It is strongly advised that before deleting any original snapshots you perform a test recovery and verification of the recovered data/schema.
Exporting RDS databases to S3 is currently not supported by AWS for Osaka and GOV regions.
Default encryption keys for RDS export tasks are not supported.
RDS Export to S3 does not currently support shared CMK encryption keys.
Currently, only MySQL and PostgreSQL databases are supported for exporting RDS to S3.
RDS Export to S3 is supported only for databases residing in the same region as the S3 bucket.
AWS export to Parquet format may alter some data, such as date-time values.
AWS does not support RDS export with stored procedure triggers.
Magnetic storage type export is not supported.
21.4.1 Configuring an AWS Export Role
1. In the AWS IAM Management Console, select Roles and then select Create role.
2. For the type of trusted entity, select AWS service.
3. In the Create role section, confirm the type of trusted entity: AWS service.
4. In the Choose a use case section, select RDS.
5. To add the role to the RDS database, in the Select your use case section, select RDS - Add Role to Database.
6. In the Review screen, enter a name for the role and select Create policy.
7. Create a policy and add the following permissions to the JSON:
8. After saving the role, in the Trust relationships tab:
a. Select Edit trust relationship.
b. Edit the Trust Relationship.
If there are multiple trust relationships, the code must be exactly as follows or the role will not appear in the Export Role list.
9. To the CPM role policy, add the following required permissions under Sid "CopyToS3":
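The exact JSON is provided in the N2WS documentation. As a reference point only, AWS's documented RDS export setup uses a role trusted by the 'export.rds.amazonaws.com' service, with bucket permissions along these lines — the bucket name is a placeholder and the statement is a sketch; confirm the exact permissions against the N2WS guide:

```json
{
  "Sid": "CopyToS3",
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:ListBucket",
    "s3:DeleteObject",
    "s3:GetBucketLocation"
  ],
  "Resource": [
    "arn:aws:s3:::your-export-bucket",
    "arn:aws:s3:::your-export-bucket/*"
  ]
}
```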
21.4.2 Configuring RDS Export in a Policy
1. Create a regular S3 policy as described in section 21.3.1, and select the RDS Database as the Backup Target.
The RDS Database security group must allow the default ports, or any non-default port the database is using, in the inbound rules.
Connection parameters are required and must be valid for backup. If not specified, the database will not be copied to S3.
2. In the Lifecycle Management tab:
a. Turn on the Backup to S3 toggle.
b. In the Storage settings section, select the Target repository.
c. In the Copy RDS to S3 section, select Enable RDS Copy.
d. In the Export Role list, select the AWS export role that you created.
e. In the Export KMS Key list, select an export KMS encryption key for the role.
The custom ARN KMS key must be on the same AWS account and region.
f. Select Save.
Only roles that include the export RDS Trusted Service, as created in section 21.4.1, are shown in the Export Role list.
3. In the Backup Targets tab, select the RDS Database, and then select Configure.
The policy can be saved without the complete configuration, but the copy will fail if the configuration is not completed before the policy runs.
If the target is added using a tag scan, the User name, Password, and Worker Configuration must be added manually afterward.
If the configuration is left blank, the target will not be copied, and a warning will appear in the backup log.
The username and password configured are for read-only access.
4. In the Policy RDS Copy to S3 Configuration screen, enter the following:
a. In the Database Credentials section, enter the database User name and Password.
b. Complete the Worker Configuration section.
If the database is private, you must choose a VPC and subnet that will allow the worker to connect to the database.
c. Select Apply.
21.4.3 Recovering RDS from S3
When recovering RDS to a different subnet group or VPC, verify that the source AZ also exists in the recovery target. If not, the recovery will fail with an invalid zone exception.
When recovering RDS from a native snapshot, using a different VPC and a subnet group that does not have a subnet in the target AZ, the recovery will also fail.
To recover an RDS database from S3:
In the Backup Monitor, select a backup and then select Recover.
In the Recover screen, select a database and then select Recover.
In the Basic Options tab, modify default values as necessary. Take into consideration any identified issues such as changing AZ, subnet, or VPC.
Select the Worker Configuration tab within the Recover screen.
Modify Worker values as necessary, making sure that the VPC, Security Group, and VPC Subnet values exist in the recovery target.
Select Recover RDS Database.
Follow the recovery process in the Recovery Monitor.
21.5 The Glacier Archive
21.5.1 Archiving Snapshots to S3 Glacier
Amazon S3 Glacier and S3 Glacier Deep Archive provide comprehensive security and compliance capabilities that can help meet regulatory requirements, as well as durable and extremely low-cost data archiving and long-term backup.
N2WS allows customers to use the low-cost Amazon Glacier cloud storage service for data with longer retrieval times, moving infrequently accessed data to archival storage to reduce storage costs.
Use Amazon S3 if you need low latency or frequent access to your data.
Use Amazon S3 Glacier if low storage cost is paramount and you do not require millisecond access to your data.
21.5.2 Amazon Glacier Pricing
Following are some of the highlights of the Amazon pricing for Glacier:
Amazon charges per gigabyte (GB) of data stored per month on Glacier.
Objects that are archived to S3 Glacier and S3 Glacier Deep Archive have a minimum of 90 days and 180 days of storage, respectively.
Objects deleted before 90 days and 180 days incur a pro-rated charge equal to the storage charge for the remaining days.
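The pro-rated charge rule works out as in this small sketch. The prices here are illustrative inputs, not actual AWS rates:

```python
def early_deletion_charge(gb_stored: float, days_kept: int,
                          price_per_gb_month: float, minimum_days: int) -> float:
    """Pro-rated charge for deleting an archived object before its minimum
    storage duration (90 days for Glacier, 180 for Deep Archive).

    The charge equals the storage charge for the remaining days of the
    minimum duration, using a 30-day month for illustration.
    """
    remaining = max(0, minimum_days - days_kept)
    return gb_stored * price_per_gb_month * remaining / 30.0
```

For example, deleting 100 GB from Glacier after 30 days of a 90-day minimum, at an illustrative $0.004/GB-month, incurs a charge for the remaining 60 days: 100 × 0.004 × 60 / 30 = $0.80.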
For more information about S3 Glacier pricing, refer to sections ‘S3 Intelligent – Tiering’ / ‘S3 Standard-Infrequent Access’ / ‘S3 One Zone - Infrequent Access’ / ’S3 Glacier’ / ’S3 Glacier Deep Archive’ at https://aws.amazon.com/s3/pricing/
21.5.3 Configuring a Policy to Archive to S3 Glacier
To configure archiving S3 backups to Glacier:
From the left panel, in the Policies tab, select a Policy and then select
Select the Lifecycle Management (Snapshot / S3 / Glacier) tab. See section 21.3.
Follow the instructions for Backup to S3. See section 21.3.1.
Turn on the Archive to Glacier toggle.
Complete the following parameters:
Move one expired S3 backup to Glacier every X period – Select the time interval between archived backups. Use this option to reduce the number of backups as they are moved to long-term storage (archived). If an S3 backup has reached its expiration (as defined by the Retention rules) and the interval between its creation time and that of the most recently archived backup is below the specified Interval period, the backup will be deleted from the repository and not archived.
Keep in Glacier for X period – Select the duration of the archive in Glacier.
Select the Archive Storage class:
Glacier - Designed for archival data that will be rarely, if ever, accessed.
Deep Archive - Solution for storing archive data that will be accessed only in rare circumstances.
The duration is measured from the creation of the original EBS snapshot, not the time of archiving.
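The archive-interval rule above can be sketched as a decision function. This is an illustrative model of the behavior described, not N2WS's actual logic:

```python
from datetime import datetime, timedelta
from typing import Optional

def archive_or_delete(expired_created: datetime,
                      last_archived_created: Optional[datetime],
                      interval: timedelta) -> str:
    """Decide the fate of an expired S3 backup when archiving runs.

    Per the rule above: if the expired backup was created less than one
    interval after the most recently archived backup, it is deleted
    from the repository rather than archived.
    """
    if last_archived_created is None:
        return "archive"  # nothing archived yet; archive this one
    if expired_created - last_archived_created < interval:
        return "delete"   # too close to the last archived backup
    return "archive"
```

With a 30-day interval, an expired backup created 5 days after the last archived one is deleted; one created 60 days after it is archived.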
21.5.4 Recovering Snapshots from Archive
Archived snapshots cannot be recovered directly from Glacier. The data must first be copied to S3 (‘retrieved’) before it can be accessed.
Once retrieved, objects will remain in S3 for the period specified by the Days to keep option. If the same snapshot is recovered again during this period, retrieved objects will be re-used and will not need to be retrieved again. However, attempting to recover the same snapshot again while the first recovery is still in the ‘retrieve’ stage will fail. Wait for the retrieval of objects to complete before attempting to recover again.
The process of retrieving data from Archive to S3 is automatically and seamlessly managed by N2WS. However, to recover an archived snapshot, the user should specify the following parameters:
Retrieval tier (Expedited, Standard, or Bulk)
Days to keep
Duration and cost of Instance recovery are determined by the retrieval tier selected. Depending on the Retrieval option selected, the retrieve operation completes in:
Expedited - 1-5 minutes
Standard - 3-5 hours
Bulk - 5-12 hours
A typical instance backup that N2WS stores in Glacier is composed of many data objects and will probably take much longer than a few minutes.
To restore data from S3 Glacier:
Follow the steps for Recovering an S3 Backup. See section 21.3.2.
In the Backup Monitor, select a successful Glacier copy, and then select Recover.
In the Restore from drop-down list, select the Repository where the data is stored.
In the Restore to Region list, select the target region.
Select the resource to recover and then select Recover.
Review and update the Resource Parameters as needed for recovery.
In the Archive Retrieve tab, select a Retrieval tier (Bulk, Standard, or Expedited), Days to keep, and then select Recover. N2WS will copy the data from Glacier to S3 and keep it for the specified period.
File-level recovery from archived snapshots is not possible.
21.6 Monitoring Lifecycle Activities
After a policy with Backup to S3 starts, you can:
Follow its progress in the Status column of the Backup Monitor.
Abort the copy of snapshots to S3.
Stop S3 and Archive operations.
Delete S3 snapshots.
21.6.1 Viewing Status of Backups in S3 or Glacier
You can view the progress and status of S3 and archived backups in the Backup Monitor.
Select the Backup Monitor tab.
In the Lifecycle Status column, the real-time status of an S3 Copy is shown. Possible lifecycle statuses include:
Storing to S3 (n%)
Stored in S3
Not stored in S3 – Operation failed or was aborted by user.
Marked as archived – Some or all of the snapshots of the backup were not successfully moved to Archive storage, either because the user aborted the operation or due to an internal failure. However, the snapshots in the backup will be retained according to the Archive retention policy, regardless of their actual storage.
Deleted from S3/Archive – Snapshots were successfully deleted from either S3 or Archive. See section 21.5.4.
Marked for deletion – The backup was scheduled for deletion according to the retention policy and will be deleted shortly.
‘Marked for deletion’ snapshots can no longer be recovered.
21.6.2 Aborting a Copy to S3 ‘In Progress’
The Copy to S3 portion of a Policy backup occurs after the non-S3 backups have completed.
Aborting an S3 Copy does not stop the non-S3 backup portion of the policy from completing. Only the Copy to S3 portion is stopped.
To stop an S3 Copy in progress:
In the Backup Monitor, select the policy.
When the Lifecycle Status is ‘Storing to S3 ...’, select Abort Copy to S3 Snapshots.
21.6.3 Stopping an S3 Cleanup in Progress
If an S3 retention Cleanup is ‘In progress’, in the Policies tab, select the S3 policy and then select Stop S3 / Archive Operations to stop the Cleanup. See the information in section 21 for the reasons you might want to stop the S3 Cleanup.
Stopping S3 Cleanup does not stop the non-S3 cleanup portion of the policy from completing. Only the S3 cleanup portion is stopped.
Stopping S3 Cleanup of a policy containing several instances will stop the cleanup process for a policy as follows:
N2WS will perform the cleanup of the current instance according to its retention policy.
N2WS will terminate all S3 Cleanups for the remainder of the instances in the policy.
N2WS will set the session status to Aborted.
The N2WS user will receive an ‘S3 Cleanup of your policy aborted by user’ notification by email.
To stop an S3 Cleanup in progress:
Determine when the S3/Archiving is taking place by going to the Backup Monitor.
Select the policy and then select
When the log indicates the start of the Cleanup, select Stop S3 / Archive Operations.
21.6.4 Deleting Copy to S3 Snapshots in a Repository
When deleting Policies and Snapshots in the Policies tab or Account and Data in the Accounts tab, S3 copies are also deleted.
To delete only the snapshots copied to a specific S3 repository: