Archiving Compressed S3 Data into S3 Glacier

I’ll expand on the workflow and what each node does. For this process, we employ only two AWS services: S3 and Data Pipeline.

Step 1: Triggering the workflow

The trigger node determines which action activates the workflow. In this case, it’s a trigger from an external application, such as a request from a web page or a mobile app.
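
As a hedged illustration, the external application could kick the workflow off with a plain HTTP request; the webhook URL and payload fields below are hypothetical placeholders, not part of the original workflow.

```python
# Hypothetical trigger: an external app POSTs to the workflow's
# webhook URL. The URL and payload fields are assumptions.
import requests

requests.post(
    "https://workflows.example.com/hooks/s3-glacier-archive",  # hypothetical URL
    json={"sourceBucket": "my-source-bucket", "targetBucket": "my-archive-bucket"},
    timeout=10,
)
```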

Step 2: Custom Code to collate the S3 Data

This node contains custom code that collects the S3 data from your bucket and prepares it for transfer. The sourceBucket is the bucket the data is taken from, and the targetBucket is the bucket the data will be moved to.
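
A minimal sketch of what this custom code might look like in Python with boto3; the bucket names are placeholders, and the node’s actual code may differ.

```python
import boto3

s3 = boto3.client("s3")

def collect_s3_keys(source_bucket):
    """List every object key in the source bucket, page by page."""
    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=source_bucket):
        for obj in page.get("Contents", []):
            keys.append(obj["Key"])
    return keys

source_bucket = "my-source-bucket"   # hypothetical: where the data is taken from
target_bucket = "my-archive-bucket"  # hypothetical: where the data will be moved
keys = collect_s3_keys(source_bucket)
print(f"Collected {len(keys)} objects from {source_bucket} for {target_bucket}")
```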

Step 3: Create DataPipeline

This action node creates the data pipeline where the S3 data will be compressed.
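
In boto3 the equivalent call is create_pipeline; the name and uniqueId below are hypothetical placeholders, so treat this as a sketch rather than the node’s exact code.

```python
import boto3

dp = boto3.client("datapipeline")

# uniqueId is an idempotency token: retrying with the same value
# returns the same pipeline instead of creating a duplicate.
response = dp.create_pipeline(
    name="s3-glacier-archive",          # hypothetical pipeline name
    uniqueId="s3-glacier-archive-001",  # hypothetical idempotency token
)
pipeline_id = response["pipelineId"]
print("Created pipeline:", pipeline_id)
```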

Step 4: Custom Code to push the S3 Data into the pipeline

This node runs custom code that hands the S3 data collected in Step 2 over to the newly created pipeline.
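
One hedged way this hand-off could work, building on the sketch from Step 2: stage a JSON manifest of the collected keys in S3 where the pipeline’s activity can read it. The manifest key and structure are assumptions, not the workflow’s actual mechanism.

```python
import json
import boto3

s3 = boto3.client("s3")

source_bucket = "my-source-bucket"  # hypothetical
keys = ["logs/2023/01/app.log"]     # hypothetical output of Step 2

# Stage a manifest that the pipeline's activity can pick up.
s3.put_object(
    Bucket=source_bucket,
    Key="manifests/archive-manifest.json",  # hypothetical manifest key
    Body=json.dumps({"sourceBucket": source_bucket, "keys": keys}).encode("utf-8"),
)
```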

Step 5: Pipeline Definition

This node configures the compression of the S3 data moved into the pipeline and ensures its transfer to S3 Glacier.
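
Below is a hedged sketch of such a definition via boto3’s put_pipeline_definition. The object graph is illustrative rather than the node’s exact configuration: a ShellCommandActivity running on a short-lived EC2 resource gzips the data and writes it to the target bucket under the GLACIER storage class. The ids, roles, instance type, bucket names, and shell command are all assumptions.

```python
import boto3

dp = boto3.client("datapipeline")
pipeline_id = "df-EXAMPLE1234567"  # returned by create_pipeline in Step 3

pipeline_objects = [
    {   # Defaults shared by every object in the pipeline.
        "id": "Default",
        "name": "Default",
        "fields": [
            {"key": "scheduleType", "stringValue": "ondemand"},
            {"key": "failureAndRerunMode", "stringValue": "CASCADE"},
            {"key": "role", "stringValue": "DataPipelineDefaultRole"},
            {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
        ],
    },
    {   # Short-lived EC2 instance that runs the archiving command.
        "id": "ArchiveResource",
        "name": "ArchiveResource",
        "fields": [
            {"key": "type", "stringValue": "Ec2Resource"},
            {"key": "instanceType", "stringValue": "t2.micro"},
            {"key": "terminateAfter", "stringValue": "30 Minutes"},
        ],
    },
    {   # Compress the data and store it directly in the Glacier storage class.
        "id": "CompressAndArchive",
        "name": "CompressAndArchive",
        "fields": [
            {"key": "type", "stringValue": "ShellCommandActivity"},
            {"key": "runsOn", "refValue": "ArchiveResource"},
            {"key": "command", "stringValue": (
                "aws s3 sync s3://my-source-bucket /tmp/archive && "
                "gzip -r /tmp/archive && "
                "aws s3 sync /tmp/archive s3://my-archive-bucket "
                "--storage-class GLACIER"
            )},
        ],
    },
]

dp.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=pipeline_objects)
```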

Step 6: Pipeline Activation

This action node activates the pipeline, kicking off the compression and transfer run defined in Step 5.
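
With boto3, activation is a single call, using the pipeline id from Step 3 (the id below is a placeholder).

```python
import boto3

dp = boto3.client("datapipeline")
pipeline_id = "df-EXAMPLE1234567"  # hypothetical id from Step 3

# Start the on-demand run defined in Step 5.
dp.activate_pipeline(pipelineId=pipeline_id)
```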

Step 7: Delay

A 600-second delay is set to give the data transfer time to complete before the next node is activated.
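
A sketch of the delay in Python: the fixed sleep mirrors this node, while polling the pipeline state (shown as an alternative) avoids guessing how long the transfer takes. The pipeline id is again a placeholder.

```python
import time
import boto3

dp = boto3.client("datapipeline")
pipeline_id = "df-EXAMPLE1234567"  # hypothetical id from Step 3

time.sleep(600)  # fixed 600-second delay, as configured in this node

# Alternative: poll until Data Pipeline reports the run as FINISHED.
def pipeline_state(pid):
    desc = dp.describe_pipelines(pipelineIds=[pid])["pipelineDescriptionList"][0]
    return next(f["stringValue"] for f in desc["fields"] if f["key"] == "@pipelineState")

while pipeline_state(pipeline_id) != "FINISHED":
    time.sleep(30)
```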

Step 8: Delete DataPipeline

This action node deletes the data pipeline after the compression and archiving have completed successfully.
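
The teardown is again a single boto3 call. Note that delete_pipeline is permanent and cancels any activities still being processed, which is why the delay in Step 7 matters.

```python
import boto3

dp = boto3.client("datapipeline")
pipeline_id = "df-EXAMPLE1234567"  # hypothetical id from Step 3

# Permanently remove the pipeline, its definition, and its run history.
dp.delete_pipeline(pipelineId=pipeline_id)
```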