Initiates a workflow based on time. In the Time trigger type, you answer the question 'How often should this run?' by defining either a recurrent time schedule, a fixed time schedule, or a cron expression.
Recurrent: Triggers the workflow at periodic intervals, every 'x' hours/days. This trigger is suitable for workflows that invoke Lambda functions, Ops tasks, or monitoring tasks. For example:
Trigger the workflow every hour
Trigger the workflow every day
Trigger the workflow every week
For instance, you can set the trigger for generating an Instance Utilization Report to run every 5 days (recurring).
Schedule: Triggers the workflow on a specific day of the week, at a specific time. If you want a workflow to start every day at a specific time, select all the days of the week. This trigger is suitable for CRON jobs, Ops tasks, DR tasks, periodic auditing tasks, etc.
For example, if you select Monday from the Day drop-down menu and 9:10 am from the Time drop-down menu, your workflow will start every Monday at 9:10 am.
Cron: Triggers the workflow according to a cron expression, giving you the flexibility to fine-tune the schedule to your requirements. This trigger is suitable when you need detailed schedule control that the normal Schedule trigger cannot provide.
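Cron expressions conventionally use five fields: minute, hour, day of month, month, and day of week. As a minimal sketch (the expressions below use the standard cron syntax, not a product-specific format):

```python
# Standard cron field layout: minute hour day-of-month month day-of-week
# Each expression is annotated with the schedule it encodes.
CRON_EXAMPLES = {
    "0 * * * *":    "at minute 0 of every hour",
    "30 9 * * 1":   "every Monday at 09:30",
    "0 0 1 * *":    "at midnight on the 1st of every month",
    "*/15 * * * *": "every 15 minutes",
}

for expr, meaning in CRON_EXAMPLES.items():
    print(f"{expr:<14} -> {meaning}")
```

Schedules like "every 15 minutes" or "the 1st of every month" are examples of what a cron expression can capture that the simpler Recurrent and Schedule triggers cannot.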
This trigger helps initiate a workflow by using an HTTPS endpoint URL. Each workflow has a unique HTTPS endpoint that can be used to trigger the workflow by calling it manually, using a webhook, or using a script. This means a workflow can be triggered from an external application, for example:
When code is committed to a Bitbucket repository
When a JIRA ticket is filed
When a monitoring metric comes in from a monitoring system, such as ServiceNow
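As a sketch, an external script could call the workflow's HTTPS endpoint like this. The URL and payload fields below are placeholders, not a real endpoint or documented payload schema:

```python
import json
import urllib.request

# Placeholder endpoint URL; in practice, copy the unique HTTPS endpoint
# from the workflow's HTTP trigger node.
ENDPOINT = "https://example.com/workflow/trigger/abc123"

def build_trigger_request(payload: dict) -> urllib.request.Request:
    """Build a POST request carrying the trigger payload as JSON."""
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_trigger_request({"source": "ci", "commit": "deadbeef"})
# urllib.request.urlopen(req) would send it; omitted here because the
# URL above is a placeholder.
```

A CI hook, a webhook relay, or a cron job on another machine could all use the same call to start the workflow.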
'Advanced Settings' in the HTTP Trigger node lets you configure an additional setting. When you click on it, you can see the checkbox 'Override Global Variables With Http Input'. If you check this option, any data or payload received through the HTTP trigger is put into the Global Variables. (Global Variables are universally defined variables that can be used across any node in a workflow; learn more here.)
Another option you will see is the 'Input Transformer'. This lets you define exactly which parameters should be picked up from the HTTP trigger, and how. For example, if you're triggering a workflow from a JIRA ticket, we've pre-built a "jira-ticket-desc-parser" that picks up only the description from the JIRA ticket and no other parameters. Similarly, you can define such an input transformer using custom logic.
An Alarm trigger can trigger a workflow in response to a CloudWatch alarm in your AWS cloud. The Alarm trigger executes the workflow when a specified CloudWatch alarm changes its state to 'ALARM'. You can select a specific SNS topic configuration and the specific alarm to use as the trigger. For example, when a CPUUtil alarm goes off, you can reboot the instance.
The Alarm trigger lets you configure the type of alarms to react to. The following are the parameters:
Metric Namespace: The service for which the alarm is raised. Example: EC2, EBS, etc.
Metric Name: The metric associated with the alarm. Example: CPUUtilization, DiskReadOps, etc.
Statistic (Optional): The statistic associated with the alarm. Example: Average, SampleCount, etc.
Dimension (Optional): The dimension of the alarm. Set this field to listen to a particular alarm. Multiple dimensions can be added to narrow the match further. Example: InstanceId, InstanceType, etc.
The Alarm trigger node can capture the data associated with the alert so the workflow can use it. You need to enable the HTTP Trigger for this. Once enabled, the trigger payload will capture the alarm information, and this data can be used in the workflow to target the resource the alarm is associated with.
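As a sketch, a workflow step could pull the target instance out of the captured alarm payload like this. The message shape below follows the usual CloudWatch-alarm-delivered-via-SNS format, but treat the exact keys as an assumption:

```python
import json

def instance_from_alarm(sns_message):
    """Extract the InstanceId dimension from a CloudWatch alarm message
    delivered through SNS. Returns None if no such dimension is present."""
    alarm = json.loads(sns_message)
    for dim in alarm.get("Trigger", {}).get("Dimensions", []):
        if dim.get("name") == "InstanceId":
            return dim.get("value")
    return None

# Trimmed-down sample alarm message (field names assumed, not authoritative).
sample = json.dumps({
    "AlarmName": "cpu-high",
    "NewStateValue": "ALARM",
    "Trigger": {
        "MetricName": "CPUUtilization",
        "Namespace": "AWS/EC2",
        "Dimensions": [{"name": "InstanceId", "value": "i-0123456789abcdef0"}],
    },
})
print(instance_from_alarm(sample))  # prints the instance the alarm fired for
```

With the instance ID in hand, a downstream node can reboot, stop, or tag exactly the resource that raised the alarm.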
The Event trigger can trigger a workflow in response to an event in your AWS cloud. These AWS events can be generated when you perform some action in the AWS cloud (CloudTrail events), such as creating an instance or attaching a policy to an IAM role, or as a result of a change in a resource, such as the state of an EC2 instance changing from 'stopping' to 'stopped'.
Configuration of the Event trigger must be carefully thought out, because it is possible to create an event-workflow loop that executes infinitely. For example, if you set a Lambda event as a trigger and, in the same workflow, perform an action on a Lambda function that generates the same event, you could end up in an event-workflow loop. This should be avoided at all costs, as it can increase your expenditure considerably.
The Events trigger lets you configure the events to react to. The following are the parameters:
Service: The service associated with the event. Example: EC2, S3, etc.
Resource: The kind of event to listen to. Example: Api, State Change, etc.
Event Type: The type of event to listen to. Example: Aws Event, CloudTrail Event, etc.
Events: The specific events to listen to. Example: createSnapshot, stopped, etc.
The JSON pattern on the right side of the Event trigger customization pane lets you enter resource-specific configuration. This is usually used in edge cases where you need to listen to events related to a particular resource. For example, to listen to events related to a particular EBS snapshot, enter the ID of that snapshot in the pattern.
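For example, a pattern scoped to a single EBS snapshot might look like the following sketch. The field names follow the usual AWS event-pattern style, and the snapshot ARN is a placeholder, not a real resource:

```python
import json

# Hypothetical event pattern narrowing the trigger to one EBS snapshot.
# The "resources" ARN is a placeholder; a real pattern would use the
# ARN of your own snapshot.
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EBS Snapshot Notice"],
    "resources": ["arn:aws:ec2:us-east-1::snapshot/snap-0123456789abcdef0"],
}
print(json.dumps(pattern, indent=2))
```

Without the "resources" entry, the trigger would fire on snapshot events for every volume; adding it restricts the workflow to the one resource you care about.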
This is a simple manual trigger: the workflow is executed only when you manually trigger it using 'run now'. For example, infrastructure deployment workflows can be triggered manually whenever needed.