Enabling Amazon S3 integration allows GroundRunners to archive their Chain Command logs in an S3 bucket. Chain Command logs have always been visible via OneCloud Integration Studio; with this integration they can now optionally be stored for additional support diagnostics and audit capabilities.
A Quick Note
Depending on the total number of GroundRunners deployed, it can take up to one hour for the settings to propagate. In addition, all necessary firewall ports must be opened and Amazon S3 must be whitelisted.
To get started, you will need to be an Administrator on the OneCloud platform. After logging in, navigate to the Integrations menu under the Admin portal. The integration configuration applies to all GroundRunners configured for your tenant, and all future GroundRunners will adopt the settings at activation.
Navigation Steps: Applications -> Admin -> Integrations -> Amazon S3
Obtain each of the following configuration properties for an authorized user of the Amazon S3 bucket.
- S3 Region: Geographical data center region of your S3 bucket. Example: us-east-1
- S3 Bucket: S3 bucket name. Example: onecloud-runner-logs
- Path to Upload: Data folder path below the S3 bucket level. Example: "/" to place in the root of the bucket.
- S3 Access Key: Access key assigned to the S3 user.
- S3 Access Secret: Access secret assigned to the S3 user.
- S3 Endpoint: Optional URL to access the Amazon S3 bucket if different than the default.
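To illustrate how these properties fit together, the sketch below (plain Python with placeholder values; the helper name and the default-endpoint derivation are assumptions for illustration, not OneCloud code) shows the default regional endpoint AWS uses when no custom S3 Endpoint is supplied:

```python
def s3_endpoint(region, custom_endpoint=None):
    """Return the endpoint URL for the bucket's region.

    When no custom S3 Endpoint is configured, AWS's default regional
    endpoint of the form https://s3.<region>.amazonaws.com applies.
    """
    if custom_endpoint:
        return custom_endpoint
    return f"https://s3.{region}.amazonaws.com"

# Example configuration values (placeholders only; never hard-code real keys)
config = {
    "region": "us-east-1",
    "bucket": "onecloud-runner-logs",
    "path": "/",                       # root of the bucket
    "access_key": "AKIA-PLACEHOLDER",  # illustrative placeholder
    "access_secret": "PLACEHOLDER",    # illustrative placeholder
    "endpoint": s3_endpoint("us-east-1"),
}
```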
A Quick Note
The S3 Bucket and Path to Upload must exist prior to executing the first OneCloud Chain. The configuration process does not actively test for compliance at this time.
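Because the configuration is not validated at save time, a simple client-side sanity check of the Path to Upload value can catch mistakes before the first Chain runs. The function below is an illustrative sketch, not part of OneCloud:

```python
def validate_upload_path(path):
    """Sanity-check a 'Path to Upload' value before saving the
    integration settings, since OneCloud does not validate it.

    Returns a normalized path, or raises ValueError.
    """
    if not path.startswith("/"):
        raise ValueError("Path to Upload must start with '/'")
    # Collapse duplicate slashes and drop a trailing slash (except for root).
    return "/" + "/".join(part for part in path.split("/") if part)
```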
Once the configuration has been updated, saved, and automatically distributed to the GroundRunners, your Chain Command logs will automatically start uploading to your Amazon S3 bucket. The log files sent to the bucket consist of an output file and an error file. The naming convention of the files is: ChainExecutionId_CommandExecutorId_CommandResultId_Type_Log.log
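The naming convention above can be unpacked programmatically when processing archived logs. The sketch below assumes the fields are underscore-separated and that the IDs themselves contain no underscores (both assumptions for illustration):

```python
def parse_log_filename(name):
    """Split a Chain Command log file name of the form
    ChainExecutionId_CommandExecutorId_CommandResultId_Type_Log.log
    into its components (field names follow the convention above).
    Assumes the individual IDs contain no underscores.
    """
    stem = name[:-len(".log")] if name.endswith(".log") else name
    chain_id, executor_id, result_id, log_type, _ = stem.split("_")
    return {
        "chain_execution_id": chain_id,
        "command_executor_id": executor_id,
        "command_result_id": result_id,
        "type": log_type,
    }
```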
The main thing to note is that each Chain Command will post two files to your Amazon S3 bucket. Using another OneCloud Chain or the Amazon S3 interface, we can see that the following files have been posted:
Each uploaded log file contains slightly different information. If your Chain Command produced no errors, the two files may be identical; they differ only when an issue occurs.
- The purpose of the output file is to capture any output the process may have generated. This output is normally seen on the Logs tab of the Chain Command after it has processed.
- The purpose of the error file is to capture any output the process may have generated during an error. This information is not normally seen and can contain detailed debugging information needed for error resolution or for OneCloud Support.
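When reviewing a bucket with many uploads, it can help to group each command's output and error files by their shared ChainExecutionId prefix. A minimal sketch, assuming the naming convention described above:

```python
from collections import defaultdict

def group_logs_by_chain(filenames):
    """Group uploaded Chain Command log files by their leading
    ChainExecutionId, so each command's output and error files can be
    inspected together. Assumes names follow the convention
    ChainExecutionId_CommandExecutorId_CommandResultId_Type_Log.log.
    """
    groups = defaultdict(list)
    for name in filenames:
        chain_id = name.split("_", 1)[0]
        groups[chain_id].append(name)
    return dict(groups)
```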
| Sample Output File | Sample Error File |
A Quick Note
To help control the size of the Amazon S3 bucket, both in number of files and in storage used, consider implementing Amazon S3 Lifecycle rules to apply retention policies to the stored files.
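For example, a lifecycle rule such as the following (the rule ID and the 90-day retention period are illustrative) could be applied to the bucket via the AWS console or `aws s3api put-bucket-lifecycle-configuration`:

```json
{
  "Rules": [
    {
      "ID": "expire-runner-logs",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Expiration": { "Days": 90 }
    }
  ]
}
```

The empty `Prefix` applies the rule to every object in the bucket; scope it to your configured Path to Upload if the bucket also holds other data.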