Adding AWS Timestream to Prometheus

In a continuation of my existing build (Prometheus in Fargate), I have been playing around with remote storage options and for the purposes of this document, I am using AWS Timestream.

Prometheus out of the box will store up to 15 days of metrics, but these are lost should the instance/server be terminated. With this in mind, it is worth looking into remote storage options for metric data.

There are plenty of options available, but for the sake of my sanity, and the many rewrites I have already carried out on this document, I am not going into detail on any of them here; I will simply show how I built and updated my current Prometheus stack to use the Amazon option.

The Amazon Timestream option requires the use of an adapter, which is configured in Prometheus as a remote_write endpoint to store data. The adapter is write-only, so as I am using Grafana for my dashboards I will also need to install a separate plugin/adapter that allows Grafana to read from my Timestream DB.

Creating the ECR repository

Before starting, you need to add an additional repository to ECR, and for this I have kept it simple and descriptive by calling it timestream-adapter.
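If you prefer the AWS CLI over the console, the repository can be created with a single command (the region below is a placeholder; substitute your own):

```shell
# Create the ECR repository that will hold the adapter image.
# <region> is a placeholder; replace it with your AWS region.
aws ecr create-repository --repository-name timestream-adapter --region <region>
```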

This is where the build will be stored and called by the task_definition.json file in the Prometheus module.

Cloning and building the Timestream adapter

Within the existing build's folder structure, create a new folder called timestream.

Navigate to this folder and clone the adapter created by Dennis Pattmann:
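At the time of writing the adapter lives on GitHub under Dennis's account; if the URL below has moved, search GitHub for prometheus-timestream-adapter:

```shell
git clone https://github.com/dpattmann/prometheus-timestream-adapter.git
cd prometheus-timestream-adapter
```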

Once cloned, you need to amend two of the files to suit your needs: main.go and the Dockerfile. I had a few issues when first building this and reached out to Dennis to discuss them. The Dockerfile you clone is actually incorrect: although it builds successfully, I received x509 certificate errors in the CloudWatch logs at runtime.

The Dockerfile when complete will look like:
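As a sketch of the corrected file (the exact Go version and paths may differ in your clone), the important fix for the x509 errors is making sure CA certificates end up in the final image:

```dockerfile
# Build stage: compile a static binary so it runs in a minimal base image.
FROM golang:1.17 AS builder
WORKDIR /build
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o prometheus-timestream-adapter .

# Final stage: alpine plus CA certificates (missing certificates were
# the cause of the x509 errors mentioned above).
FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=builder /build/prometheus-timestream-adapter /usr/local/bin/prometheus-timestream-adapter
EXPOSE 9201
ENTRYPOINT ["/usr/local/bin/prometheus-timestream-adapter"]
```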

and main.go has a few configurable values under func init().

Once these have been updated you are ready to build.
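A straightforward docker build from inside the adapter folder produces the image (the tag name is my choice, matching the ECR repository created earlier):

```shell
# Run from the folder containing the amended Dockerfile.
docker build -t timestream-adapter .
```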

Once built you can push this to the waiting repository in AWS.

Obtain an authentication token and authenticate Docker:
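Assuming AWS CLI v2, the standard ECR login flow looks like this (account ID and region are placeholders):

```shell
# Replace <aws-account-id> and <region> with your own values.
aws ecr get-login-password --region <region> | \
  docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.<region>.amazonaws.com
```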

Tag the image appropriately:
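Tag the local image with the full ECR repository URI (same placeholders as before):

```shell
docker tag timestream-adapter:latest <aws-account-id>.dkr.ecr.<region>.amazonaws.com/timestream-adapter:latest
```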

Push the build to ECR:
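Using the same placeholder repository URI:

```shell
docker push <aws-account-id>.dkr.ecr.<region>.amazonaws.com/timestream-adapter:latest
```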

Amazon Timestream

At the time of writing, the current version of Terraform does not allow me to set up a Timestream database and table, so this has been completed manually using the AWS console.

Log in to the console and choose Amazon Timestream from the list of AWS services under Database (search if easier).

Create a database called prometheus by clicking the Create database button in the top right.

Once created click on the database called prometheus, click the tables tab and then click create table:

I called my table prometheus-table, configuring the Memory store retention to 12 hours and the Magnetic store retention to 1 year.
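For reference, the same database and table can also be created from the AWS CLI, which is handy if you want to script this step later:

```shell
aws timestream-write create-database --database-name prometheus

# 12 hours in the memory store, 365 days in the magnetic store.
aws timestream-write create-table \
  --database-name prometheus \
  --table-name prometheus-table \
  --retention-properties "MemoryStoreRetentionPeriodInHours=12,MagneticStoreRetentionPeriodInDays=365"
```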

Updating IAM permissions

Pretty much the last thing I did was add permissions allowing the ECS task to talk to Timestream. I wanted to follow the CloudWatch logs first to confirm the adapter was attempting writes, so I was actually relieved to see the 'Access denied' reference before updating the permissions.

A reminder of the folder structure so far (including previous build):

For the purposes of this document, though, I am adding it here. Update the ECS module file to include the Timestream policy. In this build, I am using the AWS managed policy AmazonTimestreamFullAccess, but I would suggest practising the 'least privilege' approach for your own builds.
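A minimal sketch of the attachment, assuming an existing task role resource named ecs_task_role (adjust the reference to match your module):

```hcl
# Attach the AWS managed Timestream policy to the ECS task role.
# "ecs_task_role" is a placeholder; use your module's actual role resource.
resource "aws_iam_role_policy_attachment" "timestream" {
  role       = aws_iam_role.ecs_task_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonTimestreamFullAccess"
}
```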


I added the above under the last entry for role_policy_attachment.

Adding an additional CloudWatch log group

Update the module with a new entry for aws_cloudwatch_log_group. This will be used by the task definition later on in this document.
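Something along these lines (the log group name is my assumption; keep it consistent with the awslogs-group referenced by the task definition):

```hcl
# Log group for the timestream-adapter container. The name is an
# assumption and must match the task definition's awslogs-group.
resource "aws_cloudwatch_log_group" "timestream_adapter" {
  name              = "/ecs/timestream-adapter"
  retention_in_days = 7
}
```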


Updating the Task Definition

Navigate to modules/app/templates and open up the task_definition.json file. For this, we are going to add a new entry after the coveo/ecs-exporter section.
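As a sketch of the new container entry (the image URI, log group, and region are placeholders to adjust for your environment):

```json
{
  "name": "timestream-adapter",
  "image": "<aws-account-id>.dkr.ecr.<region>.amazonaws.com/timestream-adapter:latest",
  "essential": true,
  "portMappings": [
    { "containerPort": 9201 }
  ],
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/timestream-adapter",
      "awslogs-region": "<region>",
      "awslogs-stream-prefix": "timestream"
    }
  }
}
```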

Don’t forget the comma separating the new entry from the previous one.

The new entry will more than likely change in the future as I convert to variables etc. For the purposes of this build, we can manage with the above.

Target Groups and Listener update

The last thing we need to do with the infrastructure is update the target group configuration to include port 9201. Navigate to modules/app and open the relevant file.

Add the following after “aws_alb_target_group” “alertmanager”:
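A sketch of the new target group, mirroring the shape of the existing ones (the variable names are assumptions from my module layout):

```hcl
resource "aws_alb_target_group" "timestream_adapter" {
  name        = "timestream-adapter"
  port        = 9201
  protocol    = "HTTP"
  vpc_id      = var.vpc_id
  target_type = "ip"
}
```

If you expose the adapter through the ALB you will also need a matching listener rule, following the same pattern as the existing alertmanager one.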

Adding remote_write

In order for Prometheus to use the timestream-adapter you need to add a reference to it in the prometheus.yml file and rebuild the image. Navigate to prometheus/docker/prometheus and open prometheus.yml.

Add the following:
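Because the adapter runs in the same Fargate task, Prometheus can reach it over localhost; the /write path is my assumption for the adapter's receiving endpoint, so check your clone's README if it differs:

```yaml
remote_write:
  - url: "http://localhost:9201/write"
```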

Rebuild the image as before:
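Assuming the same image name used in the previous build, run from prometheus/docker/prometheus:

```shell
docker build -t prometheus .
```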

Tag the image:
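Using placeholder account/region values and the prometheus repository from the earlier build:

```shell
docker tag prometheus:latest <aws-account-id>.dkr.ecr.<region>.amazonaws.com/prometheus:latest
```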

Push the image to ECR:
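And with the same placeholder URI:

```shell
docker push <aws-account-id>.dkr.ecr.<region>.amazonaws.com/prometheus:latest
```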

And finally

At this point, you should be ready to execute a terraform apply. In my environment, I stopped the prometheus task in order to force ECS to start a new task with the changes, and after a few minutes I could see not only logs for the timestream task but also data in the newly created Timestream DB.

Having gone over this document, I believe it captures all the steps I took in order to build out the Timestream DB in AWS and utilize the adapter written by Dennis in Go. If, however, you do notice something I have missed or want to give me any feedback, please leave me a comment.