The V3IO TSDB CLI (tsdbctl)
Overview
The V3IO TSDB includes the V3IO TSDB command-line interface ("the TSDB CLI"), which enables users to easily create, update, query, and delete time-series databases (TSDBs), as demonstrated in this tutorial. Before you get started, read the setup and usage information in this section and review the TSDB software specifications and restrictions.
Setup
The TSDB CLI can be run locally on a platform application cluster or remotely from any computer with a network connection to the cluster.
The platform's web shell and Jupyter Notebook services include a compatible Linux version of the TSDB CLI, which is added to the environment's command path ($PATH) to simplify execution from anywhere in the shell.
For remote execution, download the CLI from the V3IO TSDB GitHub repository.
In the web shell and Jupyter terminal environments there's also a predefined tsdbctl alias to the native CLI that preconfigures the --server and --access-key flags for the running user.
- Version 3.6.1 of the platform is compatible with version 0.13 of the V3IO TSDB. Please consult Iguazio's support team before using another version of the CLI.
- When using a downloaded version of the CLI (namely for remote execution), it's recommended that you add the file or a symbolic link to it (such as tsdbctl) to the execution path on your machine ($PATH), as done in the platform command-line environments. For the purpose of this tutorial, it's assumed that tsdbctl is found in your path and is used to run the relevant version of the CLI.
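For example, the following commands (a minimal sketch that assumes you downloaded the Linux CLI binary to ~/Downloads/tsdbctl) make the downloaded file executable and link it into a directory that's already in your $PATH:
chmod +x ~/Downloads/tsdbctl
sudo ln -s ~/Downloads/tsdbctl /usr/local/bin/tsdbctl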
Reference
Use the CLI's embedded help for a detailed reference:
- Run the general help command to get information about all available commands:
  tsdbctl help
- Run tsdbctl help <command> or tsdbctl <command> -h to view the help reference for a specific command. For example, use either of the following variations to get help for the query command:
  tsdbctl help query
  tsdbctl query -h
Mandatory Command Configurations
All CLI commands demonstrated in this tutorial require that you configure the following flags.
This can be done either in the CLI command itself or in a configuration file.
As explained in the Setup section, when running the CLI locally from an on-cluster web shell or Jupyter terminal, you can use the tsdbctl alias, which preconfigures the --server and --access-key flags.
- User-authentication flags — one of the following alternatives:
  - For access-key authentication — -k|--access-key — a valid access key for logging into the configured web-APIs service. You can get the access key from the Access Keys window that's available from the dashboard user-profile menu, or by copying the value of the V3IO_ACCESS_KEY environment variable in a web-shell or Jupyter Notebook service.
    Note
    - The tsdbctl alias that's available in the platform's web shell and Jupyter terminal environments preconfigures the --access-key flag for the running user.
    - When running the native V3IO TSDB CLI locally — for example, from a Jupyter notebook, which doesn't have the tsdbctl alias — you can set the -k or --access-key flag to $V3IO_ACCESS_KEY.
  - For username-password authentication — -u|--username and -p|--password — a platform username and its matching password.
- -s|--server — the endpoint of your platform's web-APIs (web-gateway) service. The tsdbctl alias that's available in the platform's web shell and Jupyter terminal environments preconfigures this flag for the running user. If you're not using the alias — for example, if you're running the native TSDB CLI from a Jupyter notebook or remotely — set this flag to <web-APIs IP>:<web-APIs HTTP port>:
  - <web-APIs IP> — the IP address of the web-APIs service; for example, webapi.default-tenant.app.mycluster.iguazio.com. The IP address is stored in a V3IO_WEBAPI_SERVICE_HOST environment variable in the platform's web shells and Jupyter notebooks and terminals. You can also get this address from the web-APIs HTTPS URL: copy the HTTPS API link of the web-APIs service (webapi) from the Services dashboard page, and then remove https:// from the start of the URL.
  - <web-APIs HTTP port> — the HTTP port of the web-APIs service. The port number is stored in a V3IO_WEBAPI_SERVICE_PORT environment variable in the platform's web shells and Jupyter notebooks and terminals.
- -c|--container — the name of the parent data container of the TSDB instance (table). For example, "projects" or "mycontainer".
- -t|--table-path — the path to the TSDB instance (table) within the configured container. For example, "my_metrics_tsdb" or "tsdbs/metrics". (Any component of the path that doesn't already exist will be created automatically.) The TSDB table path should not be set in a CLI configuration file.
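For example, the following command (an illustrative sketch with placeholder server, container, table, and access-key values) sets all of the mandatory flags explicitly:
tsdbctl info -s webapi.default-tenant.app.mycluster.iguazio.com -c projects -t mytsdb -k MYACCESSKEY
When running the native CLI from a platform web shell or Jupyter environment, you can take the server endpoint and access key from the predefined environment variables described above instead of hard-coding them:
tsdbctl info -s "$V3IO_WEBAPI_SERVICE_HOST:$V3IO_WEBAPI_SERVICE_PORT" -c projects -t mytsdb -k "$V3IO_ACCESS_KEY"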
Some commands require additional configurations, as detailed in the command-specific documentation.
Using a Configuration File
Some of the CLI configurations can be defined in a YAML file instead of setting the equivalent flags in the command line.
By default, the CLI checks for a v3io-tsdb-config.yaml configuration file in the current directory.
You can use the example file that's shown at the end of this section as a template for creating your own configuration file.
To simplify the examples in this tutorial and focus on the unique options of each CLI command, the examples assume that you have created a configuration file that sets the following configurations, or that you're using the tsdbctl alias, which preconfigures the equivalent flags for the running user:
- webApiEndpoint — the equivalent of the CLI -s|--server flag.
- container — the equivalent of the CLI -c|--container flag.
- accessKey — the equivalent of the CLI -k|--access-key flag.
  Alternatively, you can set the following configurations for username-password authentication:
  - username — the equivalent of the CLI -u|--username flag.
  - password — the equivalent of the CLI -p|--password flag.
Following is an example configuration file.
Replace the values of the webApiEndpoint and accessKey configurations in the example with your environment's web-APIs endpoint and a valid access key.
# File: v3io-tsdb-config.yaml
# Description: V3IO TSDB Configuration File
# Endpoint of an Iguazio AI Platform web APIs (web-gateway) service,
# consisting of an IP address or resolvable host domain name
webApiEndpoint: "webapi.default-tenant.app.mycluster.iguazio.com"
# Name of an Iguazio AI Platform container for storing the TSDB table
container: "projects"
# Authentication credentials for the web-APIs service
accessKey: "MYACCESSKEY"
# OR
#username: "MYUSER"
#password: "MYPASSWORD"
For example, the following CLI command for getting information about a "mytsdb" TSDB in the "projects" container —
tsdbctl info -c projects -t mytsdb -n -m -s webapi.default-tenant.app.mycluster.iguazio.com -k MYACCESSKEY
— is equivalent to the following command when the current directory has the aforementioned example v3io-tsdb-config.yaml configuration file:
tsdbctl info -t mytsdb -n -m
As indicated above, you can override any of the file configurations in the command line.
For example, you can add -c metrics to the previous command to get information for a "mytsdb" TSDB in a "metrics" container instead of the "projects" container that's set in the configuration file:
tsdbctl info -t mytsdb -n -m -c metrics
Creating a New TSDB
Use the CLI's create command to create a new TSDB.
Use the command's mandatory -r|--ingestion-rate flag to set the TSDB's metric-samples ingestion rate. The rate is specified as a string of the format "[0-9]+/[smh]" (where 's' = seconds, 'm' = minutes, and 'h' = hours); for example, "1/s" (1 sample per second), "20/m" (20 samples per minute), or "50/h" (50 samples per hour).
It's recommended that you set the rate to the average expected ingestion rate for a unique label set (for example, for a single server in a data center), and that the ingestion rates for a given TSDB table don't vary significantly; when there's a big difference in the ingestion rates (for example, x10), consider using separate TSDB tables.
Examples
The following command creates a new "tsdb_example" TSDB in the configured "projects" container with an ingestion rate of one sample per second:
tsdbctl create -t tsdb_example -r 1/s
Defining TSDB Aggregates
You can optionally use the create command's -a|--aggregates flag to configure pre-aggregates for the TSDB, as a comma-separated list of supported aggregation functions; for example, "avg" (average sample values) or "max,min,last" (maximum, minimum, and latest sample values).
When configuring the TSDB's pre-aggregates, you should also use the -i|--aggregation-granularity flag to set the aggregation granularity — the time interval for which the pre-aggregates are calculated. The granularity is specified as a string of the format "[0-9]+[mhd]" (where 'm' = minutes, 'h' = hours, and 'd' = days); for example, "90m" (90 minutes = 1.5 hours) or "2h" (2 hours).
The default aggregation granularity is one hour (1h).
- You can also perform aggregation queries for TSDB tables without pre-aggregates, but when configured correctly, pre-aggregation queries are more efficient. To ensure that pre-aggregation is used to process aggregation queries and improve performance —
  - When creating the TSDB table, set its aggregation granularity (-i|--aggregation-granularity) to an interval that's significantly larger than the table's metric-samples ingestion rate (-r|--ingestion-rate).
  - When querying the table, set the aggregation interval (-i|--aggregation-interval) to a sufficient multiplier of the table's aggregation granularity. For example, if the table's ingestion rate is 1 sample per second ("1/s") and you want to run hourly queries (i.e., use a query aggregation interval of "1h"), you might set the table's pre-aggregation granularity to 20 minutes ("20m").
- When using the aggregates flag, the CLI automatically adds count to the TSDB's aggregators. However, it's recommended to set this aggregator explicitly if you need it.
- Some aggregates are calculated from other aggregates. For example, the avg aggregate is calculated from the count and sum aggregates.
The following command creates a new "tsdb_example_aggr" TSDB with an ingestion rate of one sample per second in a count
, avg
, min
, and max
aggregators and an aggregation interval of 1 hour:
tsdbctl create -t tsdb_example_aggr -r 1/s -a "count,avg,min,max" -i 1h
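For example (an illustrative follow-up sketch; the query flags are detailed in Querying a TSDB later in this tutorial), the following query uses a two-hour aggregation interval, which is a multiplier of the table's one-hour aggregation granularity and can therefore be served from the table's pre-aggregates:
tsdbctl query -t tsdb_example_aggr -a "count,avg" -i 2h -l 1d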
Supported Aggregation Functions
Version 0.13 of the CLI supports the following aggregation functions, which are all applied to the samples of each metric item according to the TSDB's aggregation granularity (interval):
- avg — the average of the sample values.
- count — the number of ingested samples.
- last — the value of the last sample (i.e., the sample with the latest time).
- max — the maximal sample value.
- min — the minimal sample value.
- rate — the change rate of the sample values, calculated as (<last sample value of the current interval> - <last sample value of the previous interval>) / <aggregation granularity>.
- stddev — the standard deviation of the sample values.
- stdvar — the standard variance of the sample values.
- sum — the sum of the sample values.
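For example (an illustrative calculation that isn't taken from this tutorial's sample data): with a one-hour aggregation granularity, if the last sample value of the previous interval is 10 and the last sample value of the current interval is 40, the rate for the current interval is (40 - 10) / 1h = 30 per hour.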
Adding Samples to a TSDB
Use the CLI's add command to add (ingest) metric samples into a TSDB.
The ingestion input can be provided in one of two ways:
- Using command-line arguments and flags —
  - metric argument [Required] — a string containing the name of the ingested metric. For example, "cpu".
  - labels argument [Optional] — a string containing a comma-separated list of <label name>=<label value> key-value pairs. The label values must be of type string and cannot contain commas. For example, "os=mac,host=A".
  - -d|--values flag [Required] — a string containing a comma-separated list of sample data values. The values can be of type integer or float; note that all values for a given metric must be of the same type. For example, "67.0,90.2,70.5".
  - -m|--times flag [Optional for a single sample; Required for multiple samples] — a string containing a comma-separated list of sample generation times ("sample times") for the provided sample values. A sample time can be specified as a Unix timestamp in milliseconds or as a relative time of the format "now" or "now-[0-9]+[mhd]" (where 'm' = minutes, 'h' = hours, and 'd' = days). For example, "1537971020000,now-2d,now-95m,now". The default sample time when ingesting a single sample is the current time (i.e., the TSDB ingestion time) — now.
    Note
    - An ingested sample time cannot be earlier than or equal to the latest previously ingested sample time for the same metric item. This applies also to samples ingested in the same command, so specify the ingestion times in ascending chronological order. For example, an add command with -d "1,2" -m "now,now-1m" will ingest only the first sample (1) and not the second sample (2), because the time of the second sample (now-1m) is earlier than that of the first sample (now). To ingest both samples, change the order in the command to -d "2,1" -m "now-1m,now".
    - When ingesting samples at scale, use a CSV file or a Nuclio function rather than providing the ingestion input in the command line.
- Using the -f|--file flag to provide the path to a CSV metric-samples input file that contains one or more items (rows) of the following format:
  <metric name>,[<labels>],<sample data value>[,<sample time>]
  The CSV columns (attributes) are the equivalent of the arguments and flags described for the command-line arguments method in the previous bullet, and their values are subject to the same guidelines. Note that all rows in the CSV file must have the same number of columns. For ingestion of multiple metrics, specify the ingestion times.
Examples
The following commands ingest three samples and a label for a "temperature" metric, and multiple samples and labels for a "cpu" metric, into the tsdb_example TSDB:
tsdbctl add temperature -t tsdb_example "degrees=Celsius" -d "32,29.5,25.3" -m "now-2d,now-1d,now"
tsdbctl add cpu -t tsdb_example -d "90,82.5" -m "now-2d,now-1d"
tsdbctl add cpu "host=A,os=linux" -t tsdb_example -d "23.87,47.3" -m "now-18h,now-12h"
tsdbctl add cpu "host=A" -t tsdb_example -d "50.2" -m "now-6h"
tsdbctl add cpu "os=linux" -t tsdb_example -d "88.8,91" -m "now-1h,now-30m"
tsdbctl add cpu "host=A,os=linux,arch=amd64" -t tsdb_example -d "70.2,55" -m "now-15m,now"
The same ingestion can also be done by providing the samples input in a CSV file, as demonstrated in the following command:
tsdbctl add -t tsdb_example -f ~/metric_samples.csv
The command uses this example metric_samples.csv file:
temperature,degrees=Celsius,32,now-2d
temperature,degrees=Celsius,29.5,now-1d
temperature,degrees=Celsius,25.3,now
cpu,,90,now-2d
cpu,,82.5,now-1d
cpu,"host=A,os=linux",23.87,now-18h
cpu,"host=A,os=linux",47.3,now-12h
cpu,host=A,50.2,now-6h
cpu,os=linux,88.8,now-1h
cpu,os=linux,91,now-30m
cpu,"host=A,os=linux,arch=amd64",70.2,now-15m
cpu,"host=A,os=linux,arch=amd64",55,now
The following command demonstrates ingestion of samples for an "m1" metric into the tsdb_example_aggr TSDB that was created with pre-aggregates in a previous example:
tsdbctl add -t tsdb_example_aggr -f tsdb_example_aggr.csv
The command uses this example tsdb_example_aggr.csv file:
m1,"os=darwin,host=A",1,1514802220000
m1,"os=darwin,host=A",2,1514812086000
m1,"os=darwin,host=A",3,1514877315000
m1,"os=linux,host=A",1,1514797500000
m1,"os=linux,host=A",2,1514799605000
m1,"os=linux,host=A",3,1514804625000
m1,"os=linux,host=A",4,1514818759000
m1,"os=linux,host=A",5,1514897354000
m1,"os=linux,host=A",6,1514897858000
m1,"os=windows,host=A",1,1514803048000
m1,"os=windows,host=A",2,1514808826000
m1,"os=windows,host=A",3,1514812736000
m1,"os=windows,host=A",4,1514881791000
m1,"os=darwin,host=B",1,1514802842000
m1,"os=darwin,host=B",2,1514818576000
m1,"os=darwin,host=B",3,1514891100000
m1,"os=linux,host=B",1,1514798275000
m1,"os=linux,host=B",2,1514816100000
m1,"os=linux,host=B",3,1514895734000
m1,"os=linux,host=B",4,1514900599000
m1,"os=windows,host=B",1,1514799605000
m1,"os=windows,host=B",2,1514810326000
m1,"os=windows,host=B",3,1514881791000
m1,"os=windows,host=B",4,1514900597000
Getting TSDB Configuration and Metrics Information
Use the CLI's info command to get information about a TSDB, including its configuration (schema).
You can optionally add the -n and -m flags to also return metric names and metrics information for the TSDB.
The following command returns the full schema and metrics information for the tsdb_example_aggr TSDB:
tsdbctl info -t tsdb_example_aggr -m -n
Querying a TSDB
Use the CLI's query command to query a TSDB. You can set the command's metric argument to the name of a specific metric to query (for example, "noise"), or use the -f (filter) flag to set a query-filter expression that identifies the metric name by using the __name__ attribute (for example, "(__name__=='cpu1') OR (__name__=='cpu2')" or "starts(__name__,'cpu')").
To reference labels in the query filter, just use the label name as the attribute name (for example, "os=='linux' AND arch=='amd64'").
- Currently, only labels of type string are supported; see the Software Specifications and Restrictions. Therefore, ensure that you embed label attribute values in your filter expression within quotation marks even when the values represent a number (for example, "node == '1'"), and don't apply arithmetic operators to such attributes (unless you want to perform a lexicographic string comparison).
- Queries that set the metric argument use range scan and are therefore faster.
- In the current release, the query command doesn't support cross-series aggregation (-a|--aggregates with *_all aggregation functions) or the -w|--aggregation-window and --groupBy flags.
To query all metrics, set the query filter (-f) to "1==1"; to query the full TSDB content, also set the query start time (-b) to 0.
You can optionally use the -b (begin) and -e (end) flags to set the query's start (minimum) and end (maximum) sample times. The times can be specified as a Unix timestamp in milliseconds, as a date-time string of the form 2022-01-01T00:00:00Z (as demonstrated in the examples below), or as a relative time of the format "now" or "now-[0-9]+[mhd]" (where 'm' = minutes, 'h' = hours, and 'd' = days); the start time can also be set to zero (0) for the earliest sample time in the TSDB.
Alternatively, you can use the -l (last) flag to set the query time range to the last <n> minutes, hours, or days ("[0-9]+[mdh]").
The default end time is the current time (now
) and the default start time is one hour earlier than the end time.
Therefore, the default time range when neither flag is set is the last hour.
Note that the time range applies to the samples' generation times ("the sample times") and not to the times at which they were ingested into the TSDB.
By default, the command returns the query results in plain-text format ("text"), but you can use the -o (output-format) flag to change the output format to "csv" (CSV) or "json" (JSON).
Examples
The following query returns all metric samples contained in the tsdb_example TSDB:
tsdbctl query -t tsdb_example -f "1==1" -b 0
The following queries both return the tsdb_example TSDB's cpu metric samples from the last hour for metric items with a host label of 'A' and an os label of "linux":
tsdbctl query cpu -t tsdb_example -f "host=='A' AND os=='linux'" -b now-1h
tsdbctl query cpu -t tsdb_example -f "host=='A' AND os=='linux'" -l 1h
The following query returns, in CSV format, all tsdb_example TSDB metric samples that have a degrees label and whose sample times fall within the year 2022:
tsdbctl query -t tsdb_example -f "exists(degrees)" -b 2022-01-01T00:00:00Z -e 2022-12-31T23:59:59Z -o csv
Aggregation Queries
You can use the query command's optional -a|--aggregates flag to apply one or more aggregation functions to the query results, specified as a comma-separated list; for example, "sum,stddev,stdvar".
See Supported Aggregation Functions for details.
You can use the -i|--aggregation-interval flag to set the aggregation interval — the time interval to which the aggregation functions are applied. The interval is specified as a string of the format "[0-9]+[mhd]" (where 'm' = minutes, 'h' = hours, and 'd' = days); for example, "3h" (3 hours).
The default aggregation interval is the difference between the query's end and start times; for example, for the default query start and end times of now-1h and now, the default aggregation interval is one hour (1h).
You can submit aggregation queries also for a TSDB without pre-aggregates.
However, when the TSDB has pre-aggregates that match the query aggregators and the query's aggregation interval is a sufficient multiplier of the TSDB's aggregation granularity, query processing is sped up by using the TSDB's pre-aggregates (the aggregation data that's stored in the TSDB's aggregation attributes) instead of performing a new calculation.
See also the aggregation notes in Defining TSDB Aggregates.
The following query returns, for each tsdb_example TSDB metric item whose metric name begins with "cpu", the minimal and maximal sample values and the standard deviation over two-hour aggregation intervals for samples that were generated in the last two days:
tsdbctl query -t tsdb_example -f "starts(__name__,'cpu')" -a "min,max,stddev" -i 2h -l 2d
The following queries return, for each m1 metric item in the tsdb_example_aggr TSDB, the samples count and average sample value for samples generated since 1 January 2018. For example, the following query uses a one-day aggregation interval:
tsdbctl query m1 -t tsdb_example_aggr -a "count,avg" -i 1d -b 2018-01-01T00:00:00Z
Name: m1 Labels: host=B,os=windows,Aggregate=count
2018-01-01T00:00:00Z v=2.00
2018-01-02T00:00:00Z v=2.00
Name: m1 Labels: host=B,os=windows,Aggregate=avg
2018-01-01T00:00:00Z v=1.50
2018-01-02T00:00:00Z v=3.50
Name: m1 Labels: host=A,os=linux,Aggregate=count
2018-01-01T00:00:00Z v=4.00
2018-01-02T00:00:00Z v=2.00
Name: m1 Labels: host=A,os=linux,Aggregate=avg
2018-01-01T00:00:00Z v=2.50
2018-01-02T00:00:00Z v=5.50
Name: m1 Labels: host=A,os=darwin,Aggregate=count
2018-01-01T00:00:00Z v=2.00
2018-01-02T00:00:00Z v=1.00
Name: m1 Labels: host=A,os=darwin,Aggregate=avg
2018-01-01T00:00:00Z v=1.50
2018-01-02T00:00:00Z v=3.00
Name: m1 Labels: host=A,os=windows,Aggregate=count
2018-01-01T00:00:00Z v=3.00
2018-01-02T00:00:00Z v=1.00
Name: m1 Labels: host=A,os=windows,Aggregate=avg
2018-01-01T00:00:00Z v=2.00
2018-01-02T00:00:00Z v=4.00
Name: m1 Labels: host=B,os=linux,Aggregate=count
2018-01-01T00:00:00Z v=2.00
2018-01-02T00:00:00Z v=2.00
Name: m1 Labels: host=B,os=linux,Aggregate=avg
2018-01-01T00:00:00Z v=1.50
2018-01-02T00:00:00Z v=3.50
Name: m1 Labels: host=B,os=darwin,Aggregate=count
2018-01-01T00:00:00Z v=2.00
2018-01-02T00:00:00Z v=1.00
Name: m1 Labels: host=B,os=darwin,Aggregate=avg
2018-01-01T00:00:00Z v=1.50
2018-01-02T00:00:00Z v=3.00
Name: m1 Labels: host=B,os=windows,Aggregate=count
2018-01-01T08:00:00Z v=1.00
2018-01-01T12:00:00Z v=1.00
2018-01-02T08:00:00Z v=1.00
2018-01-02T12:00:00Z v=1.00
Name: m1 Labels: host=B,os=windows,Aggregate=avg
2018-01-01T08:00:00Z v=1.00
2018-01-01T12:00:00Z v=2.00
2018-01-02T08:00:00Z v=3.00
2018-01-02T12:00:00Z v=4.00
Name: m1 Labels: host=A,os=linux,Aggregate=count
2018-01-01T08:00:00Z v=2.00
2018-01-01T10:00:00Z v=1.00
2018-01-01T14:00:00Z v=1.00
2018-01-02T12:00:00Z v=2.00
Name: m1 Labels: host=A,os=linux,Aggregate=avg
2018-01-01T08:00:00Z v=1.50
2018-01-01T10:00:00Z v=3.00
2018-01-01T14:00:00Z v=4.00
2018-01-02T12:00:00Z v=5.50
Name: m1 Labels: host=A,os=darwin,Aggregate=count
2018-01-01T10:00:00Z v=1.00
2018-01-01T12:00:00Z v=1.00
2018-01-02T06:00:00Z v=1.00
Name: m1 Labels: host=A,os=darwin,Aggregate=avg
2018-01-01T10:00:00Z v=1.00
2018-01-01T12:00:00Z v=2.00
2018-01-02T06:00:00Z v=3.00
Name: m1 Labels: host=A,os=windows,Aggregate=count
2018-01-01T10:00:00Z v=1.00
2018-01-01T12:00:00Z v=2.00
2018-01-02T08:00:00Z v=1.00
Name: m1 Labels: host=A,os=windows,Aggregate=avg
2018-01-01T10:00:00Z v=1.00
2018-01-01T12:00:00Z v=2.50
2018-01-02T08:00:00Z v=4.00
Name: m1 Labels: host=B,os=linux,Aggregate=count
2018-01-01T08:00:00Z v=1.00
2018-01-01T14:00:00Z v=1.00
2018-01-02T12:00:00Z v=2.00
Name: m1 Labels: host=B,os=linux,Aggregate=avg
2018-01-01T08:00:00Z v=1.00
2018-01-01T14:00:00Z v=2.00
2018-01-02T12:00:00Z v=3.50
Name: m1 Labels: host=B,os=darwin,Aggregate=count
2018-01-01T10:00:00Z v=1.00
2018-01-01T14:00:00Z v=1.00
2018-01-02T10:00:00Z v=1.00
Name: m1 Labels: host=B,os=darwin,Aggregate=avg
2018-01-01T10:00:00Z v=1.00
2018-01-01T14:00:00Z v=2.00
2018-01-02T10:00:00Z v=3.00
Name: m1 Labels: host=B,os=windows,Aggregate=count
2018-01-01T09:00:00Z v=1.00
2018-01-01T12:00:00Z v=1.00
2018-01-02T08:00:00Z v=1.00
2018-01-02T13:00:00Z v=1.00
Name: m1 Labels: host=B,os=windows,Aggregate=avg
2018-01-01T09:00:00Z v=1.00
2018-01-01T12:00:00Z v=2.00
2018-01-02T08:00:00Z v=3.00
2018-01-02T13:00:00Z v=4.00
Name: m1 Labels: host=A,os=linux,Aggregate=count
2018-01-01T09:00:00Z v=2.00
2018-01-01T11:00:00Z v=1.00
2018-01-01T14:00:00Z v=1.00
2018-01-02T12:00:00Z v=2.00
Name: m1 Labels: host=A,os=linux,Aggregate=avg
2018-01-01T09:00:00Z v=1.50
2018-01-01T11:00:00Z v=3.00
2018-01-01T14:00:00Z v=4.00
2018-01-02T12:00:00Z v=5.50
Name: m1 Labels: host=A,os=darwin,Aggregate=count
2018-01-01T10:00:00Z v=1.00
2018-01-01T13:00:00Z v=1.00
2018-01-02T07:00:00Z v=1.00
Name: m1 Labels: host=A,os=darwin,Aggregate=avg
2018-01-01T10:00:00Z v=1.00
2018-01-01T13:00:00Z v=2.00
2018-01-02T07:00:00Z v=3.00
Name: m1 Labels: host=A,os=windows,Aggregate=count
2018-01-01T10:00:00Z v=1.00
2018-01-01T12:00:00Z v=1.00
2018-01-01T13:00:00Z v=1.00
2018-01-02T08:00:00Z v=1.00
Name: m1 Labels: host=A,os=windows,Aggregate=avg
2018-01-01T10:00:00Z v=1.00
2018-01-01T12:00:00Z v=2.00
2018-01-01T13:00:00Z v=3.00
2018-01-02T08:00:00Z v=4.00
Name: m1 Labels: host=B,os=linux,Aggregate=count
2018-01-01T09:00:00Z v=1.00
2018-01-01T14:00:00Z v=1.00
2018-01-02T12:00:00Z v=1.00
2018-01-02T13:00:00Z v=1.00
Name: m1 Labels: host=B,os=linux,Aggregate=avg
2018-01-01T09:00:00Z v=1.00
2018-01-01T14:00:00Z v=2.00
2018-01-02T12:00:00Z v=3.00
2018-01-02T13:00:00Z v=4.00
Name: m1 Labels: host=B,os=darwin,Aggregate=count
2018-01-01T10:00:00Z v=1.00
2018-01-01T14:00:00Z v=1.00
2018-01-02T11:00:00Z v=1.00
Name: m1 Labels: host=B,os=darwin,Aggregate=avg
2018-01-01T10:00:00Z v=1.00
2018-01-01T14:00:00Z v=2.00
2018-01-02T11:00:00Z v=3.00
As explained above, you can also submit aggregation queries for TSDBs without pre-aggregates.
In such cases, the aggregations are calculated when the query is processed.
For example, the following query returns a three-day average for the tsdb_example TSDB's temperature metric samples, starting from the earliest sample time in the TSDB:
tsdbctl query temperature -t tsdb_example -a avg -i 3d -b 0
Deleting a TSDB
Use the CLI's delete command to delete a TSDB or specific TSDB partitions.
Use the -a flag to delete the TSDB in its entirety.
You can optionally use the -b (begin) and -e (end) flags to set the start (minimum) and end (maximum) sample times of the data to delete. The times can be specified as a Unix timestamp in milliseconds or as a relative time of the format "now" or "now-[0-9]+[mhd]" (where 'm' = minutes, 'h' = hours, and 'd' = days); the start time can also be set to zero (0) for the earliest sample time in the TSDB.
The default end (maximum) time is the current time (now
) and the default start (minimum) time is one hour earlier than the end time.
To avoid inadvertent deletes, by default the command prompts you to confirm the delete operation.
You can use the --force flag to perform the delete without a confirmation prompt.
Examples
The following command completely deletes the tsdb_example_aggr TSDB (subject to user confirmation in the command line):
tsdbctl delete -t tsdb_example_aggr -a
You can add the --force flag to delete the TSDB without a confirmation prompt:
tsdbctl delete -t tsdb_example_aggr -a --force
The following command deletes all tsdb_example TSDB partitions (and contained metric items) between the earliest sample time in the TSDB and Unix time 1569887999000 (2019-09-30T23:59:59Z):
tsdbctl delete -t tsdb_example -b 0 -e 1569887999000
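Similarly, the following command (an illustrative sketch) uses a relative end time to delete the tsdb_example partitions between the earliest sample time in the TSDB and seven days ago, keeping the most recent week of data:
tsdbctl delete -t tsdb_example -b 0 -e now-7d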