You can modify your logging pipeline by sending a PATCH request with a specific pipeline's ID. The PATCH request accepts the same payload you use in the POST request, for example:
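The following is a minimal sketch of such a PATCH call. The pipeline ID, names, and payload values are placeholders, and the exact set of supported fields should be verified against the API reference.

```bash
# Sketch of a PATCH call; pipeline ID and payload values are placeholders.
curl -X PATCH \
  "https://logging.de-txl.ionos.com/pipelines/b849d5dd-cabe-4c13-b04d-a11bcdee721b" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "properties": {
      "name": "my-pipeline-renamed",
      "logs": [
        {
          "source": "docker",
          "tag": "dockersource",
          "protocol": "http",
          "destinations": [ { "type": "loki", "retentionInDays": 14 } ]
        }
      ]
    }
  }'
```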
The Logging Service offers regional APIs that enable programmatic interaction with the platform. You can use these APIs to automate tasks, integrate systems, and extend platform functionality. The APIs also let you filter logs by criteria such as date range, log level, and source.
You must use a regional endpoint to interact with the Logging Service REST API. The available endpoints are:
Berlin: https://logging.de-txl.ionos.com/pipelines
To interact with the Logging Service REST API endpoints, you need the following header values:
Header | Required | Type | Description |
---|---|---|---|
Authorization | yes | string | A Bearer token. A string token that is tied to your account: `Bearer $TOKEN` |
Content-Type | yes | string | Set this to `application/json`. |
To generate a token, follow the instructions in How To Generate a Token.
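As a quick sketch, a token can typically be generated with the IONOS Auth API; the endpoint below is an assumption based on the public Auth API v1, so verify it against the How To Generate a Token guide.

```bash
# Sketch: generate a JWT via the IONOS Auth API (v1 endpoint assumed).
# Replace the credentials with your own IONOS account credentials.
curl -u "user@example.com:password" \
  "https://api.ionos.com/auth/v1/tokens/generate"
# The response contains a "token" field to use as Bearer $TOKEN.
```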
Here are some of the most common API How-Tos for the Logging Service:
Note: Only contract-owner users are authorized to create a Logging Service instance.
Once the instance is created via the API, follow these How-Tos:
To create a logging pipeline instance, you must meet the following prerequisites:
You must have a valid and billable contract at IONOS.
You must have sufficient permissions to manage the Data Center.
You must be the contract owner to be able to interact with the REST API.
The following request creates a logging pipeline with two log streams.
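The sketch below shows the general shape of the call; the pipeline name and tags are placeholders, and the exact field set should be verified against the API reference.

```bash
# Sketch: create a pipeline with two log streams (docker and kubernetes).
# Names and tags are placeholders; each source needs a unique tag.
curl -X POST "https://logging.de-txl.ionos.com/pipelines" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "properties": {
      "name": "demo-pipeline",
      "logs": [
        {
          "source": "docker",
          "tag": "dockersource",
          "protocol": "http",
          "destinations": [ { "type": "loki", "retentionInDays": 30 } ]
        },
        {
          "source": "kubernetes",
          "tag": "k8ssource",
          "protocol": "http",
          "destinations": [ { "type": "loki", "retentionInDays": 30 } ]
        }
      ]
    }
  }'
```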
This request creates a logging pipeline instance with two log streams: Docker and Kubernetes. Note that each source must use a unique tag of your choice.
The following is just a sample of the response; your values will be different.
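Since the exact response schema is defined in the API reference, the abbreviated sketch below only illustrates the general shape; all values and field placements are placeholders.

```json
{
  "id": "b849d5dd-cabe-4c13-b04d-a11bcdee721b",
  "properties": {
    "name": "demo-pipeline",
    "logs": [
      { "source": "docker", "tag": "dockersource", "protocol": "http" },
      { "source": "kubernetes", "tag": "k8ssource", "protocol": "http" }
    ]
  },
  "metadata": {
    "status": "NotReady"
  }
}
```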
The pipeline will remain in the `NotReady` status for a short time until its provisioning is finished. You can retrieve the pipeline information and its status with the following request.
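A minimal sketch of the status check (the pipeline ID is a placeholder):

```bash
# Sketch: fetch a pipeline and inspect its status.
curl -H "Authorization: Bearer $TOKEN" \
  "https://logging.de-txl.ionos.com/pipelines/b849d5dd-cabe-4c13-b04d-a11bcdee721b"
```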
Log sources such as `kubernetes`, `docker`, and `systemd` collect and provide the relevant labels, which can be used in report analysis and dashboard queries. However, you might need to label additional fields from the log sources. You can define these additional labels as follows when you create a pipeline:
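The snippet below sketches how such labels might be declared in a log-stream definition; the `labels` field name is an assumption based on the pipeline schema, so verify it against the API reference.

```json
{
  "source": "kubernetes",
  "tag": "k8ssource",
  "protocol": "http",
  "labels": ["namespace", "app"],
  "destinations": [ { "type": "loki", "retentionInDays": 30 } ]
}
```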
To get your pipeline information, you can use the following request.
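For example (the pipeline ID is a placeholder, and the location of the connection details in the response is an assumption to verify against the API reference):

```bash
# Sketch: fetch the pipeline and extract the connection details
# (tcpAddress, httpAddress, grafanaAddress) described below.
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://logging.de-txl.ionos.com/pipelines/b849d5dd-cabe-4c13-b04d-a11bcdee721b" \
  | jq '.metadata'
```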
On a successful request, you will receive important information on a `Ready` pipeline.
You need the following pieces of information to configure FluentBit and access the logs:
To see the full list of available sources, please refer to our documentation.
Field | Usage |
---|---|
tcpAddress | The address you need to set in the FluentBit configuration for the TCP log server. |
httpAddress | The address you need to set in the FluentBit configuration for the HTTP log server. |
grafanaAddress | The address where you can log in with your IONOS credentials to access the logs. |
tag | The same tag you defined while creating the pipeline, which also needs to be set in the FluentBit configuration. |
You need to obtain one last piece of information to send logs: the key. Follow the instructions below on how to obtain a key.
The purpose of the pipeline key is described in the log security section. In this section, we explain how to obtain a key for a pipeline.
To get a new key for a pipeline, you can use the following request.
In our example case, `b849d5dd-cabe-4c13-b04d-a11bcdee721b` is the pipeline's ID.
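A sketch of the call is shown below; the `/key` sub-path is an assumption based on the API reference, so verify it there.

```bash
# Sketch: generate a new key for the pipeline (the /key sub-path is assumed).
curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  "https://logging.de-txl.ionos.com/pipelines/b849d5dd-cabe-4c13-b04d-a11bcdee721b/key"
```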
The response contains the key, which you can configure as `Shared_Key` in the FluentBit configuration.
Note: We have a Share Once policy when generating a key, meaning there is no other way to retrieve it later. Once the key is generated, please keep it somewhere safe.
Note: Once a new key is generated, all the previous keys will be revoked immediately.
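As an illustration, a FluentBit `forward` output using the pipeline's TCP endpoint and the generated key might look like the sketch below. The host and port are placeholders derived from your pipeline's `tcpAddress`, and the exact output plugin and settings should be taken from the FluentBit section of this documentation.

```ini
# Sketch of a FluentBit forward output; Host and Port are placeholders
# taken from your pipeline's tcpAddress, and Shared_Key is the generated key.
[OUTPUT]
    Name        forward
    Match       dockersource
    Host        tcp-b849d5dd.logging.de-txl.ionos.com
    Port        9000
    Shared_Key  <key-from-the-response>
    tls         on
```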
Coming Soon
Each log stream in your pipeline is initially assigned the default data retention policy of 30 days. However, you can define a custom retention policy for each log stream: the available retention periods are 7, 14, and 30 days. By setting the appropriate retention policy in the log stream configuration, you determine how long the log data is retained for that stream.
You can use the following example:
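The sketch below shows a log-stream entry with a 7-day retention period; field names mirror the create-pipeline payload sketched earlier and should be verified against the API reference.

```json
{
  "source": "docker",
  "tag": "dockersource",
  "protocol": "http",
  "destinations": [ { "type": "loki", "retentionInDays": 7 } ]
}
```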
You can also use a PATCH request to update the retention policy of an existing pipeline.