A key is necessary to send logs over the logging pipeline. For more information about the purpose of a pipeline key, see Log Security.
Warning:
IONOS adheres to a Share Once policy for key generation; the key is displayed only once, and there is no way to retrieve it later if it is lost. We therefore recommend that you store the generated key securely.
The previous key is instantly revoked when you generate a new key for a specific pipeline.
To get a new key for a pipeline, use the following request. Remember to replace the {pipelineID} with a valid ID of the pipeline whose key you want to access.
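As an illustrative sketch, the request can be built in Python as follows. The `/pipelines/{id}/key` route, the response field name, and the token value are assumptions for illustration; the Berlin base URL comes from the regional endpoints list in this guide.

```python
import json
from urllib import request

API_BASE = "https://logging.de-txl.ionos.com"  # Berlin endpoint from this guide
PIPELINE_ID = "{pipelineID}"  # replace with a valid pipeline ID
TOKEN = "YOUR_API_TOKEN"

# NOTE: the "/pipelines/{id}/key" path is an assumption for illustration;
# consult the API reference for the authoritative route.
req = request.Request(
    f"{API_BASE}/pipelines/{PIPELINE_ID}/key",
    method="POST",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
)

def fetch_new_key() -> str:
    # Generating a new key instantly revokes the previous one.
    with request.urlopen(req) as resp:
        return json.load(resp)["key"]  # use this value as Shared_Key in Fluent Bit
```

Store the returned key immediately; per the Share Once policy it cannot be retrieved again.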
The response contains the key necessary to configure the Shared_Key in Fluent Bit.
It is necessary to create an instance of the logging pipeline before sending log data to the Logging Service platform. For more information, see Log Pipelines.
For more information about the complete list of available sources, see Log Sources.
This topic contains the following sections:
The following request creates an instance of a logging pipeline with two log streams: docker and kubernetes.
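A sketch of such a request in Python. The payload field names (properties, logs, source, tag, protocol) are assumptions based on the field names mentioned elsewhere in this guide; check the API reference for the authoritative schema.

```python
import json
from urllib import request

# Illustrative payload shape -- field names are assumptions, not the
# authoritative schema.
payload = {
    "properties": {
        "name": "demo-pipeline",
        "logs": [
            {"source": "docker", "tag": "docker-demo", "protocol": "http"},
            {"source": "kubernetes", "tag": "k8s-demo", "protocol": "http"},
        ],
    }
}

req = request.Request(
    "https://logging.de-txl.ionos.com/pipelines",  # Berlin endpoint
    data=json.dumps(payload).encode(),
    method="POST",
    headers={
        "Authorization": "Bearer YOUR_API_TOKEN",
        "Content-Type": "application/json",
    },
)
# request.urlopen(req) would submit the request; the new pipeline then
# starts in the PROVISIONING state.
```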
Warning:
IONOS supports unique email addresses across all contracts in each region.
The following is a sample response. The values returned by each response differ based on the request.
You may notice that the pipeline's status is temporarily set to the PROVISIONING state while provisioning is in progress. A GET request retrieves information about the pipeline and its status. For more information, see Retrieve logging pipeline information.
Log sources like Kubernetes, Docker, and Linux Systemd collect and offer relevant labels. You can use these labels to analyze reports and query the dashboard. However, if you want to label additional fields from the log sources, you can define custom labels as follows when you create a pipeline:
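For example, a log-stream entry might carry a list of custom labels. The labels field name below is a hypothetical placeholder used only to illustrate the idea; confirm the exact field in the API reference.

```python
# "labels" is a hypothetical field name used for illustration; it marks
# extra fields from the log source that should be indexed as labels.
log_stream = {
    "source": "kubernetes",
    "tag": "k8s-demo",
    "protocol": "http",
    "labels": ["namespace", "app"],
}
```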
The Logging Service offers regional APIs that enable programmatic interaction with the platform. These APIs serve various purposes: task automation, system integration, and platform functionality extension. Additionally, the APIs allow you to filter logs based on different criteria, such as the date range, log level, and source.
A sub-user is a user who has access to the Logging Service but is not an administrator or an owner. Access control restrictions prevent sub-users from viewing or modifying pipelines that belong to other sub-user accounts; only the primary administrator retains full cross-pipeline privileges. Ensure that sub-user pipeline ownership and access permissions align with your organizational needs.
If a sub-user account creates a pipeline, access is restricted only to that sub-user and the primary administrator. Other sub-users cannot access or perform CRUD operations on the respective pipeline. For example, if sub-user A creates Pipeline 1, only sub-user A and the primary administrator account can view, edit, delete, or manage Pipeline 1. No other sub-user accounts will have access to it.
A regional endpoint is necessary to interact with the Logging Service REST API endpoints. Currently, IONOS supports only the following regions for the Logging Service:
Berlin: https://logging.de-txl.ionos.com/pipelines
Frankfurt: https://logging.de-fra.ionos.com/pipelines
London: https://logging.gb-lhr.ionos.com/pipelines
Paris: https://logging.fr-par.ionos.com/pipelines
Logroño: https://logging.es-vit.ionos.com/pipelines
Note: If you need to configure firewall rules to restrict traffic sent to the Logging Service endpoints, we recommend allowlisting the authorized IP addresses associated with each endpoint so that traffic is routed only through them. For more information about the authorized IP addresses, see .
To interact with the Logging Service REST API endpoints, the header must contain the following values:
Here are some of the most common API How-Tos for the Logging Service:
We recommend you refer to the following after creating an instance via the API:
Each log stream in your pipeline is initially assigned a default data retention policy of 30 days. The logs for each log stream are retained for the specified number of days. However, you can define a custom retention policy for each log stream. The available options for retention periods include 7, 14, and 30 days.
Note: You can alternatively use the PATCH request to update the retention policy of an existing pipeline.
The following is an example:
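For instance, a log-stream entry with a 7-day retention period could look like this. The destinations wrapper and type value are assumptions for illustration; retentionInDays is the field named in this guide.

```python
# Custom retention: retentionInDays accepts 7, 14, or 30 days.
log_entry = {
    "source": "kubernetes",
    "tag": "k8s-demo",
    "protocol": "http",
    "destinations": [{"type": "loki", "retentionInDays": 7}],  # shape is an assumption
}
```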
To set the retention period for each log stream to unlimited, use the following request:
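A sketch of the same stream entry with unlimited retention, using the same assumed payload shape:

```python
# retentionInDays set to 0 means unlimited retention for this stream.
unlimited_entry = {
    "source": "docker",
    "tag": "docker-demo",
    "protocol": "http",
    "destinations": [{"type": "loki", "retentionInDays": 0}],  # shape is an assumption
}
```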
Note: The retention period is set to unlimited when the retentionInDays value is set to 0.
You must provide the following information while requesting access:
Contract number
Pipeline ID
Log stream tag
The date range for which you need access to the data
S3 bucket information:
Name
Region
S3 endpoint
Note: Ensure that the S3 Canonical User ID you receive from IONOS Cloud Support is assigned to the designated bucket. Additionally, confirm that write access has been granted to this specific bucket.
Header | Required | Type | Description |
---|---|---|---|
In Grafana, you can only access the data for a maximum of 30 days. To access the archived data (logs with unlimited retention), contact IONOS Cloud Support.
After receiving the required information, IONOS transfers the archived data into the designated S3 bucket. Following the successful upload, the data becomes accessible from within the S3 bucket. To add the grantee to your bucket, see the .
Note: Storing logs for an indefinite period increases storage costs. Our pricing for long-term log storage aligns with our standard pricing.
You can modify your logging pipeline by sending a PATCH request with a specific pipeline ID.
Note: To modify a logging pipeline, you can use the same payload that you use in the POST request for creating a logging pipeline. For more information, see Set Up a Logging Pipeline Instance.
The following is a sample request. Remember to replace the {pipelineID} with a valid ID of the respective logging pipeline.
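A sketch of the PATCH request in Python. The payload mirrors the assumed create (POST) body; field names are illustrative, and the Berlin base URL comes from the regional endpoints list.

```python
import json
from urllib import request

PIPELINE_ID = "{pipelineID}"  # replace with the respective pipeline ID
# Same payload shape as the create request; field names are illustrative.
payload = {
    "properties": {
        "name": "demo-pipeline-renamed",
        "logs": [{"source": "docker", "tag": "docker-demo", "protocol": "http"}],
    }
}

req = request.Request(
    f"https://logging.de-txl.ionos.com/pipelines/{PIPELINE_ID}",  # Berlin endpoint
    data=json.dumps(payload).encode(),
    method="PATCH",
    headers={
        "Authorization": "Bearer YOUR_API_TOKEN",
        "Content-Type": "application/json",
    },
)
# request.urlopen(req) would apply the update.
```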
Authorization | yes | string | A Bearer $TOKEN tied to your account. |
Content-Type | yes | string | Set this to application/json. |
Create an instance of a logging pipeline.
Obtain a new key for a logging pipeline.
Update an existing logging pipeline.
Retrieve information about a specific logging pipeline.
Delete a specific logging pipeline.
Customize the retention policy for each log stream.
Use the pre-configured IONOS Telemetry API datasource to query metrics from the IONOS Cloud Telemetry API.
To retrieve your logging pipeline information, you need the ID of the respective pipeline.
The following is a sample request. Remember to replace the {pipelineID} with a valid ID of the specific pipeline whose information you want to access.
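A sketch of the GET request in Python, using the Berlin endpoint from the regional endpoints list:

```python
from urllib import request

PIPELINE_ID = "{pipelineID}"  # replace with the pipeline whose details you want
req = request.Request(
    f"https://logging.de-txl.ionos.com/pipelines/{PIPELINE_ID}",  # Berlin endpoint
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
)
# request.urlopen(req) returns the pipeline description, including its status.
```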
If your request is successful, you will receive the relevant information on a Ready pipeline.
To access logs using Fluent Bit, you need the following information:
Note: A key is necessary to send logs through the newly formed logging pipeline. For more information about obtaining a key, see Obtain a New Key.
The Telemetry API allows you to interact with the IONOS Cloud Telemetry service and is compatible with the Prometheus API specification.
The Telemetry API allows retrieval of instance metrics; it is a read-only API and does not support any write operations. Although the Prometheus specification defines many more API resources and operations, the Telemetry API currently supports only the following GET operations:
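Because the API is Prometheus-compatible, a standard instant query can be sketched as follows. The base URL below is a placeholder assumption, not a real endpoint; the `/api/v1/query` route is the standard Prometheus query path.

```python
from urllib import parse, request

TELEMETRY_BASE = "https://telemetry.example.ionos.com"  # placeholder, not a real endpoint
params = parse.urlencode({"query": "up"})  # any PromQL expression

req = request.Request(
    f"{TELEMETRY_BASE}/api/v1/query?{params}",  # standard Prometheus query route
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
)
# request.urlopen(req) would return the query result in Prometheus JSON format.
```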
The managed Grafana in the Logging Service comes with a pre-configured datasource for the Telemetry API called IONOS Telemetry. You can use this datasource to query metrics from the IONOS Cloud Telemetry API.
The Telemetry API uses the same authentication as the IONOS Cloud API. You can use the same API token to authenticate with the Telemetry API. This means you need to update the IONOS Telemetry datasource with your API token:
Follow the instructions in Create new tokens to generate a token.
Once the header is configured, select Save & test.
A Bearer $TOKEN is a string that is tied to your account. For information on generating tokens, see Create new tokens.
Field | Usage |
---|---|
tcpAddress | Set the TCP log server address during Fluent Bit configuration. |
httpAddress | Set the HTTP log server address during Fluent Bit configuration. |
tag | Set the tag during Fluent Bit configuration. Remember to use the same tag you defined while creating the pipeline. |
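As a sketch, these fields map onto a Fluent Bit configuration roughly as follows. Every value below is a placeholder: the input path and tag are examples, the host and port come from the pipeline's tcpAddress, and Shared_Key carries the key obtained for the pipeline. The forward output shown here assumes the TCP/forward protocol; adapt it if your stream uses the HTTP address instead.

```ini
[INPUT]
    Name   tail
    Path   /var/log/containers/*.log
    # The tag must match the one defined while creating the pipeline.
    Tag    docker-demo

[OUTPUT]
    Name        forward
    Match       *
    Host        <host part of tcpAddress>
    Port        <port part of tcpAddress>
    Shared_Key  <pipeline key>
    tls         on
```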