
Logging Service

The IONOS Logging Service provides a centralized and scalable solution for logging, monitoring, and analyzing application and infrastructure logs. It offers a wide range of features to help you collect and analyze logs from different log sources, monitor performance effectively, and gain insights into your system's behavior.

Bulk Data Export: You can now export data in bulk by submitting a request to IONOS Cloud Support. For more information, see Bulk Data Export.

Product Overview

Quick Links

Quick Start

Frequently Asked Questions (FAQs)

To get answers to the most commonly encountered questions about the Logging Service platform, see Logging Service FAQs.

Logging Service

An overview of the product and its components.

DCD How-Tos

Learn how to use the DCD to create and manage logging pipelines.

API How-Tos

Learn how to use the API to create and manage logging pipelines.

Send Logs to the Platform

Steps to send logs to the logging platform.

Access Logs from the Platform

Steps to access logs from the logging platform.

Overview

Logging Service is a cloud-based service that allows you to store, manage, and analyze logs generated by your applications or systems. It is a scalable and cost-effective solution for organizations that need to manage and analyze large volumes of logs.

Logging Service is also an efficient log management solution that offers several important features and benefits to help organizations improve their log management practices and troubleshoot issues more quickly.

Log data is key to the early detection of security incidents; thus, using the Logging Service can improve your operational efficiency, enhance your security, and increase your visibility into your system's performance and errors. Log data is available 24/7 and is secure, high-performance, and scalable. Your data is encrypted before, during, and after transmission. The Fluent Bit agent gathers log data locally, while the Logging Service forms the basis for analysis and better visualization of your logs with increased security.

The managed Grafana in the Logging Service comes with a pre-configured datasource for the Telemetry API called IONOS Telemetry. You can use this datasource to query metrics from the IONOS Cloud Telemetry API. For more information, see Integration with IONOS Telemetry API.

Logging Service also enables you to configure an unlimited log retention period for your logs. For more information, see Modify the Log Retention Policy.

Components of Logging Service

The architecture of the Logging Service includes the following three main components that can be used to aggregate logs from various sources, analyze the gathered logs, and create visualizations for monitoring and report generation.

  • Data Collection: Data is collected from various log sources, such as applications, servers, and network devices, which are sent to a centralized Logging Service platform. Currently, the following log sources are supported: Kubernetes, Docker, Linux Systemd, HTTP (JSON REST API), and Generic.

  • Logging Service Platform: The Logging Service platform stores the logs gathered from various sources in a centralized location for easy access. This data can be accessed for analysis and troubleshooting. The platform includes log search, routing, storage, analytics, and visualization features.

  • Analytics and Visualization: Grafana, an analytics and visualization tool, allows you to analyze the log data to identify patterns and trends, visualize the log data, and generate reports. You can also use these reports to secure your log data from threats or troubleshoot underlying issues.

The illustration shows the default components of the Logging Service platform and the following is a brief description of the components:

  • Systems (1, 2, and n): These are the various log sources with Fluent Bit installed to gather, parse, and redirect logs to the Logging Service platform.

  • Logging Service Platform:

    • It contains a built-in log aggregator, Fluentd, which is compatible with a wide range of sources and targets, making it straightforward to ship logs from source to destination. It aggregates, processes, and ships data to the stipulated target.

Limitations

The following are the key limitations of Logging Service:

DCD How-Tos

You can set user privileges for accessing and managing Logging Service via the DCD.

Log Collection

A centralized Logging Service platform consists of two major components: Log Collection and Log Aggregation. The responsibilities of the platform provider and the user differ in the context of Logging Service.

Log Collection: The responsibility for log collection and its configuration lies with the user. This involves setting up mechanisms to gather log data from various sources within the infrastructure and applications. These mechanisms can include agents, log shippers, or APIs that send log data to a central location for storage and analysis.
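For illustration, the following is a minimal Fluent Bit input sketch for a generic file-based log source; the path and tag are placeholders, and a matching [OUTPUT] section (shown later in this document) is needed to forward the collected records:

[INPUT]
    # Hypothetical tail input; adjust Path to your application's log files
    Name   tail
    Path   /var/log/app/*.log
    Tag    <TAG>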

Log Aggregation: The Logging Service platform provider provides and manages the log aggregation component. This component involves the centralization of log data from multiple sources, making it accessible for analysis and visualization. The platform handles log storage, indexing, and search functionalities.

Use Cases

Scenario 1: Streamlined log centralization and analysis

Precondition

Maintaining log visibility is challenging in a distributed system with numerous services and environments. Logs are scattered across different nodes, making troubleshooting, performance monitoring, and security analysis complex tasks.

Set User Privileges for Logging Service

Users need appropriate privileges to access, create, and modify the Logging Service. Without the necessary privileges, users have read-only access and cannot provision changes. You can grant the appropriate privileges via the User Manager.

Warning: User privileges set using the IONOS Cloud API or the DCD apply to pipeline access only, not to Grafana access.

To allow users to access, create, and modify the Logging Service, follow these steps:

1. In the DCD, go to Menu > Management > Users & Groups.

2. Select the Groups tab in the User Manager window.

3. Select the appropriate group to assign relevant privileges.

4. In the Privileges tab, select Access and manage Logging Service to allow the associated group members to access and manage Logging Service.

Log Pipelines

A log pipeline refers to an instance or configuration of the Logging Service that you can create using the DCD or the API. To create an instance of the Logging Service, send your request to the designated regional endpoint for your desired location:

  • Berlin: https://logging.de-txl.ionos.com/pipelines

  • Frankfurt: https://logging.de-fra.ionos.com/pipelines

Modify a Logging Pipeline Instance

You can modify your logging pipeline by sending a PATCH request with a specific pipeline ID.

Note: To modify a logging pipeline, you can use the same payload that you use in the POST request for creating a logging pipeline. For more information, see Set Up a Logging Pipeline Instance.

The following is a sample request. Remember to replace the {pipelineID} with a valid ID of the respective logging pipeline.

Solution

Logging Service offers a streamlined approach to centralizing logs. Easily configure agents on your log sources to collect and forward logs to a central repository. These logs are securely transferred, efficiently stored, and indexed for analysis. You can create custom dashboards for real-time log visualization and analysis, helping you quickly detect and address issues, maintain security standards, and optimize your application's performance.

Scenario 2: Flexible log retention and storage

Precondition

Effectively managing log retention and storage is a critical operational challenge. Storing logs for an extended period can be costly and cumbersome, while inadequate retention policies may lead to losing valuable historical data needed for compliance, auditing, and troubleshooting.

Solution

Logging Service offers flexibility in log retention and storage. Configure retention policies to remove logs based on your organization's requirements automatically. It ensures you retain logs for as long as necessary without incurring high storage costs. Additionally, you can use the service to search and retrieve older logs when needed, simplifying compliance audits and historical analysis.

Scenario 3: Real-time log monitoring during development and deployment

Precondition

DevOps teams rely on real-time visibility into application logs during the development and deployment phases. Timely and continuous access to logs is essential for debugging, identifying issues, and ensuring smooth deployments.

Solution

Logging Service provides DevOps teams with near-real-time log monitoring capabilities. Integrate the service seamlessly into your development and deployment pipelines to capture and analyze logs as applications are built and deployed. As your application components interact and generate logs, Logging Service immediately ingests and makes them available for analysis. DevOps teams can set up alerts and notifications based on specific log events, ensuring rapid response to critical issues. This near-real-time log monitoring helps streamline development and deployment processes, reduces downtime, and ensures the successful release of reliable applications.

Set User Privileges for Logging Service

Manage user access and privileges for the Logging Service.

Create Logging Pipeline

Create a logging pipeline.

View Logging Pipelines

View the list of created logging pipelines.

Update a Logging Pipeline

Manage an existing logging pipeline by updating the instance details.

Delete a Logging Pipeline

Delete an existing logging pipeline.

Fluentd feeds logs to Loki, which stores and aggregates them before forwarding them to the visualization tool, Grafana. Loki also works as an aggregation tool that indexes and groups log streams based on their labels.

  • The logs are displayed in the Grafana dashboard. You may generate reports, edit your dashboards according to your needs, and visualize the data accordingly.

| Aspect | Description | Limit |
| --- | --- | --- |
| HTTP Rate Limit | Default rate limit for HTTP requests per pipeline during log ingestion. | 50 requests/second |
| TCP Bandwidth | Default TCP bandwidth limit per pipeline, approximately in terms of logs per second. | ~10,000 logs/second |
| Maximum Pipelines | The maximum number of pipelines allowed per contract. | 10 pipelines |
| Log Streams per Pipeline | The maximum number of log streams allowed per pipeline. | 5 log streams/pipeline |

    curl --location \
    --request PATCH 'https://logging.de-txl.ionos.com/pipelines/{pipelineID}' \
    --header "Authorization: Bearer $TOKEN" \
    --header 'Content-Type: application/json' \
    --data '{
        "properties": {
            "name": "new-logging-name",
            "logs": [
                {
                    "source": "kubernetes1",
                    "tag": "kub1",
                    "protocol": "tcp",
                    "destinations": [
                        {
                            "type": "loki",
                            "retentionInDays": 7
                        }
                    ]
                }
            ]
        }
    }'
    Log Agent

    Logs must be targeted and collected to be sent to the Logging Service platform for aggregation and analysis. This process is typically handled by log agents, which are responsible for collecting logs and forwarding them to the central logging platform.

    While various log agents are available, the Logging Service platform supports the Fluent Bit Log Agent. Fluent Bit is a lightweight and efficient log forwarder that can be installed on Linux, macOS, and Windows systems. For more information, see Fluent Bit's official website. It provides the necessary functionality to collect logs from different sources and push them to the Logging Service platform for further processing and analysis.

    Note:

    • Fluent Bit installation and configuration vary based on your Log Sources.

    • Ensure you follow the instructions provided by the Logging Service platform provider and refer to any additional documentation or guidelines they may offer for integrating Fluent Bit log agent into your logging infrastructure.

    Fluent Bit Configuration

    To ensure that the logs are shipped correctly and securely, ensure that you configure the following appropriately in Fluent Bit:

    • Log Server Endpoint: It refers to the address of your logging pipeline, where the logs will be sent after they are collected. You can obtain this endpoint from the REST API response.

    • Tag: To ensure an appropriate synchronization between the agent and the log server, configure a tag in the Fluent Bit log agent. It can be utilized for reporting purposes and aids in identifying and categorizing the logs.

    • Key: In addition to the TLS connection, Fluent Bit needs a Shared_Key configuration for authentication purposes. This key ensures that only authorized logs are sent to the logging pipeline. You can obtain a token via the REST API.

    Here is an example of a Fluent Bit configuration that needs an endpoint, a tag, and a key:

    Note: The user must perform any data masking or sanitization.

    [OUTPUT]
        Name            forward
        Match           *
        Port            9000
        Tag             <TAG>
        Host            <TCP_ENDPOINT>
        tls             on
        Shared_Key      <KEY>
    Grant privileges to access and manage the Logging Service

    Note: You can remove the privileges from the group by clearing Access and manage Logging Service.

    Result: The appropriate privilege is granted to the group and the users in the respective group.

    Create and manage sub-users

    For improved user management and delegation, you can establish sub-users and grant them the necessary permissions to use the Logging Service. Primary account owners can grant sub-users varying levels of access to their segment of the logging pipeline. Sub-users can see and manage only the pipelines the primary account owner has assigned to them; they cannot access the primary account or pipelines created by other sub-users.

    To create sub-users, follow these steps:

    1. In the DCD, go to Menu > Management > Users & Groups.

    2. Select the Groups tab in the User Manager window.

    3. Select the appropriate group to assign relevant privileges.

    4. In the Members tab, click + Add User and select a user(s) from the list.

    To delete an associated sub-user, click Remove User.

    Create sub-users

    Result: The sub-user(s) are created.

  • London: https://logging.gb-lhr.ionos.com/pipelines

  • Worcester: https://logging.gb-bhx.ionos.com/pipelines

  • Paris: https://logging.fr-par.ionos.com/pipelines

  • Logroño: https://logging.es-vit.ionos.com/pipelines

  • Lenexa: https://logging.us-mci.ionos.com/pipelines

  • When creating a log pipeline instance, you can define multiple log streams within each pipeline. Each stream functions as a separate log source, allowing you to organize and manage different sources of logs within your logging system.

    To differentiate the log sources and enable effective reporting, it is necessary to provide a unique tag for each log source within the pipeline instance. The tag serves as an identifier or label for the log source, allowing you to distinguish and track the logs from different sources easily.

    After the pipeline is set up, a unique endpoint is assigned to each pipeline, thus establishing a connection, either HTTP or TCP, with an independent log server. This endpoint serves as the designated destination for sending logs generated by all the log sources within the pipeline. However, to ensure proper categorization and differentiation, each log source in the pipeline configuration must use its designated tag. By adhering to this practice, the logs generated by each source can be accurately identified and traced, even when they are directed to the same endpoint.

    Logging Service Overview Design

    Limitations

    The Logging Service platform imposes specific limitations on the number of pipelines you can create and the log rate you can send to the log server.

    These limitations are determined by your pricing plan and are designed to ensure that all users receive optimal performance and resource allocation.

    Ingestion Rate

    By default, the platform sets an ingestion rate limit of 50 HTTP requests per second for each pipeline to prevent overloading the log server with excessive log data.


    Log Security

    The Logging Service is a versatile and accessible platform that allows you to conveniently store and manage logs from various sources. This platform offers a seamless log aggregation solution for logs generated within the IONOS infrastructure, by your bare metal system, or another cloud environment. With its flexibility, you can effortlessly push logs from anywhere, ensuring comprehensive log monitoring and analysis.

    The following two encryption mechanisms safeguard all HTTP or TCP communications that push logs:

    • Transport Layer Security (TLS)

    • KEY

      • If using HTTP, then the APIKEY must be specified in the header.

      • If using TCP, specify the Shared_Key.

    The key adds an extra layer of security; you can revoke or regenerate the existing key at any time.

    TCP (Fluent Bit)

    When using TCP, you must enable tls and provide a Shared_Key in the Fluent Bit configuration. The key can be obtained through the REST API. For more information, see Obtain a new Key.

    Note: To view a complete list of parameters, see Fluent Bit's official website.

    HTTP (cURL)

    If using HTTP (JSON), provide the key in the header as shown in the following example:

    HTTP (Fluent Bit)

    This is an equivalent example of configuring Fluent Bit with HTTP outbound:

    Note: To view a complete list of parameters, see Fluent Bit's official website.

    Integration with IONOS Telemetry API

    The Telemetry API allows you to interact with the IONOS Cloud Telemetry service and is compatible with Prometheus specifications.

    The Telemetry API allows retrieval of instance metrics; it is a read-only API and does not support any write operations. Although the Prometheus specification defines many more API resources and operations, the Telemetry API currently supports only the following GET operations:

    /api/v1/label
    /api/v1/query
    /api/v1/query_range
    /api/v1/metadata
    /api/v1/series
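    For example, a hedged sketch of an instant query against the supported /api/v1/query operation is shown below; the host is a placeholder for your Telemetry API endpoint, and the PromQL expression up is illustrative:

    # Placeholder host; use the same token as for the IONOS Cloud API
    curl --location \
    --request GET 'https://<TELEMETRY_API_ENDPOINT>/api/v1/query?query=up' \
    --header "Authorization: Bearer $TOKEN"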

    Integration with IONOS Telemetry API

    The managed Grafana in the Logging Service comes with a pre-configured datasource for the Telemetry API called IONOS Telemetry. You can use this datasource to query metrics from the IONOS Cloud Telemetry API.

    Authentication

    The Telemetry API uses the same authentication as the IONOS Cloud API. You can use the same API token to authenticate with the Telemetry API. This means you need to update the IONOS Telemetry datasource with your API token:

    Follow the instructions in Create New Tokens to generate a token. Then, add the following header to the IONOS Telemetry datasource:

    Header: Authorization
    Value: Bearer <TOKEN>

    Once the header is configured, select Save & test.

    Delete a Logging Pipeline Instance

    To delete your logging pipeline, you need the ID of the respective pipeline.

    Request

    The following is a sample request. Remember to replace the {pipelineID} with a valid ID of the specific pipeline that must be deleted.

    Features and Benefits

    This section lists the key features and benefits of using the Logging Service.

    Features

    • Scalability: It is designed to handle extensive volumes of logs and scale up or down according to needs.

    curl --location \
    --request DELETE 'https://logging.de-txl.ionos.com/pipelines/{pipelineID}' \
    --header 'Content-Type: application/json' \
    --header "Authorization: Bearer $TOKEN"
    [OUTPUT]
        Name            forward
        Match           *
        Port            9000
        Tag             <TAG>
        Host            <TCP_ENDPOINT>
        tls             on
        Shared_Key      <KEY>
    curl --location \
    --request POST 'https://12be6dbe134f-logs.3b0b424eb27f.logging.de-txl.ionos.com/myhttp' \
    --header 'Content-Type: application/json' \
    --header 'APIKEY: <KEY>' \
    --data '{
        "status": "Ready",
        "ts": 1580306777.04728,
        "pod": {
            "name": "Example Name",
            "namespace": "data"
        },
        "msg": "Pod status updated",
        "level": "error",
        "label_1": "test label"
    }'
    [OUTPUT]
        Name            http
        Match           *
        Host            <HTTPS_ENDPOINT>
        URI             /<TAG>
        Format          json
        Header          APIKEY <KEY>
        tls             on
        Port            443
    Response

    {
        "id": "1232-b8d5-42a0-9781-rgft",
        "type": "Pipeline",
        "metadata": {
            "createdDate": "2023-08-31T11:01:53Z",
            "createdBy": "[email protected]",
            "createdByUserId": "124829",
            "createdByUserUuid": "1243-fd97-48ce-a375-64r6",
            "lastModifiedDate": "2023-08-31T11:01:53Z",
            "lastModifiedBy": "[email protected]",
            "lastModifiedByUserId": "124829",
            "lastModifiedByUserUuid": "1243-fd97-48ce-a375-64r6",
            "status": "DESTROYING"
        },
        "properties": {
            "name": "any",
            "logs": [
                {
                    "public": true,
                    "source": "docker",
                    "tag": "dock",
                    "destinations": [
                        {
                            "type": "loki",
                            "retentionInDays": 14
                        }
                    ],
                    "protocol": "tcp"
                },
                {
                    "public": true,
                    "source": "kubernetes",
                    "tag": "k8s",
                    "destinations": [
                        {
                            "type": "loki",
                            "retentionInDays": 14
                        }
                    ],
                    "protocol": "tcp"
                }
            ],
            "tcpAddress": "tcp-434-logs.3434.logging.de-txl.ionos.com:9000",
            "httpAddress": "",
            "grafanaAddress": "grafana.logging.de-txl.ionos.com"
        }
    }

  • Availability: The logs are available 24/7 and can be accessed from anywhere using a web-based interface.

  • Security: The logs are secured using necessary encryption, access controls, and audit trails.

  • Customization: The service can be customized according to your needs. For example, defining log retention periods, setting up alerts, or creating custom dashboards.

  • Sub-user Management: The Logging Service allows the primary account owner to create sub-users and delegate pipeline management responsibilities.

    Benefits

    • Reduced Costs: It eliminates the need to invest in hardware and software for log management. Also, you only pay for the services you use, resulting in significant cost savings.

    • Increased Efficiency: It automates log management tasks such as log collection, storage, and analysis; thus, reducing the time and effort required to manage logs manually.

    • Improved Troubleshooting: It provides real-time access to log data, which facilitates speedy problem identification and resolution.

    • Compliance: It helps organizations meet compliance requirements by providing secure log storage and access controls.

    • Manage Log Sources: You can create different pipelines for different environments or sources. This separation is also an advantage, as each pipeline instance has its own key for the log server. In addition, each pipeline can have multiple log sources differentiated by a tag.

    • Auto Log Source Labeling: Seamless label synchronization is available for standard log sources, such as Docker, Kubernetes, and Linux Systemd. The log sources are labeled automatically and can be reported or analyzed in Grafana with the same labels. Labels are relative to each source. For example, namespace for Kubernetes or container_id for Docker. You can also define your custom labels while creating a pipeline without standard log sources.

    • Grafana Features: Grafana, a visualization platform, is rich in capabilities that make the service more advantageous from a competitive standpoint. The supported features include dashboards, alerting options, custom datastores, etc.

    • Data Encryption:

      • Server-Side Encryption (SSE): All data is encrypted at the data store level using S3 Server-Side Encryption.

      • Client-Side Encryption (CSE): Besides SSE, an additional CSE layer encrypts data before it is transferred to the data store. Even if SSE is bypassed to gain access to the data store, the data cannot be decrypted.

    • Sub-user Management: A sub-user is a user who has access to the Logging Service but is not an administrator or an owner.

      • Create Sub-users: The primary account owner can create sub-user accounts and assign permissions to them.

      • Limited Access: Sub-users can only view and manage the pipelines assigned to them by the primary account owner. They cannot access pipelines created by other sub-users or the primary account.

    Note: All sub-user accounts in Grafana have at least a Viewer role, which allows them to see dashboards. The primary account owner can assign additional roles to sub-users as needed.

    Create a Logging Pipeline

    A logging pipeline refers to an instance of the logging service set up to receive, process, and store log data from various sources.

    To create a logging pipeline, you need to follow these steps:

    Prerequisite: Ensure you have the corresponding permissions to create and manage the Logging Service. Only contract administrators, owners, and users with Access and manage Logging Service privileges can create a logging pipeline. For more information, see Set User Privileges.

    1. In the DCD, go to Menu > Observability > Logging Service.

    2. Click Create logging pipeline from the Logging Service overview page.

    3. Configure the following details for a logging pipeline:

    4. Click Create to create a logging pipeline.

    Result: The logging pipeline is successfully created. An authorization key is provided, which you will need in your log agent's configuration to send logs.

    Note: You can create a maximum of 10 logging pipelines, and each pipeline can be configured with a maximum of 5 log sources.

    Define logging pipeline properties

    To define the logging pipeline properties, enter the following details:

    1. Name: Enter a name for the logging pipeline. The name is only for your reference and does not have to be unique. Example: my-logging-pipeline.

    2. Region: Select a region from the drop-down list to create the logging pipeline in that specific region. The region determines where your logs will be processed and stored.

    Define log sources

    The log sources refer to the origins from which the log agent will collect logs. You can define multiple log sources for a single logging pipeline, and each log source can be configured with specific settings.

    To define the log sources for a logging service, enter the following details:

    1. Source Type: Select the log source type from the drop-down list. It defines the source from which you want to send logs. The available options are:

    • Kubernetes: For sending logs from a Kubernetes cluster.

    • Docker: For sending logs from Docker containers.

    • Systemd: For sending logs from the systemd journal.

    • Generic: For sending logs from any other source that is not covered by the above options.

    2. Tag: Enter a unique tag for the log source. The tag is used to identify the log source and can be used for filtering and searching logs later. Example: myk8s.

    3. Protocol: Select one of the protocols from the drop-down list for sending logs. The available options are:

    • TCP: For sending logs over TCP.

    • HTTP: For sending logs over HTTP.

    4. Retention: Choose the retention period for the logs collected from this source. The retention period determines how long the logs will be stored before they are automatically deleted. The available options are:

    • 7 days

    • 14 days

    • 30 days

    • Unlimited

    5. Click Add source to define another log source. Repeat steps 1 through 4 to add the log source, and then click Create. You can add up to 5 log sources per logging pipeline.

    Warning: Unlimited retention can lead to high storage costs and performance issues. It is advisable to set a reasonable retention period based on your logging requirements.

    Note: You will receive a key upon successful pipeline creation. Save it in a secure place, as it is shown only once, at pipeline creation.

    View Logging Pipelines

    Once a logging pipeline is successfully created, the instance is listed on the Logging Service overview page.

    To view the logging pipeline details, follow these steps:

    1. In the DCD, go to Menu > Observability > Logging Service.

    Result: A list of the created logging pipelines is displayed. For every instance listed, you can view the following details:

    • NAME: Displays the name of the logging pipeline. Select the name to view the respective pipeline details.

    • STATE: Displays the state of the logging pipeline. Possible values are as follows:

      • Available: The logging pipeline is available and in good condition.

      • Provisioning: The logging pipeline is being created or updated.

      • Destroying: The logging pipeline is being deleted.

    • REGION: Displays the region where the logging pipeline is created.

    • CREATION DATE: Displays the date of creation of the logging pipeline.

    • LOG SOURCES: Displays the number of log sources configured for the logging pipeline.

    • OPTIONS: Select to perform the following:

      • View & Edit: You can view and edit the selected logging pipeline details.

      • Copy TCP endpoint: Copy the TCP endpoint of the logging pipeline to use this endpoint in the log agent configuration, if applicable.

      • Copy HTTP endpoint: Copy the HTTP endpoint of the logging pipeline to use this endpoint in the log agent configuration, if applicable.

    View details of a selected Logging Pipeline

    For the selected logging pipeline, you can view the following details:

    • Properties: The following system information related to the logging pipeline is displayed:

      • Name: Displays the name of the logging pipeline.

      • Region: Displays the region of the logging pipeline.

      • Status: Displays the state of the logging pipeline.

    • Log sources: Refers to the log sources defined for the logging pipeline. For more information, see Define log sources.

    FAQs

    What is a Logging Service?

    The Logging Service platform provides a centralized and scalable solution for collecting, monitoring, and analyzing logs from your application and infrastructure. It utilizes specialized components to aggregate, parse, and store the logs seamlessly.

    API How-Tos

    The Logging Service offers regional APIs that enable programmatic interaction with the platform. These APIs serve various purposes: task automation, system integration, and platform functionality extension. Additionally, the APIs allow you to filter logs based on different criteria, such as the date range, log level, and source.

    Sub-user access control

    A sub-user is a user who has access to the Logging Service but is not an administrator or an owner. A crucial IONOS access control restriction prevents sub-users from viewing or modifying pipelines that belong to other sub-user accounts; only the primary administrator retains full cross-pipeline privileges. Ensure that sub-user pipeline ownership and access permissions align with your organizational needs.

    If a sub-user account creates a pipeline, access is restricted only to that sub-user and the primary administrator. Other sub-users cannot access or perform CRUD operations on the respective pipeline. For example, if sub-user A creates Pipeline 1, only sub-user A and the primary administrator account can view, edit, delete, or manage Pipeline 1. No other sub-user accounts will have access to it.

    Update a Logging Pipeline

    Once a logging pipeline is successfully created, the instance is listed on the Logging Service overview page.

    To update the logging pipeline details, follow these steps:

    1. In the DCD, go to Menu > Observability > Logging Service.

    2. In the Logging Service overview page, select the logging pipeline to update.

    3. In View & Edit, update the logging pipeline details as needed:

    • Properties: Refers to the generic properties of the logging pipeline. To update these details, see Define logging pipeline properties.

    Delete a Logging Pipeline

    To delete a logging pipeline, follow these steps:

    1. In the DCD, go to Menu > Observability > Logging Service.

    2. From the Logging Service overview page, select the logging pipeline you want to delete. This page lists all the created logging pipelines.

    3. In the Options column for the selected logging pipeline, click and select Delete Pipeline.

    Result: The selected logging pipeline is successfully deleted and no longer displayed in the Logging Service overview page.

  • Simple Client-Side Setup: Only three items are mandatory for the client-side setup: an endpoint, a tag, and a Shared_Key.

  • Primary Account Controls Sub-users: The primary account owner has complete administrative privileges to view, edit, delete, and manage all pipelines created by any sub-user under the account.

  • Better Access Control: Sub-user functionality allows larger teams and organizations to share access to the logging platform while limiting access to sensitive data. Primary account owners maintain oversight and control without getting overwhelmed.

  • Improved Delegation: Rather than broadly sharing keys and credentials, primary account owners can selectively grant access to sub-users for their portion of the logging pipeline. This partitioning facilitates wider use while enhancing security.

  • Failed: When the logging pipeline creation or update has failed.

  • Copy Grafana endpoint: Copy the Grafana endpoint of the logging pipeline to use this endpoint in the Grafana instance for visualization and reporting purposes.

  • Copy Pipeline UUID: Copy the UUID of the logging pipeline to use this identifier in the API calls.

  • Delete Pipeline: Deletes the selected logging pipeline. In the dialog box that appears, select Delete to confirm deletion. For more information, see Delete a Logging Pipeline.

  • UUID: The unique ID of the logging pipeline.

  • Creation date: Specifies the date and time of logging pipeline creation.

  • Grafana Endpoint: The Grafana endpoint URL for the logging pipeline, which is used to access the Grafana instance for visualization and reporting purposes.

  • TCP Endpoint: The TCP endpoint of the logging pipeline, which is used to send logs from the log agent, if applicable.

  • HTTP Endpoint: The HTTP endpoint of the logging pipeline, which is used to send logs from the log agent, if applicable.

    What changed in the product during its transition from Early Access to General Availability?

    The product contains enhancements that allow additional users under your contract number to create pipelines and log in to Grafana with their IONOS credentials. Additionally, you can enable group members to access and manage the Logging Service by granting specific access rights to the group. For more information, see Set User Privileges for Logging Service.

    What are the current limitations of Logging Service?

    Logging Service has its own set of limitations. For more information, see Logging Service Limitations.

    How safe is my data if the same Logging Service platform is shared with multiple users?

    Your data is secure and not shared with users using the same platform. The following defines the level of security provided by the platform:

    • Each pipeline uses its own unique endpoint to send logs to the log server over Transport Layer Security (TLS).

    • Each user's data is kept in a separate partition.

    • Each chunk of data within a partition is secured by server-side and client-side encryption.

    Can I restrict the traffic sent to Logging Service endpoints?

    Yes. We recommend configuring firewall rules using the authorized IP addresses associated with each endpoint to restrict traffic to the Logging Service endpoints. The following is a list of locations and their specific IP addresses:

| Location | Region-specific Endpoints | IP addresses |
| --- | --- | --- |
| Berlin | https://logging.de-txl.ionos.com | 85.215.196.160, 85.215.203.152, 85.215.133.99, 87.106.141.230 |
| Logroño | https://logging.es-vit.ionos.com | 194.164.165.102, 85.215.203.152, 85.215.248.62, 85.215.248.97 |
| London | https://logging.gb-lhr.ionos.com | 217.160.157.83, 217.160.157.82, 217.160.157.89, 82.165.230.137 |
| Worcester | https://logging.gb-bhx.ionos.com | 194.164.92.15, 82.165.196.8, 79.99.40.114, 82.165.196.23 |
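As a minimal sketch of such firewall rules, assuming a Linux host using ufw and taking the Berlin addresses above as an example, outbound traffic could be allowed as follows (combined with an appropriate default-deny outbound policy):

# Hypothetical egress rules for the Berlin endpoint's authorized IPs
sudo ufw allow out to 85.215.196.160 port 443 proto tcp
sudo ufw allow out to 85.215.203.152 port 443 proto tcp
sudo ufw allow out to 85.215.133.99 port 443 proto tcp
sudo ufw allow out to 87.106.141.230 port 443 proto tcp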

    Warning: User privileges set using the IONOS Cloud API or the DCD apply to pipeline access only, not to Grafana access.

    Endpoints

    A regional endpoint is necessary to interact with the Logging Service REST API endpoints. Currently, IONOS supports only the following regions for the Logging Service:

    • Berlin: https://logging.de-txl.ionos.com/pipelines

    • Frankfurt: https://logging.de-fra.ionos.com/pipelines

    • London: https://logging.gb-lhr.ionos.com/pipelines

    • Worcester: https://logging.gb-bhx.ionos.com/pipelines

    • Paris: https://logging.fr-par.ionos.com/pipelines

    • Logroño: https://logging.es-vit.ionos.com/pipelines

    • Lenexa: https://logging.us-mci.ionos.com/pipelines

    Note: We recommend using the authorized IP addresses associated with each endpoint if you need to configure firewall rules to restrict traffic sent to the Logging Service endpoints. This ensures that traffic is routed only through the authorized IP addresses for each endpoint. For more information about the authorized IP addresses, see FAQs.

    Request parameter headers

    To interact with the Logging Service REST API endpoints, the header must contain the following values:

| Header | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | yes | string | A Bearer $TOKEN string that is tied to your account. For information on generating tokens, see Create New Tokens. |
| Content-Type | yes | string | Set this to application/json. |

    Quick Links

    Here are some of the most common API How-Tos for the Logging Service:

    We recommend you refer to the following after creating an instance via the API:

    Log Sources: This section defines the log sources for the logging pipeline. You can perform the following actions:

    • Edit log source: Use this option to edit an existing log source.

    • Delete log source: Use this option to delete an existing log source.

    • Add log source: You can create a new log source. To do so, follow the steps in Define log sources.

  • Renew Key: This option allows you to renew the authorization key for the logging pipeline. The key is used in your log agent's Fluent Bit configuration to send logs.

    4. Click Save to update the logging pipeline details with the changes made.

    Result: The logging pipeline is successfully updated.


    Retrieve Logging Pipeline Information

    To retrieve your logging pipeline information, you need the ID of the respective pipeline.

    Request

    The following is a sample request. Remember to replace the {pipelineID} with a valid ID of the specific pipeline whose information you want to access.

    curl --location \
    --request GET 'https://logging.de-txl.ionos.com/pipelines/{pipelineID}' \
    --header 'Content-Type: application/json' \
    --header "Authorization: Bearer $TOKEN"

    Response

    If your request is successful, you will receive the relevant information on a Ready pipeline.

    To access logs using Fluent Bit, you need the following information:

    Note: A key is necessary to send logs through the newly formed logging pipeline. For more information about obtaining a key, see Obtain a new Key.

    Set Up a Logging Pipeline Instance

    It is necessary to create an instance of the logging pipeline before sending log data to the Logging Service platform. For more information, see Log Pipelines.

    For more information about the complete list of available sources, see Log Sources.


    Log Sources

    It is essential to identify the origin of the logs before choosing the right approach to installing and configuring the agent. To provide convenient parsing and data labeling, IONOS accepts logs from the following four log sources: Kubernetes, Docker, Linux Systemd, and HTTP. The configuration of these log sources varies accordingly.

    Note: Technically, you can send logs with Fluent Bit from any source, as long as the TCP or HTTP communication protocol is supported. However, convenient parsing is currently offered only for the specified log sources.
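    One hedged way to verify a pipeline end to end before wiring up a real source is Fluent Bit's dummy input, which emits a synthetic record at a fixed interval; the tag, endpoint, and key below are placeholders:

    [INPUT]
        # Hypothetical connectivity test: emits a synthetic record once per second
        Name    dummy
        Dummy   {"message":"pipeline-connectivity-test"}
        Tag     <TAG>

    [OUTPUT]
        Name        forward
        Match       *
        Host        <TCP_ENDPOINT>
        Port        9000
        tls         on
        Shared_Key  <KEY>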

| Location | Region-specific Endpoints | IP addresses |
| --- | --- | --- |
| Frankfurt | https://logging.de-fra.ionos.com | 82.215.75.93, 85.215.75.92, 217.160.218.121, 217.160.218.124 |
| Paris | https://logging.fr-par.ionos.com | 5.250.177.220, 5.250.178.35, 5.250.177.226, 5.250.178.34 |
| Lenexa | https://logging.us-mci.ionos.com | 74.208.249.232, 209.46.121.39, 69.48.204.20, 209.46.121.225 |

    Set Up a Logging Pipeline Instance

    Create an instance of a logging pipeline.

    Obtain a new Key

    Obtain a new key for a logging pipeline.

    Modify a Logging Pipeline Instance

    Update an existing logging pipeline.

    Retrieve Logging Pipeline Information

    Retrieve information about a specific logging pipeline.

    Delete a Logging Pipeline Instance

    Delete a specific logging pipeline.

    Modify the Log Retention Policy

    Customize the retention policy for each log stream.

    Integration with IONOS Telemetry API

    Use the pre-configured IONOS Telemetry API datasource to query metrics from the IONOS Cloud Telemetry API.

    Send Logs to Platform

    Learn how to send logs to the platform.

    Access Logs from Platform

    Learn how to access logs from the platform.

    Create New Tokens


| Field | Usage |
| --- | --- |
| tcpAddress | Set the TCP log server address during Fluent Bit configuration. |
| httpAddress | Set the HTTP log server address during Fluent Bit configuration. |
| tag | Set the tag during Fluent Bit configuration. Remember to use the same tag you defined while creating the pipeline. |
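    For instance, given a tcpAddress such as tcp-abcd-logs.defg.logging.de-txl.ionos.com:9000 (as in the sample response in this section) and a pipeline created with the tag k8s, the host and port would be split across the Fluent Bit output fields as in this sketch (values are illustrative):

    [OUTPUT]
        # Host and Port come from tcpAddress; Tag must match the tag defined at pipeline creation
        Name        forward
        Match       *
        Host        tcp-abcd-logs.defg.logging.de-txl.ionos.com
        Port        9000
        Tag         k8s
        tls         on
        Shared_Key  <KEY>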


    Request to create a logging pipeline

    The following request creates an instance of a logging pipeline with two log streams: docker and kubernetes.

    Warning:

    • IONOS supports unique email addresses across all contracts in each region.

    Response

    The following is a sample response. The values returned by each response differ based on the request.

    You may notice that the pipeline's status is temporarily set to the PROVISIONING state while provisioning is in progress. A GET request retrieves information about the pipeline and its status. For more information, see Retrieve Logging Pipeline Information.

    Create a pipeline with custom labels

    Log sources like Kubernetes, Docker, and Linux Systemd collect and offer relevant labels. You can use these labels to analyze reports and query the dashboard. However, if you want to label additional fields from the log sources, you can define custom labels when you create a pipeline, as shown in the sample request later in this section.

    Kubernetes

    This method lets you collect and ship your Kubernetes application's logs. Fluent Bit's documentation offers extensive guidance on setting it up on your Kubernetes cluster. However, we also recommend that you try our Kubernetes configuration examples before configuring the log source.
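    As a hedged sketch, records collected on a cluster node are commonly enriched with pod metadata using Fluent Bit's kubernetes filter before being forwarded; the match pattern below is illustrative and assumes inputs tagged with the kube. prefix:

    [FILTER]
        # Hypothetical filter stage; enriches records with Kubernetes metadata
        Name       kubernetes
        Match      kube.*
        Merge_Log  On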

    Docker

    If you have a set of applications on Docker, we recommend trying our Docker configuration examples. You can also find more information about Docker configuration on Fluent Bit's official website.
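    For orientation, here is a hedged sketch of collecting Docker container logs with Fluent Bit's tail input and the built-in docker parser; the path assumes the default json-file logging driver, and the docker parser from Fluent Bit's stock parsers.conf is assumed to be loaded:

    [INPUT]
        # Hypothetical input; requires the docker parser from parsers.conf
        Name    tail
        Path    /var/lib/docker/containers/*/*-json.log
        Parser  docker
        Tag     <TAG>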

    Linux Systemd

    To set up Fluent Bit on a Linux system with systemd or journals, you must install an appropriate package for your Linux distribution. For more information about how to accomplish it, see Fluent Bit's official website. We also recommend you try our Linux systemd sample configuration.
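    A hedged sketch of reading from the systemd journal with Fluent Bit's systemd input follows; the unit filter is illustrative and can be omitted to collect all journal entries:

    [INPUT]
        # Hypothetical input; Systemd_Filter narrows collection to a single unit
        Name            systemd
        Tag             <TAG>
        Systemd_Filter  _SYSTEMD_UNIT=docker.service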

    HTTP

    You can also send logs through the HTTP REST endpoint. Only logs in JSON format can be transferred via the HTTP REST endpoint.

    The following is an example:

    {
      "id": "123-5e72-4c48-b28e-4567",
      "type": "Pipeline",
      "metadata": {
        "createdDate": "2023-05-15T08:24:20Z",
        "createdBy": "[email protected]",
        "createdByUserId": "2895",
        "createdByUserUuid": "3423-2ad6-4826-b12f-3245",
        "lastModifiedDate": "",
        "lastModifiedBy": "",
        "lastModifiedByUserId": "",
        "lastModifiedByUserUuid": "",
        "status": "Ready"
      },
      "properties": {
        "name": "demo",
        "logs": [
          {
            "public": true,
            "source": "docker",
            "tag": "dock",
            "destinations": [
              {
                "type": "loki"
              }
            ],
            "protocol": "tcp",
            "status": "Ready"
          },
          {
            "public": true,
            "source": "kubernetes",
            "tag": "k8s",
            "destinations": [
              {
                "type": "loki"
              }
            ],
            "protocol": "tcp",
            "status": "Ready"
          }
        ],
        "tcpAddress": "tcp-abcd-logs.defg.logging.de-txl.ionos.com:9000",
        "httpAddress": "",
        "grafanaAddress": "grafana.logging.de-txl.ionos.com"
      }
    }
    curl --location \
    --request POST 'https://logging.de-txl.ionos.com/pipelines' \
    --header 'Content-Type: application/json' \
    --header "Authorization: Bearer $TOKEN" \
    --data '{
      "properties": {
        "name": "demo",
        "logs": [
          {
            "source": "docker",
            "tag": "dock",
            "protocol": "tcp",
            "labels": [
              "label1d"
            ],
            "destinations": [
              {
                "type": "loki",
                "retentionInDays": 7
              }
            ]
          },
          {
            "source": "kubernetes",
            "tag": "k8s",
            "protocol": "tcp",
            "destinations": [
              {
                "type": "loki",
                "retentionInDays": 14
              }
            ]
          }
        ]
      }
    }'
    {
        "id": "aaaaa-bbbb-1234-cccc-dddd",
        "type": "Pipeline",
        "metadata": {
            "createdDate": "2023-10-19T11:48:31Z",
            "createdBy": "[email protected]",
            "createdByUserId": "ID",
            "createdByUserUuid": "UUID",
            "lastModifiedDate": "2023-10-19T11:48:31Z",
            "lastModifiedBy": "[email protected]",
            "lastModifiedByUserId": "ID",
            "lastModifiedByUserUuid": "UUID",
            "status": "PROVISIONING"
        },
        "properties": {
            "name": "demo",
            "logs": [
                {
                    "public": true,
                    "source": "docker",
                    "tag": "dock",
                    "destinations": [
                        {
                            "type": "loki",
                            "retentionInDays": 7
                        }
                    ],
                    "labels": [
                        "label1d"
                    ],
                    "protocol": "tcp"
                },
                {
                    "public": true,
                    "source": "kubernetes",
                    "tag": "k8s",
                    "destinations": [
                        {
                            "type": "loki",
                            "retentionInDays": 14
                        }
                    ],
                    "protocol": "tcp"
                }
            ],
            "tcpAddress": "",
            "httpAddress": "",
            "grafanaAddress": "",
            "key": "key"
        }
    }
    curl --location \
    --request POST 'https://logging.de-txl.ionos.com/pipelines' \
    --header 'Content-Type: application/json' \
    --header "Authorization: Bearer $TOKEN" \
    --data '{
        "properties": {
            "name": "demo",
            "logs": [
                {
                    "source": "docker",
                    "tag": "dock",
                    "protocol": "tcp",
                    "labels": [
                        "label1",
                        "label2"
                    ]
                }
            ]
        }
    }'
    curl --location \
    --request POST 'https://12be6dbe134f-logs.3b0b424eb27f.logging.de-txl.ionos.com/myhttp' \
    --header 'Content-Type: application/json' \
    --header 'APIKEY: <KEY>' \
    --data '{
        "status": "Ready",
        "ts": 1580306777.04728,
        "pod": {
            "name": "Example Name",
            "namespace": "data"
        },
        "msg": "Pod status updated",
        "level": "error",
        "label_1": "test label"
    }'

    Obtain a new Key

    A key is necessary to send logs over the logging pipeline. For more information about the purpose of a pipeline key, see Log Security.

    Warning:

    • IONOS adheres to a Share Once policy when generating a key; there is no way to retrieve a key if it is lost. Hence, we recommend that you store the generated key securely.

    • The previous key is instantly revoked when you generate a new key for a specific pipeline.

    Request

    To get a new key for a pipeline, you can use the following request. Remember to replace the {pipelineID} with a valid ID of a pipeline to access its key.

    Response

    The response contains the key necessary to configure the Shared_Key in Fluent Bit.

    curl --location \
    --request POST 'https://logging.de-txl.ionos.com/pipelines/{pipelineID}/key' \
    --header 'Content-Type: application/json' \
    --header "Authorization: Bearer $TOKEN"
    {
      "key": "key123"
    }