Maintaining log visibility is challenging in a distributed system with numerous services and environments. Logs are scattered across different nodes, making troubleshooting, performance monitoring, and security analysis complex tasks.
Logging Service offers a streamlined approach to centralizing logs. Easily configure agents on your log sources to collect and forward logs to a central repository. These logs are securely transferred, efficiently stored, and indexed for analysis. You can create custom dashboards for real-time log visualization and analysis, helping you quickly detect and address issues, maintain security standards, and optimize your application's performance.
Effectively managing log retention and storage is a critical operational challenge. Storing logs for an extended period can be costly and cumbersome, while inadequate retention policies may lead to losing valuable historical data needed for compliance, auditing, and troubleshooting.
Logging Service offers flexibility in log retention and storage. Configure retention policies to automatically remove logs based on your organization's requirements. This ensures you retain logs for as long as necessary without incurring high storage costs. Additionally, you can use the service to search and retrieve older logs when needed, simplifying compliance audits and historical analysis.
DevOps teams rely on real-time visibility into application logs during the development and deployment phases. Timely and continuous access to logs is essential for debugging, identifying issues, and ensuring smooth deployments.
Logging Service provides DevOps teams with near-real-time log monitoring capabilities. Integrate the service seamlessly into your development and deployment pipelines to capture and analyze logs as applications are built and deployed. As your application components interact and generate logs, Logging Service immediately ingests and makes them available for analysis. DevOps teams can set up alerts and notifications based on specific log events, ensuring rapid response to critical issues. This near-real-time log monitoring helps streamline development and deployment processes, reduces downtime, and ensures the successful release of reliable applications.
A log pipeline refers to an instance or configuration of the Logging Service that you can create using the REST API. To create an instance of the Logging Service, send a request to the designated endpoint, such as `https://logging.de-txl.ionos.com/pipelines`. This is an example endpoint; the actual endpoint may vary.
Within each pipeline instance, it is possible to define multiple log streams, where each stream functions as a separate log source. These log streams allow you to organize and manage different sources of logs within your logging system.
Note: Depending on your pricing model, each pipeline has a specific limit on the log rate you can send to the log server.
To differentiate the log sources and enable effective reporting, it is necessary to provide a unique tag for each log source within the pipeline instance. The tag serves as an identifier or label for the log source, allowing you to distinguish and track the logs from different sources easily.
After the pipeline is set up, a unique endpoint is assigned to it, establishing a connection, either HTTP or TCP, with an independent log server. This endpoint serves as the designated destination for logs generated by all the log sources within the pipeline. To ensure proper categorization and differentiation, each log source in the pipeline configuration must use its designated tag. By adhering to this practice, the logs generated by each source can be accurately identified and traced, even when they are directed to the same endpoint.
A centralized Logging Service platform consists of two major components: Log Collection and Log Aggregation. The responsibilities of the platform provider and the user differ in the context of Logging Service.
Log Collection: The responsibility for log collection and its configuration lies with the user. This involves setting up mechanisms to gather log data from various sources within the infrastructure and applications. These mechanisms can include agents, log shippers, or APIs that send log data to a central location for storage and analysis.
Log Aggregation: The Logging Service platform provider provides and manages the log aggregation component. This component involves the centralization of log data from multiple sources, making it accessible for analysis and visualization. The platform handles log storage, indexing, and search functionalities.
Logs must be collected and forwarded to the Logging Service platform for aggregation and analysis. This process is typically facilitated by log agents, which are responsible for collecting logs at the source and forwarding them to the central logging platform.
While various log agents are available, the Logging Service platform supports the Fluent Bit Log Agent. Fluent Bit is a lightweight and efficient log forwarder that can be installed on Linux, macOS, and Windows systems. For more information, see Fluent Bit's official website. It provides the necessary functionality to collect logs from different sources and push them to the Logging Service platform for further processing and analysis.
Note:
Fluent Bit installation and configuration vary based on your Log Sources.
Ensure you follow the instructions provided by the Logging Service platform provider and refer to any additional documentation or guidelines they may offer for integrating Fluent Bit log agent into your logging infrastructure.
To ensure that logs are shipped correctly and securely, configure the following appropriately in Fluent Bit:
Log Server Endpoint: The address of your logging pipeline, to which the logs are sent after they are collected. You can obtain this endpoint from the REST API response.
Tag: To ensure proper synchronization between the agent and the log server, configure a tag in the Fluent Bit log agent. The tag can be utilized for reporting purposes and aids in identifying and categorizing the logs.
Key: In addition to the TLS connection, Fluent Bit needs a `Shared_Key` configuration for authentication purposes. This key ensures that only authorized logs are sent to the logging pipeline. You can obtain a token via the REST API.
Here is an example of a Fluent Bit configuration that needs an `endpoint`, a `tag`, and a `key`:
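The following is a minimal sketch using the `forward` (TCP) output plugin. The `tail` input is only an illustrative source, and the `Host`, `Tag`, and `Shared_Key` values are placeholders you must replace with the values from your pipeline:

```ini
[INPUT]
    # Illustrative input; replace with the plugin that matches your log source
    Name  tail
    Path  /var/log/app/*.log
    # Use the same tag you defined when creating the pipeline
    Tag   mytag

[OUTPUT]
    Name        forward
    Match       mytag
    # Log server endpoint (tcpAddress) from the REST API response
    Host        <tcpAddress>
    Port        9000
    # Key obtained via the REST API
    Shared_Key  <key>
    tls         On
    tls.verify  On
```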
Note: The user must perform any data masking or sanitization.
The Logging Service is a versatile and accessible platform that allows you to conveniently store and manage logs from various sources. This platform offers a seamless log aggregation solution for logs generated within the IONOS infrastructure, by your bare metal system, or another cloud environment. With its flexibility, you can effortlessly push logs from anywhere, ensuring comprehensive log monitoring and analysis.
The following two encryption mechanisms safeguard all HTTP or TCP communications that push logs:
TLS (Transport Layer Security)
Key
If using HTTP, the `APIKEY` must be specified in the header. If using TCP, specify the `Shared_Key`.
The key adds an extra layer of security: you can revoke or regenerate the existing key at any time.
When using TCP or TLS, you must enable `tls` and provide a `Shared_Key` in the Fluent Bit configuration. The key can be obtained via our REST API. For more information, see Obtain a new Key.
Note: To view a complete list of parameters, see Fluent Bit's official website.
If using HTTP (JSON), provide the key in the header as shown in the following example:
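The following curl sketch is illustrative only: the endpoint placeholder and the JSON body are assumptions, while the `APIKEY` header is the one described above:

```bash
# Replace <httpAddress> with the address from the pipeline response
# and <key> with the key obtained via the REST API
curl --request POST \
  --url https://<httpAddress> \
  --header 'APIKEY: <key>' \
  --header 'Content-Type: application/json' \
  --data '{"message": "test log entry"}'
```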
This is an equivalent example of configuring Fluent Bit with HTTP outbound:
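A sketch of such an outbound configuration is shown below; the `Host`, `URI`, and key values are placeholders, and the header name follows the `APIKEY` convention described above:

```ini
[OUTPUT]
    Name    http
    Match   mytag
    # HTTP log server endpoint (httpAddress) from the REST API response
    Host    <httpAddress>
    Port    443
    URI     /
    Format  json
    tls     On
    # Key sent as a request header for authentication
    Header  APIKEY <key>
```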
Note: To view a complete list of parameters, see Fluent Bit's official website.
This section lists the key features and benefits of using the Logging Service.
Scalability: It is designed to handle extensive volumes of logs and scale up or down according to needs.
Availability: The logs are available 24/7 and can be accessed from anywhere using a web-based interface.
Security: The logs are secured using necessary encryption, access controls, and audit trails.
Customization: The service can be customized according to your needs. For example, defining log retention periods, setting up alerts, or creating custom dashboards.
Reduced Costs: It eliminates the need to invest in hardware and software for log management. Also, you pay only for the services you use, resulting in significant cost savings.
Increased Efficiency: It automates log management tasks such as log collection, storage, and analysis, reducing the time and effort required to manage logs manually.
Improved Troubleshooting: It provides real-time access to log data, which facilitates speedy problem identification and resolution.
Compliance: It helps organizations meet compliance requirements by providing secure log storage and access controls.
Manage Log Sources: You can create different pipelines for different environments or sources. This separation also adds security, as each pipeline instance has its own `Key` for the logging server. Also, each pipeline can have multiple log sources differentiated by a `tag`.
Auto Log Source Labeling: Seamless label synchronization is available for standard log sources, such as Docker, Kubernetes, and Linux Systemd. The log sources are labeled automatically and can be reported on or analyzed in Grafana with the same labels. Labels are relative to each source, for example, `namespace` for Kubernetes or `container_id` for Docker. You can also define custom labels when creating a pipeline without standard log sources.
Grafana Features: Grafana, a visualization platform, is rich in capabilities that make the service more advantageous from a competitive standpoint. The supported features include dashboards, alerting options, custom datastores, etc.
Data Encryption:
Server-Side Encryption (SSE): All data is encrypted at the data store level using S3 Server-Side Encryption.
Client-Side Encryption (CSE): In addition to SSE, client-side encryption encrypts data before it is transferred to the data store. Even if SSE were bypassed to gain access to the data store, the data could not be decrypted.
Simple Client-Side Setup: Only the following are mandatory to set up the client side: `Endpoint`, `Tag`, and a `Shared_key`.
The IONOS Logging Service is a cloud-based service that allows you to store, manage, and analyze logs generated by your applications or systems. It is a scalable and cost-effective solution for organizations that need to manage and analyze large volumes of logs.
Logging Service is also an efficient log management solution, which contains several important features and benefits that can help organizations improve their log management practices and troubleshoot issues more quickly.
Log data is key to the early detection of security incidents; using the Logging Service can therefore improve your operational efficiency, enhance your security, and increase your visibility into your system's performance and errors. Log data is available 24/7 and is secure, high-performance, and scalable. Your data is encrypted before, during, and after transmission. The Fluent Bit agent gathers log data locally, while the Logging Service forms the basis for analysis and better visualization of your logs with increased security.
The architecture of the Logging Service includes the following three main components that can be used to aggregate logs from various sources, analyze the gathered logs, and create visualizations for monitoring and report generation.
Data Collection: Data is collected from various log sources, such as applications, servers, and network devices, which are sent to a centralized Logging Service platform. Currently, the following log sources are supported: Kubernetes, Docker, Linux Systemd, HTTP (JSON REST API), and Generic.
Logging Service Platform: The Logging Service platform stores the logs gathered from various sources in a centralized location for easy access. This data can be accessed for analysis and troubleshooting. The platform includes log search, routing, storage, analytics, and visualization features. For more information, see Log Collection.
Analytics and Visualization: Grafana, an analytics and visualization tool, allows you to analyze the log data to identify patterns and trends, visualize the log data, and generate reports. You can also use these reports to secure your log data from threats or troubleshoot underlying issues.
The illustration shows the default components of the Logging Service platform and the following is a brief description of the components:
Systems (1, 2, and n): These are the various log sources with Fluent Bit installed to gather, parse, and redirect logs to the Logging Service platform.
Logging Service Platform:
It contains a built-in log aggregator, Fluentd, which is compatible with a wide range of sources and targets, making it straightforward to ship logs from source to destination. It aggregates, processes, and ships data to the stipulated target.
Fluentd feeds logs to Loki, which stores and aggregates them before forwarding them to the visualization tool Grafana. Loki also works as an aggregation tool that indexes and groups log streams based on the labels.
The logs are displayed in the Grafana dashboard. You may generate reports, edit your dashboards according to your needs, and visualize the data accordingly.
It is essential to identify the origin of the logs before choosing the right approach to installing and configuring the agent. To provide convenient parsing and data labeling, IONOS accepts logs from the following four log sources: Kubernetes, Docker, Linux Systemd, and HTTP. The configuration of these log sources varies accordingly.
Note: Technically, you can send logs with Fluent Bit from any source as long as one of the supported communication protocols, TCP or HTTP, is used. However, convenient parsing is currently offered only for the specified log sources.
This method lets you collect and ship your Kubernetes application's logs. Fluent Bit's official website offers a wide range of information on how to set it up on your Kubernetes cluster. However, we also recommend you try our sample configuration before configuring the log source.
If you have a set of applications on Docker, we recommend trying our sample configuration. You can also find more information about Docker configuration on Fluent Bit's official website.
To set up Fluent Bit on a Linux system with systemd or journals, you must install an appropriate package for your Linux distribution. For more information about how to accomplish this, see Fluent Bit's official website. We also recommend you try our sample configuration.
You can also send logs through the HTTP REST endpoint. Only logs in JSON format can be transferred via the HTTP REST endpoint.
The following is an example:
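The sketch below shows a JSON log record pushed over HTTP; the endpoint placeholder and the payload fields are illustrative assumptions:

```bash
# Replace <httpAddress> and <key> with your pipeline's values
curl --request POST \
  --url https://<httpAddress> \
  --header 'APIKEY: <key>' \
  --header 'Content-Type: application/json' \
  --data '{"level": "info", "message": "user login succeeded", "service": "auth"}'
```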
The Logging Service offers regional APIs that enable programmatic interaction with the platform. These APIs serve various purposes: task automation, system integration, and platform functionality extension. Additionally, the APIs allow you to filter logs based on different criteria, such as the date range, log level, and source.
A regional endpoint is necessary to interact with the Logging Service REST API endpoints. Currently, IONOS supports only the following endpoint for the Berlin region:
https://logging.de-txl.ionos.com/pipelines
To interact with the Logging Service REST API endpoints, the header must contain the following values:
| Header | Required | Type | Description |
|---|---|---|---|
| Authorization | yes | string | Provide a valid `Bearer $TOKEN`. |
| Content-Type | yes | string | Set this to `application/json`. |
Here are some of the most common API How-Tos for the Logging Service:

- Set user privileges for Logging Service.
- Create an instance of a logging pipeline.
- Obtain a new key for a logging pipeline.
- Update an existing logging pipeline.
- Retrieve information about a specific logging pipeline.
- Delete a specific logging pipeline.
- Customize the retention policy for each log stream.

We recommend you refer to the following after creating an instance via the API:

- Steps to send logs to the logging platform.
- Steps to access logs from the logging platform.
Users need appropriate privileges to access, create, and modify the Logging Service. Without the necessary privileges, users have read-only access and cannot provision changes. You can grant appropriate privileges via the User Manager.
To allow users to access, create, and modify Logging Service, follow these steps:
Log in to the DCD with your username and password.
In the DCD menu, select Management > Users & Groups under Users.
Select the Groups tab in the User Manager window.
Select the appropriate group to assign relevant privileges.
In the Privileges tab, select the Access and manage Logging Service checkbox to allow the associated group members to access and manage Logging Service.
Note: You can remove the privileges from the group by clearing the Access and manage Logging Service checkbox.
Result: The appropriate privileges are granted to the group and the users in the respective group.
It is necessary to create an instance of the logging pipeline before sending log data to the Logging Service platform. For more information, see Set Up a Logging Pipeline Instance.
When sending a request to create a logging pipeline, you can specify a unique tag of your choice for each log source. For more information about the complete list of available sources, see Log Sources.
The following request creates an instance of a logging pipeline with two log streams: `docker` and `kubernetes`.
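A sketch of such a request is shown below. The JSON payload shape (`properties`, `name`, `logs`, `source`, `tag`, `protocol`) is an illustrative assumption; consult the API reference for the exact schema:

```bash
curl --request POST \
  --url https://logging.de-txl.ionos.com/pipelines \
  --header "Authorization: Bearer $TOKEN" \
  --header 'Content-Type: application/json' \
  --data '{
    "properties": {
      "name": "demo-pipeline",
      "logs": [
        { "source": "docker",     "tag": "dockertag", "protocol": "http" },
        { "source": "kubernetes", "tag": "k8stag",    "protocol": "http" }
      ]
    }
  }'
```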
The following is a sample response. The values returned by each response differ based on the request.
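The shape below is illustrative only. The fields mentioned elsewhere in this documentation (`tcpAddress`, `httpAddress`, `grafanaAddress`, and the `PROVISIONING` status) are grounded; the remaining structure and all values are assumptions:

```json
{
  "id": "11111111-2222-3333-4444-555555555555",
  "metadata": { "status": "PROVISIONING" },
  "properties": {
    "name": "demo-pipeline",
    "grafanaAddress": "https://grafana.example.logging.ionos.com",
    "tcpAddress": "tcp.example.logging.ionos.com:9000",
    "httpAddress": "https://http.example.logging.ionos.com",
    "logs": [
      { "source": "docker",     "tag": "dockertag", "protocol": "http" },
      { "source": "kubernetes", "tag": "k8stag",    "protocol": "http" }
    ]
  }
}
```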
Log sources like Kubernetes, Docker, and Linux Systemd collect and offer relevant labels. You can use these labels to analyze reports and query the dashboard. However, if you want to label additional fields from the log sources, you can define custom labels as follows when you create a pipeline:
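For example, a log stream entry in the creation payload might carry a `labels` array; the field name is an assumption, so check the API reference:

```json
{
  "source": "kubernetes",
  "tag": "k8stag",
  "protocol": "http",
  "labels": ["app", "team"]
}
```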
The IONOS Logging Service provides a centralized and scalable solution for logging, monitoring, and analyzing your application and infrastructure logs. It offers a wide range of features to help you collect and analyze your logs from different log sources, monitor performance effectively, and gain insights into your system's behavior.
Note: Logging Service is:
currently available only through the API, without the DCD implementation.
available in the Berlin region and will soon expand to include more areas.
Get an overview of the Logging Service.
Set relevant privileges via the DCD.
Get started with Logging Service via the API.
You may notice that the pipeline's status is temporarily set to the `PROVISIONING` state while provisioning is in progress. A `GET` request retrieves information about the pipeline and its status. For more information, see Retrieve Logging Pipeline Information.
To get answers to the most commonly encountered questions about the Logging Service platform, see the FAQs.
You can modify your logging pipeline by sending a `PATCH` request with a specific pipeline ID.
Note: To modify a logging pipeline, you can use the same payload that you use in the `POST` request for creating a logging pipeline. For more information, see Set Up a Logging Pipeline Instance.
The following is a sample request. Remember to replace the `{pipelineID}` with a valid ID of the respective logging pipeline.
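A sketch of such a request, reusing the payload shape assumed in the creation example above:

```bash
curl --request PATCH \
  --url https://logging.de-txl.ionos.com/pipelines/{pipelineID} \
  --header "Authorization: Bearer $TOKEN" \
  --header 'Content-Type: application/json' \
  --data '{
    "properties": {
      "name": "demo-pipeline-renamed",
      "logs": [
        { "source": "docker", "tag": "dockertag", "protocol": "http" }
      ]
    }
  }'
```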
A key is necessary to send logs over the logging pipeline. For more information about the purpose of a pipeline key, see Log Security.
Warning:
IONOS adheres to a Share Once policy to generate a key; there is no alternative method to retrieve it if lost. Hence, we recommend that you secure the generated key.
The previous key is instantly revoked when you generate a new key for a specific pipeline.
To get a new key for a pipeline, you can use the following request. Remember to replace the `{pipelineID}` with a valid ID of a pipeline to access its `key`.
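A sketch of the request, assuming the key is exposed as a `key` sub-resource of the pipeline (verify the exact path in the API reference):

```bash
curl --request POST \
  --url https://logging.de-txl.ionos.com/pipelines/{pipelineID}/key \
  --header "Authorization: Bearer $TOKEN"
```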
The response contains the key necessary to configure the `Shared_Key` in Fluent Bit.
Each log stream in your pipeline is initially assigned a default data retention policy of 30 days. The logs for each log stream are retained for the specified number of days. However, you can define a custom retention policy for each log stream. The available options for retention periods include 7, 14, and 30 days.
Note: You can alternatively use the `PATCH` request to update the retention policy of an existing pipeline.
The following is an example:
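For instance, a log stream entry in the pipeline payload might specify the retention period per destination as follows; the `destinations` and `retentionInDays` field names are assumptions, so check the API reference:

```json
{
  "source": "docker",
  "tag": "dockertag",
  "protocol": "http",
  "destinations": [
    { "type": "loki", "retentionInDays": 7 }
  ]
}
```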
To retrieve your logging pipeline information, you need the ID of the respective pipeline.
The following is a sample request. Remember to replace the `{pipelineID}` with a valid ID of the specific pipeline whose information you want to access.
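A minimal sketch of the request:

```bash
curl --request GET \
  --url https://logging.de-txl.ionos.com/pipelines/{pipelineID} \
  --header "Authorization: Bearer $TOKEN"
```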
If your request is successful, you will receive the relevant information on a `Ready` pipeline.
To send logs using Fluent Bit, you need the following information:
Note: A key is necessary to send logs through the newly formed logging pipeline. For more information about obtaining a key, see Obtain a New Key.
After creating a logging pipeline via the REST API, complete the logging pipeline configuration process using the following:
Prerequisites: To send logs to the logging platform, you must have the following:
A `Ready` pipeline instance to obtain the `tcpAddress` or `httpAddress`.
The pipeline key that you already obtained.
Fluent Bit log agent installed on your platform. For more information, see Fluent Bit's official website.
Based on your infrastructure—whether it uses Kubernetes, Docker, or Linux Systemd—you may follow different instructions to set up and install Fluent Bit. However, ensure that the Fluent Bit's output configuration contains the following:
A log server endpoint: either a `tcpAddress` or an `httpAddress`, based on your log source.
A `Shared_Key`, which is required for authentication.
The `Tag` that you defined while creating the pipeline.
Here is an example of a Fluent Bit configuration that needs an endpoint, tag, and a key:
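The following is a minimal sketch using the `forward` (TCP) output; the `Host`, `Tag`, and `Shared_Key` values are placeholders you must replace with your pipeline's values:

```ini
[OUTPUT]
    Name        forward
    Match       *
    # Overwrites the transmitted tag; use the tag defined when creating the pipeline
    Tag         <tag>
    # tcpAddress from the pipeline response
    Host        <tcpAddress>
    # Port 9000 is required for TCP (see the warning below)
    Port        9000
    # Key obtained via the REST API
    Shared_Key  <key>
    tls         On
    tls.verify  On
```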
Warning: The port must be set to 9000 if you are using the TCP protocol.
Fluent Bit can be configured to expose more verbose logs for troubleshooting purposes.
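For example, raising the service log level, a standard Fluent Bit setting, prints more diagnostic detail:

```ini
[SERVICE]
    # Valid levels include error, warning, info (default), and debug
    Log_Level debug
```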
An observability platform is necessary for visualization and reporting purposes. IONOS uses Grafana to enable you to meet your visualization needs.
Note:
You can log in to your Grafana instance with your IONOS credentials.
Your Grafana configurations are associated with your contract number. If you have configured users with their unique email addresses, they can access Grafana only with those specific email addresses linked to the contract number. Users cannot log into Grafana using any other email address unrelated to the contract.
You can obtain your Grafana instance address by sending a `GET` request to the server. For more information, see Retrieve Logging Pipeline Information. The response contains the `grafanaAddress`. This address remains the same for all your logging pipelines.
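For example, a sketch that extracts the address with `jq`, assuming the response carries it under `properties.grafanaAddress` as in the illustrative response shape shown earlier:

```bash
curl --silent \
  --url https://logging.de-txl.ionos.com/pipelines/{pipelineID} \
  --header "Authorization: Bearer $TOKEN" \
  | jq -r '.properties.grafanaAddress'
```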
The Logging Service platform provides a centralized and scalable solution for collecting, monitoring, and analyzing logs from your application and infrastructure. It utilizes specialized components to aggregate, parse, and store the logs seamlessly.
The product contains enhancements that allow additional users under your contract number to create pipelines and log in to Grafana with their IONOS credentials. Additionally, you can enable group members to access and manage the Logging Service by granting specific access rights to the group. For more information, see Set User Privileges for Logging Service.
The default HTTP rate limit is set to 50 requests per second.
Your data is secure and not shared with other users of the same platform. The following defines the level of security provided by the platform:
Each pipeline uses its unique endpoint to send logs to the log server over the Transport Layer Security (TLS).
Each user's data is kept in a separate partition.
Each chunk of data within the partition is secured by server-side and client-side encryptions.
A `Bearer $TOKEN` is a string that is tied to your account. For information on generating tokens, see the IONOS authentication documentation.
| Field | Usage |
|---|---|
| `tcpAddress` | Set the TCP log server address during Fluent Bit configuration. |
| `httpAddress` | Set the HTTP log server address during Fluent Bit configuration. |
| `tag` | Set the tag during Fluent Bit configuration. Remember to use the same tag you defined while creating the pipeline. |