Before installing and configuring the Fluent Bit agent, it is essential to identify the origin of your logs so that you can choose the right approach. To provide convenient parsing and data labeling, IONOS accepts logs from the following four log sources: Kubernetes, Docker, Linux Systemd, and HTTP. The configuration varies for each of these log sources.
Note: Technically, you can send logs with Fluent Bit from any source that supports the TCP or HTTP communication protocol. However, convenient parsing is currently offered only for the specified log sources.
This method lets you collect and ship your Kubernetes application's logs. Fluent Bit provides extensive documentation on how to set it up on your Kubernetes cluster. However, we also recommend trying our Kubernetes configuration examples before configuring the log source.
If you have a set of applications on Docker, we recommend trying our Docker configuration examples. You can also find more information about Docker configuration on Fluent Bit's official website.
To set up Fluent Bit on a Linux system with systemd (journald), you must install an appropriate package for your Linux distribution. For more information about how to accomplish this, see Fluent Bit's official website. We also recommend trying our Linux systemd sample configuration.
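As a minimal sketch, a systemd input in Fluent Bit could look like the following; the tag and unit filter are illustrative and must be adapted to your pipeline and services:

```
[INPUT]
    # Read logs from the systemd journal; the filter narrows collection
    # to a single unit (illustrative value).
    Name            systemd
    Tag             mysystemd
    Systemd_Filter  _SYSTEMD_UNIT=docker.service
```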
You can also send logs through the HTTP REST endpoint. Note that only logs in JSON format can be transferred via this endpoint.
The following is an example of a log entry in the accepted JSON format; the field names are illustrative:
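```json
{
  "level": "info",
  "message": "user logged in",
  "timestamp": "2023-05-17T10:15:30Z"
}
```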
This section lists the key features and benefits of using the Logging Service.
Scalability: It is designed to handle extensive volumes of logs and scale up or down according to needs.
Availability: The logs are available 24/7 and can be accessed from anywhere using a web-based interface.
Security: The logs are secured using necessary encryption, access controls, and audit trails.
Customization: The service can be customized according to your needs. For example, defining log retention periods, setting up alerts, or creating custom dashboards.
Sub-user Management: The Logging Service allows the primary account owner to create sub-users and delegate pipeline management responsibilities.
Reduced Costs: It eliminates the need to invest in hardware and software for log management. Also, you only pay for the services utilized, resulting in significant cost savings.
Increased Efficiency: It automates log management tasks such as log collection, storage, and analysis; thus, reducing the time and effort required to manage logs manually.
Improved Troubleshooting: It provides real-time access to log data, which facilitates speedy problem identification and resolution.
Compliance: It helps organizations meet compliance requirements by providing secure log storage and access controls.
Manage Log Sources: You can create different pipelines for different environments or sources; each pipeline instance has its own `Key` for the Logging Server. Also, each pipeline can have multiple log sources differentiated by a `tag`.
Auto Log Source Labeling: Seamless label synchronization is available for standard log sources, such as Docker, Kubernetes, and Linux Systemd. The log sources are labeled automatically and can be reported or analyzed in Grafana with the same labels. Labels are relative to each source: for example, `namespace` for Kubernetes or `container_id` for Docker. You can also define your custom labels while creating a pipeline without standard log sources.
Grafana Features: Grafana, a visualization platform, offers rich capabilities that make the service more advantageous from a competitive standpoint. The supported features include dashboards, alerting options, custom datastores, and more.
Data Encryption:
Server-Side Encryption (SSE): All data is encrypted at the data store level using S3 Server-Side Encryption.
Client-Side Encryption (CSE): In addition to SSE, CSE encrypts data before it is transferred to the data store. Even if SSE is bypassed to gain access to the data store, the data cannot be decrypted.
Simple Client-Side Setup: Only three parameters are mandatory for the client-side setup: `Endpoint`, `Tag`, and `Shared_key`.
Sub-user Management: A sub-user is a user who has access to the Logging Service but is not an administrator or an owner.
Create Sub-users: The primary account owner can create sub-user accounts and assign permissions to them.
Limited Access: Sub-users can only view and manage the pipelines assigned to them by the primary account owner. They cannot access pipelines created by other sub-users or the primary account.
Primary Account Controls Sub-users: The primary account owner has complete administrative privileges to view, edit, delete, and manage all pipelines created by any sub-user under the account.
Better Access Control: Sub-user functionality allows larger teams and organizations to share access to the logging platform while limiting access to sensitive data. Primary account owners maintain oversight and control without getting overwhelmed.
Improved Delegation: Rather than broadly sharing keys and credentials, primary account owners can selectively grant access to sub-users for their portion of the logging pipeline. This partitioning facilitates wider use while enhancing security.
Maintaining log visibility is challenging in a distributed system with numerous services and environments. Logs are scattered across different nodes, making troubleshooting, performance monitoring, and security analysis complex tasks.
Logging Service offers a streamlined approach to centralizing logs. Easily configure agents on your log sources to collect and forward logs to a central repository. These logs are securely transferred, efficiently stored, and indexed for analysis. You can create custom dashboards for real-time log visualization and analysis, helping you quickly detect and address issues, maintain security standards, and optimize your application's performance.
Effectively managing log retention and storage is a critical operational challenge. Storing logs for an extended period can be costly and cumbersome, while inadequate retention policies may lead to losing valuable historical data needed for compliance, auditing, and troubleshooting.
Logging Service offers flexibility in log retention and storage. Configure retention policies to automatically remove logs based on your organization's requirements. This ensures you retain logs for as long as necessary without incurring high storage costs. Additionally, you can use the service to search and retrieve older logs when needed, simplifying compliance audits and historical analysis.
DevOps teams rely on real-time visibility into application logs during the development and deployment phases. Timely and continuous access to logs is essential for debugging, identifying issues, and ensuring smooth deployments.
Logging Service provides DevOps teams with near-real-time log monitoring capabilities. Integrate the service seamlessly into your development and deployment pipelines to capture and analyze logs as applications are built and deployed. As your application components interact and generate logs, Logging Service immediately ingests and makes them available for analysis. DevOps teams can set up alerts and notifications based on specific log events, ensuring rapid response to critical issues. This near-real-time log monitoring helps streamline development and deployment processes, reduces downtime, and ensures the successful release of reliable applications.
Managing access controls for logging data becomes exponentially complex for large, distributed teams. Trying to maintain security while enabling broad log access leads to increased risk, compliance issues, and operational bottlenecks.
Logging Service provides a robust sub-user management system to simplify access control across large teams. The primary account owner can create sub-users and selectively grant access to certain pipelines, sources, and log data views.
Note: A sub-user is a user who has access to the Logging Service but is not an administrator or an owner.
Sub-users can only view the log data they have been explicitly granted permission to access. This compartmentalized access control delivers several advantages:
Primary account owners maintain complete ownership and oversight of the total logging pipeline management.
Sub-users are granted appropriate access without undesired privileges or access to sensitive information.
Compliance and auditing functions have clear boundaries around access.
Organizational changes can be accommodated by adjusting sub-user permissions.
Logging Service is a cloud-based service that allows you to store, manage, and analyze logs generated by your applications or systems. It is a scalable and cost-effective solution for organizations that need to manage and analyze large volumes of logs.
Logging Service is also an efficient log management solution that offers several important features and benefits, which can help organizations improve their log management practices and troubleshoot issues more quickly.
Log data is key to the early detection of security incidents; thus, using the Logging Service can improve your operational efficiency, enhance your security, and increase your visibility into your system's performance and errors. Your log data is available 24/7 on a secure, high-performance, and scalable platform. Your data is encrypted before, during, and after it is transmitted. The Fluent Bit agent gathers log data locally, while the Logging Service forms the basis for analysis and better visualization of your logs with increased security.
The managed Grafana in the Logging Service comes with a pre-configured datasource for the Telemetry API called IONOS Telemetry. You can use this datasource to query metrics from the IONOS Cloud Telemetry API. For more information, see Integration with IONOS Telemetry API.
Logging Service also enables you to configure an unlimited log retention period for your logs. For more information, see Modify the Log Retention Policy.
The architecture of the Logging Service includes the following three main components that can be used to aggregate logs from various sources, analyze the gathered logs, and create visualizations for monitoring and report generation.
Data Collection: Data is collected from various log sources, such as applications, servers, and network devices, which are sent to a centralized Logging Service platform. Currently, the following log sources are supported: Kubernetes, Docker, Linux Systemd, HTTP (JSON REST API), and Generic.
Logging Service Platform: The Logging Service platform stores the logs gathered from various sources in a centralized location for easy access. This data can be accessed for analysis and troubleshooting. The platform includes log search, routing, storage, analytics, and visualization features. For more information, see Log Collection.
Analytics and Visualization: Grafana, an analytics and visualization tool, allows you to analyze the log data to identify patterns and trends and to visualize the log data or generate reports. You can also use these reports to secure your log data from threats or troubleshoot underlying issues.
The illustration shows the default components of the Logging Service platform and the following is a brief description of the components:
Systems (1, 2, and n): These are the various log sources with Fluent Bit installed to gather, parse, and redirect logs to the Logging Service platform.
Logging Service Platform:
It contains a built-in log aggregator, Fluentd, which is compatible with various sources and targets; hence, shipping logs from source to destination is hassle-free. Fluentd aggregates, processes, and ships data to the stipulated target.
Fluentd feeds logs to Loki, which stores and aggregates them before forwarding them to the visualization tool Grafana. Loki also works as an aggregation tool that indexes and groups log streams based on their labels.
The logs are displayed in the Grafana dashboard. You may generate reports, edit your dashboards according to your needs, and visualize the data accordingly.
The following are the key limitations of the Logging Service:
A log pipeline refers to an instance or configuration of the Logging Service you can create using the REST API. To create an instance of the Logging Service, you can request the designated regional endpoint based on your desired location:
Berlin: https://logging.de-txl.ionos.com/pipelines
Frankfurt: https://logging.de-fra.ionos.com/pipelines
London: https://logging.gb-lhr.ionos.com/pipelines
Paris: https://logging.fr-par.ionos.com/pipelines
Logroño: https://logging.es-vit.ionos.com/pipelines
When creating a log pipeline instance, you can define multiple log streams within each pipeline. Each stream functions as a separate log source, allowing you to organize and manage different sources of logs within your logging system.
To differentiate the log sources and enable effective reporting, it is necessary to provide a unique tag for each log source within the pipeline instance. The tag serves as an identifier or label for the log source, allowing you to distinguish and track the logs from different sources easily.
After the pipeline is set up, a unique endpoint is assigned to it, establishing a connection, either HTTP or TCP, with an independent log server. This endpoint serves as the designated destination for sending logs generated by all the log sources within the pipeline. However, to ensure proper categorization and differentiation, each log source in the pipeline configuration must use its designated tag. By adhering to this practice, the logs generated by each source can be accurately identified and traced, even when they are directed to the same endpoint.
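For illustration, a pipeline could be created with a POST request to a regional endpoint, as in the following sketch. The request body shown here is an assumption for illustration only; consult the REST API reference for the authoritative schema and field names:

```sh
# Sketch only: the body fields ("properties", "logs", "source", "tag",
# "protocol") are assumed for illustration; see the REST API reference
# for the authoritative schema.
curl --request POST "https://logging.de-txl.ionos.com/pipelines" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer <token>" \
  --data '{
    "properties": {
      "name": "my-pipeline",
      "logs": [
        { "source": "kubernetes", "tag": "my-k8s-logs", "protocol": "http" }
      ]
    }
  }'
```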
The Logging Service platform imposes specific limitations on the number of pipelines you can create and the log rate you can send to the log server.
These limitations are determined by your pricing plan and are designed to ensure that all users receive optimal performance and resource allocation.
By default, the platform sets an ingestion rate limit of 50 HTTP requests per second for each pipeline to prevent overloading the log server with excessive log data.
The Logging Service is a versatile and accessible platform that allows you to conveniently store and manage logs from various sources. This platform offers a seamless log aggregation solution for logs generated within the IONOS infrastructure, by your bare metal system, or another cloud environment. With its flexibility, you can effortlessly push logs from anywhere, ensuring comprehensive log monitoring and analysis.
The following two mechanisms safeguard all HTTP or TCP communications that push logs:
TLS (Transport Layer Security): Encrypts the communication channel between the log agent and the log server.
KEY: If using HTTP, the `APIKEY` must be specified in the header; if using TCP, specify the `Shared_Key`.
The key adds an extra layer of security, and you can revoke or regenerate the existing key at any time.
When using TCP or TLS, you must enable `tls` and provide a `Shared_Key` in the Fluent Bit configuration. The key can be obtained via our REST API. For more information, see .
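For example, a minimal `forward` output over TLS could look like the following sketch; the host and key are placeholders, and 24224 is the conventional forward-protocol port:

```
[OUTPUT]
    # Ship all records to the pipeline's TCP endpoint over an encrypted
    # connection, authenticated with the shared key.
    Name        forward
    Match       *
    Host        <pipeline-endpoint>
    Port        24224
    tls         On
    Shared_Key  <your-shared-key>
```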
Note: To view a complete list of parameters, see .
If using HTTP (JSON), provide the key in the header, as shown in the following example; the endpoint and key are placeholders:
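```sh
# Placeholder endpoint and key; use the values returned for your pipeline.
curl --request POST "https://<pipeline-endpoint>" \
  --header "Content-Type: application/json" \
  --header "APIKEY: <your-key>" \
  --data '{"message": "example log line", "level": "info"}'
```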
The following is an equivalent sketch of configuring Fluent Bit with an HTTP output; the host and key are placeholders:
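```
[OUTPUT]
    # Send records as JSON over HTTPS; the APIKEY header authenticates
    # the agent against the logging pipeline.
    Name    http
    Match   *
    Host    <pipeline-endpoint>
    Port    443
    URI     /
    Format  json
    tls     On
    Header  APIKEY <your-key>
```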
A centralized Logging Service platform consists of two major components: Log Collection and Log Aggregation. The responsibilities of the platform provider and the user differ in the context of Logging Service.
Log Collection: The responsibility for log collection and its configuration lies with the user. This involves setting up mechanisms to gather log data from various sources within the infrastructure and applications. These mechanisms can include agents, log shippers, or APIs that send log data to a central location for storage and analysis.
Log Aggregation: The Logging Service platform provider provides and manages the log aggregation component. This component involves the centralization of log data from multiple sources, making it accessible for analysis and visualization. The platform handles log storage, indexing, and search functionalities.
Logs must be targeted and collected to be sent to the Logging Service platform for aggregation and analysis. This process is typically facilitated by log agents, which are responsible for collecting and forwarding logs to the central logging platform.
While various log agents are available, the Logging Service platform supports the Fluent Bit log agent. Fluent Bit is a lightweight and efficient log forwarder that can be installed on Linux, macOS, and Windows systems. For more information, see . It provides the necessary functionality to collect logs from different sources and push them to the Logging Service platform for further processing and analysis.
Note:
Fluent Bit installation and configuration vary based on your log source.
Ensure you follow the instructions provided by the Logging Service platform provider and refer to any additional documentation or guidelines they may offer for integrating Fluent Bit log agent into your logging infrastructure.
To ship logs correctly and securely, ensure that you configure the following appropriately in Fluent Bit:
Log Server Endpoint: It refers to the address of your logging pipeline, where the logs will be sent after they are collected. You can obtain this endpoint from the REST API response.
Tag: To ensure proper synchronization between the agent and the log server, configure a tag in the Fluent Bit log agent. The tag can be used for reporting purposes and helps identify and categorize the logs.
Key: In addition to the TLS connection, Fluent Bit needs a `Shared_Key` configuration for authentication purposes. This key ensures that only authorized logs are sent to the logging pipeline. You can obtain the key via the REST API.
Here is an example of a Fluent Bit configuration that requires an `endpoint`, a `tag`, and a `key`; placeholder values are shown:
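```
[INPUT]
    # Collect logs from the systemd journal under the pipeline's tag.
    Name        systemd
    Tag         mysystemd

[OUTPUT]
    # Forward only records carrying the matching tag to the pipeline
    # endpoint over TLS, authenticated with the shared key.
    Name        systemd
    Name        forward
    Match       mysystemd
    Host        <pipeline-endpoint>
    Port        24224
    tls         On
    Shared_Key  <your-shared-key>
```

The `Tag` on the input must match the tag configured for the corresponding log source in your pipeline; otherwise, the log server cannot attribute the records to the correct source.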
Note: The user must perform any data masking or sanitization.
Note: To view a complete list of parameters, see .
| Aspect | Description | Limit |
| --- | --- | --- |
| Service Access | Means of creating and managing pipelines. | REST API only |
| HTTP Rate Limit | Default rate limit for HTTP requests per pipeline during log ingestion. | 50 requests/second |
| TCP Bandwidth | Default TCP bandwidth limit per pipeline, approximately in terms of logs per second. | ~10,000 logs/second |
| Maximum Pipelines | The maximum number of pipelines allowed per contract. | 5 pipelines |
| Log Streams per Pipeline | The maximum number of log streams allowed per pipeline. | 10 log streams/pipeline |