Depending on the origin of your logs, you may need to adopt a different approach to installing and configuring FluentBit.
To provide convenient parsing and data labeling, we accept four different log sources, each of which requires its own setup and configuration.
You may need this method to collect and ship your Kubernetes applications' logs. The FluentBit documentation offers a wide range of information on how to set up FluentBit on your Kubernetes cluster. However, you are welcome to try our Example Kubernetes Configuration.
If you have a set of applications on Docker, we recommend using our Example Docker Configuration. You can find more information about Docker configuration on FluentBit's official website.
To set up FluentBit on a Linux system with systemd and journald, you need to install the appropriate package for your distribution. Instructions on doing so can be found on FluentBit's official website. You can find a sample configuration in our Example Section.
You can even send logs in a JSON format through our HTTP REST endpoint. Example:
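A minimal sketch of such a request, assuming your pipeline's `httpAddress` and key (both covered later in this document); the payload is a free-form JSON record:

```sh
# Illustrative only: push one JSON log record to your pipeline's HTTP endpoint.
# <httpAddress> and <pipeline-key> come from the pipeline REST API.
curl -X POST "https://<httpAddress>" \
  -H "Content-Type: application/json" \
  -H "APIKEY: <pipeline-key>" \
  -d '{"tag": "myapptag", "message": "user login succeeded", "level": "info"}'
```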
Note: Technically, you can send logs with FluentBit from any source as long as the communication protocol is supported: TCP or HTTP (JSON). However, at the moment, we provide convenient parsing only for the log sources listed above.
The IONOS Logging Service is a cloud-based service that allows you to store, manage, and analyze logs generated by your applications or systems. It is a scalable and cost-effective solution for organizations that need to manage and analyze large volumes of logs.
Logging Service is also an efficient log management solution. It has a number of features and benefits that can help organizations improve their log management practices and troubleshoot issues more quickly.
The architecture of the Logging Service includes three main components:
Data Collection: Data is collected from various sources such as applications, servers, and network devices. This data is then sent to a centralized logging platform.
Logging Platform: The logging platform stores the logs in a centralized location, which you can access for analysis and troubleshooting. The platform includes log search, routing, storage, analytics, and visualization features.
Analytics and Visualization: Analytics and visualization tools allow you to analyze and visualize the log data. This helps you identify patterns and trends and troubleshoot issues.
Logging Service offers the following key features:
Scalability: The service can handle large volumes of logs and scale up or down per your needs.
Availability: The service is available 24/7, and logs can be accessed from anywhere using a web-based interface.
Security: The service ensures the security of logs by providing encryption, access controls, and audit trails.
Customization: Customize the service per your needs, such as defining log retention periods, setting up alerts, and creating custom dashboards.
Logging Service offers the following benefits:
Reduced Costs: Logging Service eliminates the need to invest in hardware and software for log management. You only pay for the services you use, which can result in significant cost savings.
Increased Efficiency: Logging Service automates log management tasks such as log collection, storage, and analysis. This reduces the time and effort required to manage logs manually.
Improved Troubleshooting: Logging Service provides real-time access to log data, which helps you identify and troubleshoot issues quickly.
Compliance: Logging Service can help organizations meet compliance requirements by providing secure log storage and access controls.
Available location: Berlin
Based on the chosen plan, there are some limitations:
Rate and Bandwidth limitations:
Default HTTP rate limit: 50 requests per second.
Default TCP bandwidth: approximately 15,000 logs per second.
Maximum 10 pipelines.
Maximum 3 log streams per pipeline.
User must be the contract owner.
The supported log sources are:
Kubernetes
Docker
Systemd
HTTP (JSON REST API)
Generic
The Logging Service provides a centralized and scalable solution for logging, monitoring, and analyzing your application and infrastructure logs. It offers a wide range of features to help you monitor and analyze your logs effectively and gain insights into your system's behavior. Using the Logging Service can improve your operational efficiency, enhance your security, and increase your visibility into your system's performance and errors.
Note: Logging Service is currently in the Early Access (EA) phase. We recommend keeping usage and testing to non-production-critical applications, as there is a possibility of data loss during the EA phase. For more information, please contact your sales representative or customer support.
Note: Logging Service is currently available only through the API, without DCD implementation.
A pipeline, in the context of the IONOS Logging Service, refers to an instance or configuration of the logging service that you can create. This instance is created using the REST API provided by IONOS. To create an instance of the Logging Service, send a request to the designated endpoint, such as https://logging.de-txl.ionos.com/pipelines (this is an example endpoint; the actual endpoint may vary).
Within each pipeline instance, it is possible to define multiple log streams, where each stream functions as a separate log source. These log streams allow you to organize and manage different sources of logs within your logging system.
To differentiate the log sources and enable effective reporting, it is necessary to provide a unique tag for each log source within the pipeline instance. The tag serves as an identifier or label for the log source, allowing you to distinguish and track the logs from different sources easily.
At the conclusion of the pipeline setup, a unique endpoint (HTTP/TCP) will be assigned to each pipeline, establishing a connection to an independent log server. This endpoint serves as the designated destination for logs generated by all the log sources within the pipeline. However, to ensure proper categorization and differentiation, each log source in the pipeline configuration must use its designated tag. By adhering to this practice, the logs generated by each source can be accurately identified and traced, even when they are directed to the same endpoint.
Note: Based on your pricing model, a specific limit is tied to each pipeline, which limits the log rate you can send to the log server.
The Logging Service offers regional APIs that enable programmatic interaction with the platform. These APIs serve various purposes: task automation, system integration, and platform functionality extension. Additionally, the APIs allow you to filter logs based on different criteria, such as date range, log level, and source.
You need to use a regional endpoint to interact with the Logging Service REST API endpoints. Available endpoints are:
Berlin: https://logging.de-txl.ionos.com/pipelines
To interact with the Logging Service REST API endpoints, you need the following header values:
Header | Required | Type | Description
---|---|---|---
Authorization | yes | string | A Bearer token; a string token that is tied to your account.
Content-Type | yes | string | Set this to `application/json`.
To generate a token, follow the instructions in the IONOS authentication documentation.
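A minimal sketch of token generation, assuming the IONOS Auth API token endpoint; the returned JWT is what you pass as the Bearer token:

```sh
# Generate a JWT with the IONOS Auth API using your account credentials.
# Pass the resulting token as "Authorization: Bearer <token>" in later calls.
curl -u "<account-email>" \
  "https://api.ionos.com/auth/v1/tokens/generate"
```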
Here are some of the most common API How-Tos for the Logging Service:
Note: Only contract-owner users are authorized to create a Logging Service instance.
Once the instance is created via the API, follow these How-Tos:
The IONOS Logging Service is a versatile and accessible platform that allows you to conveniently store and manage logs from various sources. Whether it's logs generated within the IONOS infrastructure, your own bare metal system, or even from another cloud environment, this platform provides a seamless solution for log aggregation. With its flexibility, you can effortlessly push logs from anywhere, ensuring comprehensive log monitoring and analysis.
All types of communications to push logs (HTTP and TCP) are protected by the following two mechanisms:
TLS (Transport Layer Security)
Key:
HTTP: APIKEY header
TCP: Shared_Key
The key provides an extra layer of security: you can revoke or regenerate the existing key at any time.
When using TCP/TLS, you have to enable `tls` and provide a `Shared_Key` in the FluentBit configuration. The key can be obtained via our REST API (Share Once policy).
Refer to the API reference to see the full list of parameters.
If you are planning to use HTTP (JSON), you should provide the key in the request headers alongside your account email address:
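A minimal sketch of those headers; `APIKEY` matches the mechanism named above, while the header name used for the account email is an assumption and may differ:

```sh
# Illustrative HTTP (JSON) log push with authentication headers.
# "User" as the header name for the account email is an assumption.
curl -X POST "https://<httpAddress>" \
  -H "Content-Type: application/json" \
  -H "APIKEY: <pipeline-key>" \
  -H "User: <account-email>" \
  -d '{"tag": "myapptag", "message": "hello from curl"}'
```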
The following is an equivalent example of configuring FluentBit with an HTTP output:
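A sketch of such an output section; the placeholders come from your pipeline's API response, and port 443 is an assumption for the HTTPS endpoint:

```ini
# FluentBit HTTP output sketch; header names mirror the request above.
[OUTPUT]
    Name        http
    Match       *
    Host        <httpAddress>
    Port        443
    URI         /
    Format      json_lines
    tls         On
    tls.verify  On
    # APIKEY carries the pipeline key; the "User" header name is an assumption
    header      APIKEY <pipeline-key>
    header      User <account-email>
```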
A centralized logging platform consists of two major components: Log Collection and Log Aggregation. In the context of the Logging Service, it's important to clarify the responsibilities of the platform provider and the user.
Log Collection: The responsibility for log collection and its configuration lies with the user. This involves setting up mechanisms to gather log data from various sources within the infrastructure and applications. These mechanisms can include agents, log shippers, or APIs that send log data to a central location for storage and analysis.
Log Aggregation: The Logging Service platform provider provides and manages the log aggregation component. This component involves the centralization of log data from multiple sources, making it accessible for analysis and visualization. The platform handles log storage, indexing, and search functionalities.
Logs must be targeted and collected to be sent to a Logging Service platform for aggregation and analysis. Log agents responsible for collecting and forwarding logs to the central logging platform typically facilitate this process.
While various log agents are available, the supported log agent for the Logging Service platform is FluentBit. FluentBit is a lightweight and efficient log forwarder that can be installed on Linux, macOS, and Windows systems. It provides the necessary functionality to collect logs from different sources and push them to the Logging Service platform for further processing and analysis.
FluentBit can be installed on Linux, macOS, and Windows. For more information, see the FluentBit installation documentation.
Note that FluentBit installation and configuration depend on your log source.
Ensure you follow the instructions provided by the Logging Service platform provider and refer to any additional documentation or guidelines they may offer for integrating FluentBit Log Agent into your logging infrastructure.
When configuring FluentBit for log shipping, certain pieces of information need to be properly configured to ensure the logs are shipped correctly and securely.
The Log Server Endpoint refers to the address of your logging pipeline, where the logs will be sent after they are collected. This endpoint can be obtained from the REST API response.
The Tag is a piece of information that must be configured in the log agent (FluentBit) to ensure synchronization between the agent and the log server. It helps identify and categorize the logs and can also be used for reporting purposes.
In addition to the TLS connection, FluentBit needs to be configured with a key (`Shared_Key`) for authentication purposes. This key ensures that only authorized logs are sent to the logging pipeline. The key can be obtained via our REST API.
Here is an example of a FluentBit configuration that needs an Endpoint, Tag, and Key:
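A minimal sketch of such a configuration, using FluentBit's `forward` output (which supports `Shared_Key`) over TLS; all placeholder values are illustrative:

```ini
# FluentBit TCP/TLS output sketch for the Logging Service.
# <tcpAddress>, <pipeline-key>, and <your-tag> come from your pipeline.
[OUTPUT]
    Name        forward
    Match       <your-tag>
    Host        <tcpAddress>
    Port        9000
    tls         On
    tls.verify  On
    Shared_Key  <pipeline-key>
```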
Note: Any data masking or sanitization must happen on the client's side.
Refer to the API reference to see the full list of parameters.
You can modify your logging pipeline by sending a PATCH request with a specific pipeline's ID, for example:
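A sketch of such a request, reusing the example pipeline ID from this guide; the payload fields follow the create-pipeline example and may vary:

```sh
# Update an existing pipeline; the body has the same shape as the POST body.
curl -X PATCH "https://logging.de-txl.ionos.com/pipelines/b849d5dd-cabe-4c13-b04d-a11bcdee721b" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"properties": {"name": "mypipeline", "logs": [{"source": "docker", "tag": "mydockertag", "protocol": "tcp"}]}}'
```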
You can modify a pipeline with the same payload you use in the POST request.
In order to send logs to the logging platform, there are a few requirements you need to meet:
A `Ready` pipeline instance, so that you can obtain its `tcpAddress` or `httpAddress`.
A key that you have already obtained.
The FluentBit log agent installed on your platform (see the Log Agent section above).
Based on your infrastructure, whether it is `kubernetes`, `docker`, or `systemd`, you may follow different instructions to set up and install FluentBit. However, there are a few pieces of information that you always need in FluentBit's output section:
The log server endpoint, which is `tcpAddress` or `httpAddress`, based on your log source.
The key that is needed for authentication.
The tag that you defined while creating the pipeline.
Here is an example of a FluentBit configuration that needs an Endpoint, Tag, and Key:
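A sketch of an end-to-end configuration, assuming a `tail` input on a hypothetical log file; the tag must match the one defined in your pipeline:

```ini
# Tail a local log file, tag the records, and forward them over TCP/TLS.
[INPUT]
    Name  tail
    Path  /var/log/myapp/*.log
    Tag   <your-tag>

[OUTPUT]
    Name        forward
    Match       <your-tag>
    Host        <tcpAddress>
    Port        9000
    tls         On
    tls.verify  On
    Shared_Key  <pipeline-key>
```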
Note: The port is always 9000 if the protocol is TCP.
FluentBit can be configured to expose more verbose logs for troubleshooting purposes.
For more information, see FluentBit global configuration.
After following the necessary steps to create a logging pipeline via the REST API, two main steps remain to complete the whole logging pipeline process:
The purpose of the pipeline key is described in the Security section above. In this section, we explain how to obtain a key for a pipeline.
To get a new key for a pipeline, you can use the following request.
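A sketch of the request, assuming the key sub-resource of the pipeline endpoint:

```sh
# Request a new key for the pipeline. Because of the Share Once policy,
# the key appears only in this response, so store it safely.
curl -X POST "https://logging.de-txl.ionos.com/pipelines/b849d5dd-cabe-4c13-b04d-a11bcdee721b/key" \
  -H "Authorization: Bearer <token>"
```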
In our example case, `b849d5dd-cabe-4c13-b04d-a11bcdee721b` is the pipeline's ID.
The response contains the key, which you can configure as `Shared_Key` in the FluentBit configuration.
Note: Key generation follows a Share Once policy, meaning there is no other way to retrieve the key after it is returned. Once it is generated, please keep it somewhere safe.
Note: Once a new key is generated, all the previous keys will be revoked immediately.
Coming Soon
Each log stream in your pipeline is initially assigned the default data retention policy of 30 days. However, within the log stream configuration, you have the flexibility to define a retention policy that aligns with your specific requirements. The available retention periods are 7, 14, and 30 days, and the policy you configure determines how long the logs are retained for each stream.
You can use the following example:
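A sketch of a create-pipeline request with a custom 7-day retention policy; the `destinations`, `type`, and `retentionInDays` field names are assumptions based on the API shape and may differ:

```sh
# Create a pipeline whose log stream keeps data for 7 days instead of 30.
curl -X POST "https://logging.de-txl.ionos.com/pipelines" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "properties": {
      "name": "mypipeline",
      "logs": [
        {
          "source": "kubernetes",
          "tag": "myk8stag",
          "protocol": "http",
          "destinations": [{"type": "loki", "retentionInDays": 7}]
        }
      ]
    }
  }'
```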
You can also use the PATCH request to update the retention policy of an existing pipeline.
Eventually, you need a platform for visualization and reporting purposes. We use Grafana to enable you to meet your visualization needs.
To obtain your Grafana instance address, you can send a GET request for a pipeline. In the response, you can find `grafanaAddress`. This address is the same for all your pipelines.
Note: You can log in to your Grafana instance with your IONOS credentials.
To create a logging pipeline instance, you must meet the following requirements:
You must have a valid and billable contract at IONOS.
You must have enough permission to manage the Data Center.
You must be the contract owner to be able to interact with the REST API.
The following request creates a logging pipeline with 2 log streams.
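A sketch of the request; the JSON field names follow the API's general shape and may vary, and the tags shown are placeholders of your choice:

```sh
# Create a pipeline with two log streams (Docker and Kubernetes),
# each identified by its own unique tag.
curl -X POST "https://logging.de-txl.ionos.com/pipelines" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "properties": {
      "name": "demopipeline",
      "logs": [
        {"source": "docker", "tag": "mydockertag", "protocol": "tcp"},
        {"source": "kubernetes", "tag": "myk8stag", "protocol": "http"}
      ]
    }
  }'
```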
This request will eventually create an instance of a logging pipeline with 2 log streams: Docker and Kubernetes. As you can see, each source must use a unique tag of your choice.
To see the full list of available sources, please refer to our log sources section.
The following is just a sample of the response; your values will be different.
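An illustrative shape of the response; the exact fields may differ:

```json
{
  "id": "b849d5dd-cabe-4c13-b04d-a11bcdee721b",
  "type": "pipeline",
  "metadata": {
    "status": "NotReady"
  },
  "properties": {
    "name": "demopipeline",
    "logs": [
      {"source": "docker", "tag": "mydockertag", "protocol": "tcp"},
      {"source": "kubernetes", "tag": "myk8stag", "protocol": "http"}
    ],
    "tcpAddress": "<tcpAddress>",
    "httpAddress": "<httpAddress>",
    "grafanaAddress": "<grafanaAddress>"
  }
}
```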
The pipeline will remain in the `NotReady` status for a short time until its provisioning is finished. You can get the pipeline information and its status with the following request.
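A sketch of the polling request, using the example pipeline ID from above:

```sh
# Fetch the pipeline and check metadata.status until it reports Ready.
curl "https://logging.de-txl.ionos.com/pipelines/b849d5dd-cabe-4c13-b04d-a11bcdee721b" \
  -H "Authorization: Bearer <token>"
```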
Log sources like `kubernetes`, `docker`, and `systemd` collect and provide the relevant labels, which can be used in report analysis and dashboard queries.
However, you might need to label a few more fields from the log sources additionally. You can define the additional labels as follows when you create a pipeline:
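A sketch of a log-stream entry carrying additional labels; the `labels` field name and the label values shown are assumptions:

```sh
# Create a pipeline whose Kubernetes stream exposes two extra labels.
curl -X POST "https://logging.de-txl.ionos.com/pipelines" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "properties": {
      "name": "mypipeline",
      "logs": [
        {
          "source": "kubernetes",
          "tag": "myk8stag",
          "protocol": "http",
          "labels": ["app", "namespace"]
        }
      ]
    }
  }'
```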
To get your pipeline information, you can use the following request.
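As above, a sketch of the request against the example pipeline ID:

```sh
# Retrieve the pipeline's details, including its addresses and tags.
curl "https://logging.de-txl.ionos.com/pipelines/b849d5dd-cabe-4c13-b04d-a11bcdee721b" \
  -H "Authorization: Bearer <token>"
```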
On a successful request, you will receive important information about a `Ready` pipeline.
You need the following pieces of information to configure the FluentBit and access the logs:
You need to obtain one last piece of information to send logs: the key. Follow the instructions on how to obtain a key.
Field | Usage
---|---
`tcpAddress` | The address you need to set in the FluentBit configuration for the TCP log server.
`httpAddress` | The address you need to set in the FluentBit configuration for the HTTP log server.
`grafanaAddress` | The address where you can log in with your IONOS credentials to access the logs.
`tag` | The tag you defined while creating the pipeline, which needs to be set in the FluentBit configuration.