A centralized Logging Service platform consists of two major components: Log Collection and Log Aggregation. The responsibilities of the platform provider and the user differ in the context of Logging Service.
Log Collection: The responsibility for log collection and its configuration lies with the user. This involves setting up mechanisms to gather log data from various sources within the infrastructure and applications. These mechanisms can include agents, log shippers, or APIs that send log data to a central location for storage and analysis.
Log Aggregation: The Logging Service platform provider provides and manages the log aggregation component. This component involves the centralization of log data from multiple sources, making it accessible for analysis and visualization. The platform handles log storage, indexing, and search functionalities.
Logs must be targeted and collected before they can be sent to the Logging Service platform for aggregation and analysis. This process is typically handled by log agents, which collect logs and forward them to the central logging platform.
While various log agents are available, the Logging Service platform supports the Fluent Bit Log Agent. Fluent Bit is a lightweight and efficient log forwarder that can be installed on Linux, macOS, and Windows systems. For more information, see Fluent Bit's official website. It provides the necessary functionality to collect logs from different sources and push them to the Logging Service platform for further processing and analysis.
Note:
Fluent Bit installation and configuration vary based on your Log Sources.
Ensure you follow the instructions provided by the Logging Service platform provider and refer to any additional documentation or guidelines they may offer for integrating the Fluent Bit log agent into your logging infrastructure.
To ship logs correctly and securely, configure the following appropriately in Fluent Bit:
Log Server Endpoint: It refers to the address of your logging pipeline, where the logs will be sent after they are collected. You can obtain this endpoint from the REST API response.
Tag: To keep the agent and the log server synchronized, configure a tag in the Fluent Bit log agent. The tag can be utilized for reporting purposes and aids in identifying and categorizing the logs.
Key: In addition to the TLS connection, Fluent Bit needs a Shared_Key configuration for authentication purposes. This key ensures that only authorized logs are sent to the logging pipeline. You can obtain a token via the REST API.
Here is an example of a Fluent Bit configuration that needs an endpoint, a tag, and a key:
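The following is a minimal sketch, not the exact example shipped with the platform; the tail input, file path, and all placeholder values are illustrative assumptions. The port follows the TCP guidance given later in this guide.

```ini
# Minimal sketch of a Fluent Bit forward output over TLS.
# The tail input and all placeholder values are assumptions.
[INPUT]
    Name   tail
    # Hypothetical log file path
    Path   /var/log/myapp/*.log
    Tag    <tag defined for this log source>

[OUTPUT]
    Name        forward
    Match       *
    Host        <log server endpoint of your pipeline>
    Port        9000
    Shared_Key  <key obtained via the REST API>
    tls         On
    tls.verify  On
```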
Note: The user must perform any data masking or sanitization.
This section lists the key features and benefits of using the Logging Service.
Scalability: It is designed to handle extensive volumes of logs and scale up or down according to needs.
Availability: The logs are available 24/7 and can be accessed from anywhere using a web-based interface.
Security: The logs are secured using necessary encryption, access controls, and audit trails.
Customization: The service can be customized according to your needs. For example, defining log retention periods, setting up alerts, or creating custom dashboards.
Sub-user Management: The Logging Service allows the primary account owner to create sub-users and delegate pipeline management responsibilities.
Reduced Costs: It eliminates the need to invest in hardware and software for log management. Also, you only pay for the services you use, resulting in significant cost savings.
Increased Efficiency: It automates log management tasks such as log collection, storage, and analysis, reducing the time and effort required to manage logs manually.
Improved Troubleshooting: It provides real-time access to log data, which facilitates speedy problem identification and resolution.
Compliance: It helps organizations meet compliance requirements by providing secure log storage and access controls.
Manage Log Sources: You can create different pipelines for different environments or sources, and each pipeline instance has its own Key for the Logging Server. Also, each pipeline can have multiple log sources differentiated by a tag.
Auto Log Source Labeling: Seamless label synchronization is available for standard log sources, such as Docker, Kubernetes, and Linux Systemd. The log sources are labeled automatically and can be reported or analyzed in Grafana with the same labels. Labels are relative to each source: for example, namespace for Kubernetes or container_id for Docker. You can also define your own custom labels when creating a pipeline without standard log sources.
Grafana Features: Grafana, a visualization platform, offers rich capabilities that make the service more advantageous from a competitive standpoint. The supported features include dashboards, alerting options, and custom datastores.
Data Encryption:
Server-Side Encryption (SSE): All data is encrypted at the data store level using S3 Server-Side Encryption.
Client-Side Encryption (CSE): Besides SSE, an additional CSE encrypts data before it is transferred to the data store. Even if the SSE is bypassed to gain access to the data store, the data cannot be decrypted.
Simple Client-Side Setup: Only the following are mandatory to set up a client side: Endpoint, Tag, and a Shared_key.
Sub-user Management: A sub-user is a user who has access to the Logging Service but is not an administrator or an owner.
Create Sub-users: The primary account owner can create sub-user accounts and assign permissions to them.
Limited Access: Sub-users can only view and manage the pipelines assigned to them by the primary account owner. They cannot access pipelines created by other sub-users or the primary account.
Primary Account Controls Sub-users: The primary account owner has complete administrative privileges to view, edit, delete, and manage all pipelines created by any sub-user under the account.
Better Access Control: Sub-user functionality allows larger teams and organizations to share access to the logging platform while limiting access to sensitive data. Primary account owners maintain oversight and control without getting overwhelmed.
Improved Delegation: Rather than broadly sharing keys and credentials, primary account owners can selectively grant access to sub-users for their portion of the logging pipeline. This partitioning facilitates wider use while enhancing security.
It is essential to identify the origin of the logs before choosing the right approach to installing and configuring the Fluent Bit agent. To provide convenient parsing and data labeling, IONOS accepts logs from the following four log sources: Kubernetes, Docker, Linux Systemd, and HTTP. The configuration of these log sources varies accordingly.
Note: Technically, you can send logs with Fluent Bit from any source that supports the TCP or HTTP communication protocols. However, convenient parsing is currently offered only for the specified log sources.
This method lets you collect and ship your Kubernetes application's logs. Fluent Bit offers a wide range of information on how to set it up on your Kubernetes cluster. However, we also recommend you try our Kubernetes configuration examples before configuring the log source.
If you have a set of applications on Docker, we recommend trying our Docker configuration examples. You can also find more information about Docker configuration on Fluent Bit's official website.
To set up Fluent Bit on a Linux system with systemd or journals, you must install an appropriate package for your Linux distribution. For more information about how to accomplish it, see Fluent Bit's official website. We also recommend you try our Linux systemd sample configuration.
You can also send logs through the HTTP REST endpoint. Note that only logs in JSON format can be transferred via the HTTP REST endpoint.
The following is an example:
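A minimal sketch of such a request, assuming curl; the payload fields and placeholder values are illustrative assumptions, not the platform's exact schema:

```bash
# Illustrative only: payload fields and placeholders are assumptions.
curl --request POST "https://<httpAddress of your pipeline>" \
  --header "Content-Type: application/json" \
  --header "APIKEY: <key obtained via the REST API>" \
  --data '{"tag": "<your-tag>", "log": "example log line"}'
```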
Maintaining log visibility is challenging in a distributed system with numerous services and environments. Logs are scattered across different nodes, making troubleshooting, performance monitoring, and security analysis complex tasks.
Logging Service offers a streamlined approach to centralizing logs. Easily configure agents on your log sources to collect and forward logs to a central repository. These logs are securely transferred, efficiently stored, and indexed for analysis. You can create custom dashboards for real-time log visualization and analysis, helping you quickly detect and address issues, maintain security standards, and optimize your application's performance.
Effectively managing log retention and storage is a critical operational challenge. Storing logs for an extended period can be costly and cumbersome, while inadequate retention policies may lead to losing valuable historical data needed for compliance, auditing, and troubleshooting.
Logging Service offers flexibility in log retention and storage. Configure retention policies to automatically remove logs based on your organization's requirements. This ensures you retain logs for as long as necessary without incurring high storage costs. Additionally, you can use the service to search and retrieve older logs when needed, simplifying compliance audits and historical analysis.
DevOps teams rely on real-time visibility into application logs during the development and deployment phases. Timely and continuous access to logs is essential for debugging, identifying issues, and ensuring smooth deployments.
Logging Service provides DevOps teams with near-real-time log monitoring capabilities. Integrate the service seamlessly into your development and deployment pipelines to capture and analyze logs as applications are built and deployed. As your application components interact and generate logs, Logging Service immediately ingests and makes them available for analysis. DevOps teams can set up alerts and notifications based on specific log events, ensuring rapid response to critical issues. This near-real-time log monitoring helps streamline development and deployment processes, reduces downtime, and ensures the successful release of reliable applications.
Managing access controls for logging data becomes exponentially complex for large, distributed teams. Trying to maintain security while enabling broad log access leads to increased risk, compliance issues, and operational bottlenecks.
Logging Service provides a robust sub-user management system to simplify access control across large teams. The primary account owner can create sub-users and selectively grant access to certain pipelines, sources, and log data views.
Note: A sub-user is a user who has access to the Logging Service but is not an administrator or an owner.
Sub-users can only view the log data they have been explicitly granted permission to access. This compartmentalized access control delivers several advantages:
Primary account owners maintain complete ownership and oversight of the total logging pipeline management.
Sub-users are granted appropriate access without undesired privileges or access to sensitive information.
Compliance and auditing functions have clear boundaries around access.
Organizational changes can be accommodated by adjusting sub-user permissions.
The Logging Service is a versatile and accessible platform that allows you to conveniently store and manage logs from various sources. This platform offers a seamless log aggregation solution for logs generated within the IONOS infrastructure, by your bare metal system, or another cloud environment. With its flexibility, you can effortlessly push logs from anywhere, ensuring comprehensive log monitoring and analysis.
The following two encryption mechanisms safeguard all HTTP or TCP communications that push logs:
TLS (Transport Layer Security)
KEY: If using HTTP, the APIKEY must be specified in the header. If using TCP, specify the Shared_Key.
The key brings an extra layer of security: you can revoke or regenerate the existing key at any time.
When using TCP/TLS, you must enable tls and provide a Shared_Key in the Fluent Bit configuration. The key can be obtained via our REST API. For more information, see Obtain a New Key.
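A minimal sketch of the corresponding output section; the placeholder values are assumptions:

```ini
# Sketch of a TCP (forward) output with tls and a Shared_Key enabled.
[OUTPUT]
    Name        forward
    Match       *
    Host        <tcpAddress of your pipeline>
    Port        9000
    Shared_Key  <key obtained via the REST API>
    tls         On
    tls.verify  On
```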
Note: To view a complete list of parameters, see Fluent Bit's official website.
If using HTTP (JSON), provide the key in the header as shown in the following example:
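For example (a sketch; the payload fields and placeholders are assumptions):

```bash
# Sketch of an HTTP (JSON) request with the key passed in the header.
curl --request POST "https://<httpAddress of your pipeline>" \
  --header "Content-Type: application/json" \
  --header "APIKEY: <key obtained via the REST API>" \
  --data '{"tag": "<your-tag>", "log": "example log line"}'
```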
This is an equivalent example of configuring Fluent Bit with HTTP outbound:
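A sketch using Fluent Bit's http output plugin; the URI and placeholder values are assumptions:

```ini
# Sketch of an equivalent HTTP outbound configuration.
[OUTPUT]
    Name        http
    Match       *
    # Host expects the hostname only, without the https:// scheme.
    Host        <httpAddress of your pipeline>
    Port        443
    URI         /
    Format      json
    Header      APIKEY <key obtained via the REST API>
    tls         On
    tls.verify  On
```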
Note: To view a complete list of parameters, see Fluent Bit's official website.
After creating a logging pipeline via the REST API, complete the logging pipeline configuration process using the following:
A log pipeline refers to an instance or configuration of the Logging Service you can create using the REST API. To create an instance of the Logging Service, you can request the designated regional endpoint based on your desired location:
Berlin: https://logging.de-txl.ionos.com/pipelines
Frankfurt: https://logging.de-fra.ionos.com/pipelines
London: https://logging.gb-lhr.ionos.com/pipelines
Paris: https://logging.fr-par.ionos.com/pipelines
Logroño: https://logging.es-vit.ionos.com/pipelines
When creating a log pipeline instance, you can define multiple log streams within each pipeline. Each stream functions as a separate log source, allowing you to organize and manage different sources of logs within your logging system.
To differentiate the log sources and enable effective reporting, it is necessary to provide a unique tag for each log source within the pipeline instance. The tag serves as an identifier or label for the log source, allowing you to distinguish and track the logs from different sources easily.
After the pipeline is set up, a unique endpoint is assigned to each pipeline, establishing a connection, either HTTP or TCP, with an independent log server. This endpoint serves as the designated destination for logs generated by all the log sources within the pipeline. To ensure proper categorization and differentiation, each log source in the pipeline configuration must use its designated tag. By adhering to this practice, the logs generated by each source can be accurately identified and traced, even when they are directed to the same endpoint.
The Logging Service platform imposes specific limitations on the number of pipelines you can create and the log rate you can send to the log server.
These limitations are determined by your pricing plan and are designed to ensure that all users receive optimal performance and resource allocation.
By default, the platform sets an ingestion rate limit of 50 HTTP requests per second for each pipeline to prevent overloading the log server with excessive log data.
The Logging Service offers regional APIs that enable programmatic interaction with the platform. These APIs serve various purposes: task automation, system integration, and platform functionality extension. Additionally, the APIs allow you to filter logs based on different criteria, such as the date range, log level, and source.
A sub-user is a user who has access to the Logging Service but is not an administrator or an owner. A crucial IONOS access control restriction prevents sub-users from viewing or modifying pipelines that belong to other sub-user accounts; only the primary administrator retains full cross-pipeline privileges. Ensure that the sub-user pipeline ownership and access permissions align with your organizational needs.
If a sub-user account creates a pipeline, access is restricted only to that sub-user and the primary administrator. Other sub-users cannot access or perform CRUD operations on the respective pipeline. For example, if sub-user A creates Pipeline 1, only sub-user A and the primary administrator account can view, edit, delete, or manage Pipeline 1. No other sub-user accounts will have access to it.
A regional endpoint is necessary to interact with the Logging Service REST API endpoints. Currently, IONOS supports only the following regions for the Logging Service:
Berlin: https://logging.de-txl.ionos.com/pipelines
Frankfurt: https://logging.de-fra.ionos.com/pipelines
London: https://logging.gb-lhr.ionos.com/pipelines
Paris: https://logging.fr-par.ionos.com/pipelines
Logroño: https://logging.es-vit.ionos.com/pipelines
Note: If you need to configure firewall rules that restrict traffic sent to the Logging Service endpoints, we recommend using the authorized IP addresses associated with each endpoint so that traffic is allowed only through those addresses. For the list of authorized IP addresses, see the Logging Service FAQs.
To interact with the Logging Service REST API endpoints, the header must contain the following values:
Here are some of the most common API How-Tos for the Logging Service:
We recommend you refer to the following after creating an instance via the API:
Prerequisites: To send logs to the logging platform, you must have the following:
A Ready pipeline instance to obtain the tcpAddress or httpAddress.
A key that you already obtained.
The Fluent Bit log agent installed in your platform. For more information, see Log Collection.
Based on your infrastructure—whether it uses Kubernetes, Docker, or Linux Systemd—you may follow different instructions to set up and install Fluent Bit. However, ensure that the Fluent Bit's output configuration contains the following:
A log server endpoint, which is either a tcpAddress or an httpAddress, based on your log source.
A Shared_Key, which is required for authentication.
The Tag that you defined while creating the pipeline.
Here is an example of a Fluent Bit configuration that needs an endpoint, tag, and a key:
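The following is a minimal sketch, not the exact shipped example; the tail input and all placeholder values are assumptions. Use the tag you defined for this log source when creating the pipeline.

```ini
# Minimal sketch of a forward output with endpoint, tag, and key.
[INPUT]
    Name   tail
    # Hypothetical log file path
    Path   /var/log/myapp/*.log
    Tag    <tag defined while creating the pipeline>

[OUTPUT]
    Name        forward
    Match       *
    Host        <tcpAddress of your pipeline>
    Port        9000
    Shared_Key  <key obtained via the REST API>
    tls         On
    tls.verify  On
```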
Warning: The port must be set to 9000 if you are using a TCP protocol.
Fluent Bit can be configured to expose more verbose logs for troubleshooting purposes.
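For example, you can raise Fluent Bit's own log verbosity in the SERVICE section (a standard Fluent Bit setting, shown here as a sketch):

```ini
[SERVICE]
    # Accepted values include error, warn, info, debug, and trace.
    Log_Level debug
```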
It is necessary to create an instance of the logging pipeline before sending log data to the Logging Service platform. For more information, see Set Up a Logging Pipeline Instance.
For more information about the complete list of available sources, see Log Sources.
The following request creates an instance of a logging pipeline with two log streams: docker and kubernetes.
Warning:
IONOS supports unique email addresses across all contracts in each region.
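A sketch of such a request against the Berlin endpoint; the body schema below is an assumption pieced together from the fields this guide mentions (name, source, tag, protocol) and may not match the current API version exactly:

```bash
# Illustrative sketch: the request body schema is an assumption.
curl --request POST "https://logging.de-txl.ionos.com/pipelines" \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "properties": {
      "name": "demo-pipeline",
      "logs": [
        { "source": "docker",     "tag": "docker-logs", "protocol": "tcp" },
        { "source": "kubernetes", "tag": "k8s-logs",    "protocol": "http" }
      ]
    }
  }'
```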
The following is a sample response. The values returned by each response differ based on the request.
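An abridged sketch of what the response might look like; field names such as tcpAddress, httpAddress, grafanaAddress, and the PROVISIONING status are taken from this guide, while the overall shape is an assumption:

```json
{
  "id": "<pipelineID>",
  "properties": {
    "name": "demo-pipeline",
    "status": "PROVISIONING",
    "tcpAddress": "<tcpAddress>",
    "httpAddress": "<httpAddress>",
    "grafanaAddress": "<grafanaAddress>",
    "logs": [
      { "source": "docker",     "tag": "docker-logs", "protocol": "tcp" },
      { "source": "kubernetes", "tag": "k8s-logs",    "protocol": "http" }
    ]
  }
}
```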
Log sources like Kubernetes, Docker, and Linux Systemd collect and offer relevant labels. You can use these labels to analyze reports and query the dashboard. However, if you want to label additional fields from the log sources, you can define custom labels as follows when you create a pipeline:
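For example, a log stream entry might carry a labels array like the sketch below; the field name and the label values are illustrative assumptions:

```json
{
  "source": "http",
  "tag": "my-app",
  "protocol": "http",
  "labels": ["customer_id", "request_id"]
}
```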
A key is necessary to send logs over the logging pipeline. For more information about the purpose of a pipeline key, see .
Warning:
IONOS adheres to a Share Once policy to generate a key; there is no alternative method to retrieve it if lost. Hence, we recommend that you secure the generated key.
The previous key is instantly revoked when you generate a new key for a specific pipeline.
To get a new key for a pipeline, you can use the following request. Remember to replace the {pipelineID} with a valid ID of a pipeline to access its key.
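A sketch of the request; the /key sub-resource path is an assumption:

```bash
# Requests a new key; the previous key is revoked instantly.
curl --request POST "https://logging.de-txl.ionos.com/pipelines/{pipelineID}/key" \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/json"
```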
The response contains the key necessary to configure the Shared_Key in Fluent Bit.
Users need appropriate privileges to access, create, and modify the Logging Service. Without the necessary privileges, users have read-only access and cannot provision changes. You can grant the appropriate privileges via the User Manager.
To allow users to access, create, and modify the Logging Service, follow these steps:
Log in to the DCD with your username and password.
In the DCD menu, select Management > Users & Groups under Users.
Select the Groups tab in the User Manager window.
Select the appropriate group to assign relevant privileges.
In the Privileges tab, select Access and manage Logging Service to allow the associated group members to access and manage Logging Service.
Note: You can remove the privileges from the group by clearing Access and manage Logging Service.
Result: The appropriate privileges are granted to the group and the users in the respective group.
For improved user management and delegation, you can create sub-users and grant them the necessary permissions to use the Logging Service. Primary account owners can give sub-users varying levels of access to their segment of the logging pipeline. Sub-users can view and manage only the pipelines that the primary account owner has assigned to them; they cannot access the primary account's pipelines or those created by other sub-users.
To create sub-users, follow these steps:
In the DCD menu, select Management > Users & Groups under Users.
Select the Groups tab in the User Manager window.
Select the appropriate group to assign relevant privileges.
In the Members tab, click + Add User and select a user(s) from the list.
To delete an associated sub-user, click Remove User.
Result: The sub-user(s) are created.
An observability platform is necessary for visualization and reporting purposes. IONOS uses Grafana to enable you to meet your visualization needs.
Note:
Once you have created a pipeline, you can access your Grafana instance by logging in with your IONOS credentials.
Your Grafana configurations and organization are associated with your contract number. If you have configured users with unique email addresses, they can access Grafana only with those specific email addresses linked to the contract number. Users cannot log into Grafana using any other email address unrelated to the contract.
Sub-users in your contract number can access Grafana with their IONOS credentials, subject to the following conditions:
At least one pipeline must be connected to the contract number.
Sub-users must be active and have the necessary privileges to access the Logging Service.
If you are unable to log into Grafana, ensure that you have checked the following:
The sub-user is active and has the necessary privileges to access the Logging Service.
The contract number contains at least one logging pipeline.
The sub-user uses the correct email address associated with the contract number.
If the issue persists, contact IONOS Cloud Support.
You can obtain your Grafana instance address by sending a GET request to the server. For more information, see Retrieve Logging Pipeline Information. The response contains the grafanaAddress. This information remains the same for all your logging pipelines.
You may notice that the pipeline's status is temporarily set to the PROVISIONING state while provisioning is in progress. A GET request retrieves information about the pipeline and its status. For more information, see Retrieve Logging Pipeline Information.
Log in to the DCD with your username and password.
| Header | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | yes | string | A Bearer $TOKEN is a string that is tied to your account. For information on generating tokens, see Create New Tokens. |
| Content-Type | yes | string | Set this to application/json. |
Create an instance of a logging pipeline.
Obtain a new key for a logging pipeline.
Update an existing logging pipeline.
Retrieve information about a specific logging pipeline.
Delete a specific logging pipeline.
Customize the retention policy for each log stream.
Use the pre-configured IONOS Telemetry API datasource to query metrics from the IONOS Cloud Telemetry API.
You can modify your logging pipeline by sending a PATCH request with a specific pipeline ID.
Note: To modify a logging pipeline, you can use the same payload that you use in the POST request for creating a logging pipeline. For more information, see Set Up a Logging Pipeline Instance.
The following is a sample request. Remember to replace the {pipelineID} with a valid ID of the respective logging pipeline.
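For example (a sketch reusing the creation payload; the body schema is an assumption):

```bash
curl --request PATCH "https://logging.de-txl.ionos.com/pipelines/{pipelineID}" \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "properties": {
      "name": "demo-pipeline-renamed",
      "logs": [
        { "source": "docker", "tag": "docker-logs", "protocol": "tcp" }
      ]
    }
  }'
```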
To retrieve your logging pipeline information, you need the ID of the respective pipeline.
The following is a sample request. Remember to replace the {pipelineID} with a valid ID of the specific pipeline whose information you want to access.
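For example (a sketch against the Berlin endpoint):

```bash
curl --request GET "https://logging.de-txl.ionos.com/pipelines/{pipelineID}" \
  --header "Authorization: Bearer $TOKEN"
```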
If your request is successful, you will receive the relevant information on a Ready pipeline.
To access logs using Fluent Bit, you need the following information:
| Field | Description |
| --- | --- |
| tcpAddress | Set the TCP log server address during Fluent Bit configuration. |
| httpAddress | Set the HTTP log server address during Fluent Bit configuration. |
| tag | Set the tag during Fluent Bit configuration. Remember to use the same tag you defined while creating the pipeline. |
Note: A key is necessary to send logs through the newly formed logging pipeline. For more information about obtaining a key, see Obtain a New Key.
Each log stream in your pipeline is initially assigned a default data retention policy of 30 days. The logs for each log stream are retained for the specified number of days. However, you can define a custom retention policy for each log stream. The available options for retention periods include 7, 14, and 30 days.
Note: You can alternatively use the PATCH request to update the retention policy of an existing pipeline.
The following is an example:
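A sketch of a log stream entry with a custom retention period; retentionInDays is the field this guide names, while the surrounding destinations structure is an assumption:

```json
{
  "source": "kubernetes",
  "tag": "k8s-logs",
  "protocol": "http",
  "destinations": [
    { "type": "loki", "retentionInDays": 7 }
  ]
}
```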
To set the retention period for each log stream to unlimited, use the following request:
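A sketch of such a request; the path and body schema are assumptions consistent with the examples above:

```bash
curl --request PATCH "https://logging.de-txl.ionos.com/pipelines/{pipelineID}" \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "properties": {
      "name": "demo-pipeline",
      "logs": [
        { "source": "kubernetes", "tag": "k8s-logs", "protocol": "http",
          "destinations": [ { "type": "loki", "retentionInDays": 0 } ] }
      ]
    }
  }'
```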
Note: The retention period is set to unlimited when the retentionInDays value is set to 0.
In Grafana, you can only access the data for a maximum of 30 days. To access the archived data (logs with unlimited retention), contact IONOS Cloud Support.
You must provide the following information while requesting access:
Contract number
Pipeline ID
Log stream tag
The date range for which you need access to the data
S3 bucket information:
Name
Region
S3 endpoint
Note: Ensure that the S3 Canonical User ID you receive from IONOS Cloud Support is assigned to the designated bucket. Additionally, confirm that write access has been granted to this specific bucket.
After receiving the required information, IONOS transfers the archived data into the designated S3 bucket. Following the successful upload, the data becomes accessible from within the S3 bucket. To add the grantee to your bucket, see the instructions.
Note: Storing logs for an indefinite period will increase storage costs. Our pricing for long-term log storage aligns with our standard S3 pricing model.
The Logging Service platform provides a centralized and scalable solution for collecting, monitoring, and analyzing logs from your application and infrastructure. It utilizes specialized components to aggregate, parse, and store the logs seamlessly.
The product contains enhancements that allow additional users under your contract number to create pipelines and log in to Grafana with their IONOS credentials. You can also enable group members to access and manage the Logging Service by granting specific access rights to the group. For more information, see Set User Privileges for Logging Service.
Logging Service has its own set of limitations. For more information, see Logging Service Limitations.
Your data is secure and is not shared with other users of the same platform. The following defines the level of security provided by the platform:
Each pipeline uses its own unique endpoint to send logs to the log server over Transport Layer Security (TLS).
Each user's data is kept in a separate partition.
Each chunk of data within the partition is secured by server-side and client-side encryption.
Yes, we recommend configuring firewall rules using the authorized IP addresses associated with each endpoint to restrict the traffic to Logging Service endpoints. The following is a list of locations and their specific IP addresses:
| Location | Endpoint | Authorized IP addresses |
| --- | --- | --- |
| Berlin | https://logging.de-txl.ionos.com | 85.215.196.160, 85.215.203.152, 85.215.248.62, 85.215.248.97 |
| Logroño | https://logging.es-vit.ionos.com | 194.164.165.102, 85.215.203.152, 85.215.248.62, 85.215.248.97 |
| London | https://logging.gb-lhr.ionos.com | 217.160.157.83, 217.160.157.82, 217.160.157.89, 82.165.230.137 |
| Frankfurt | https://logging.de-fra.ionos.com | 82.215.75.93, 85.215.75.92, 217.160.218.121, 217.160.218.124 |
| Paris | https://logging.fr-par.ionos.com | 5.250.177.220, 5.250.178.35, 5.250.177.226, 5.250.178.34 |
The IONOS Logging Service provides a centralized and scalable solution for logging, monitoring, and analyzing your application and infrastructure logs. It offers a wide range of features to help you collect and analyze your logs from different log sources, monitor performance effectively, and gain insights into your system's behavior.
Note: Logging Service is currently available only through the API, without the DCD implementation.
Get an overview of the Logging Service.
An overview of the product and its components.
Information about its features and benefits.
Information about various use cases of Logging Service.
Information about log collection, log agent, and Fluent Bit configuration.
Information about various log sources.
Information about the log pipelines.
Information about safety mechanisms for logs pushed from various sources.
Set relevant privileges via the DCD.
Learn how to set User privileges for Logging Service.
Get started with Logging Service via the API.
Create an instance of a logging pipeline.
Obtain a new key for a logging pipeline.
Update an existing logging pipeline.
Retrieve information about a specific logging pipeline.
Delete a specific logging pipeline.
Customize the retention policy for each log stream.
Use the pre-configured IONOS Telemetry API datasource to query metrics from the IONOS Cloud Telemetry API.
Steps to send logs to the logging platform.
Steps to access logs from the logging platform.
To get answers to the most commonly encountered questions about the Logging Service platform, see Logging Service FAQs.
The Telemetry API is an API that allows you to interact with the IONOS Cloud Telemetry service, and it is compatible with Prometheus specifications.
The Telemetry API allows retrieval of instance metrics; it is a read-only API and does not support any write operations. Although the Prometheus specification contains many more API resources and operations, the Telemetry API selectively supports the following GET operations at the moment:
The managed Grafana in the Logging Service comes with a pre-configured datasource for the Telemetry API called IONOS Telemetry. You can use this datasource to query metrics from the IONOS Cloud Telemetry API.
The Telemetry API uses the same authentication as the IONOS Cloud API. You can use the same API token to authenticate with the Telemetry API. This means you need to update the IONOS Telemetry datasource with your API token:
Follow the instructions in Create new tokens to generate a token.
In Grafana, open the IONOS Telemetry datasource settings and configure the HTTP Authorization header with the generated token.
Once the header is configured, select Save & test.
Logging Service is a cloud-based service that allows you to store, manage, and analyze logs generated by your applications or systems. It is a scalable and cost-effective solution for organizations that need to manage and analyze large volumes of logs.
Logging Service is also an efficient log management solution, which contains several important features and benefits that can help organizations improve their log management practices and troubleshoot issues more quickly.
Log data is the key to an early detection of security incidents; thus, using the Logging Service can improve your operational efficiency, enhance your security, and increase your visibility into your system's performance and errors. Log data is available 24/7, secure, high-performance, and scalable. Your data is encrypted before, during, and after it is transmitted. The Fluent Bit agent gathers log data locally, while the Logging Service forms the basis for analysis and better visualization of your logs with increased security.
The managed Grafana in the Logging Service comes with a pre-configured datasource for the Telemetry API called IONOS Telemetry. You can use this datasource to query metrics from the IONOS Cloud Telemetry API. For more information, see Integration with IONOS Telemetry API.
Logging Service also enables you to configure an unlimited log retention period for your logs. For more information, see Modify the Log Retention Policy.
The architecture of the Logging Service includes the following three main components that can be used to aggregate logs from various sources, analyze the gathered logs, and create visualizations for monitoring and report generation.
Data Collection: Data is collected from various log sources, such as applications, servers, and network devices, which are sent to a centralized Logging Service platform. Currently, the following log sources are supported: Kubernetes, Docker, Linux Systemd, HTTP (JSON REST API), and Generic.
Logging Service Platform: The Logging Service platform stores the logs gathered from various sources in a centralized location for easy access. This data can be accessed for analysis and troubleshooting. The platform includes log search, routing, storage, analytics, and visualization features. For more information, see Log Collection.
Analytics and Visualization: Grafana, an analytics and visualization tool, allows you to analyze the log data to identify patterns and trends, visualize the log data, and generate reports. You can also use these reports to secure your log data from threats or troubleshoot underlying issues.
The illustration shows the default components of the Logging Service platform and the following is a brief description of the components:
Systems (1, 2, and n): These are the various log sources with Fluent Bit installed to gather, parse, and redirect logs to the Logging Service platform.
Logging Service Platform:
It contains an in-built log aggregator, Fluentd, which is compatible with various sources and targets, making it hassle-free to ship logs from source to destination. Fluentd aggregates, processes, and ships data to the stipulated target.
Fluentd feeds logs to Loki, which stores and aggregates them before forwarding them to the visualization tool, Grafana. Loki also works as an aggregation tool that indexes and groups log streams based on their labels.
The logs are displayed in the Grafana dashboard. You may generate reports, edit your dashboards according to your needs, and visualize the data accordingly.
The following are the key limitations of Logging Service:
| Aspect | Description | Limit |
| --- | --- | --- |
| Service Access | Means of creating and managing pipelines. | REST API only |
| HTTP Rate Limit | Default rate limit for HTTP requests per pipeline during log ingestion. | 50 requests/second |
| TCP Bandwidth | Default TCP bandwidth limit per pipeline, approximately in terms of logs per second. | ~10,000 logs/second |
| Maximum Pipelines | The maximum number of pipelines allowed per contract. | 5 pipelines |
| Log Streams per Pipeline | The maximum number of log streams allowed per pipeline. | 10 log streams/pipeline |