IONOS Event Streams for Apache Kafka is a fully managed Apache Kafka service. Apache Kafka is an open-source event streaming platform capable of handling trillions of daily events. With IONOS Event Streams for Apache Kafka, users can easily set up, scale, and operate Kafka clusters without the need to manage the underlying infrastructure.
At its core, the goal of Event Streams for Apache Kafka is to democratize data streaming and make it accessible to organizations of all sizes and industries. We recognize that Kafka's capabilities are vast, but its successful deployment and ongoing management have traditionally required a deep pool of specialized expertise. With this product, we aim to bridge the gap between Kafka's potential and its practical utilization. By offering managed Event Streams for Apache Kafka service in the cloud, we empower businesses to focus on innovation and value creation rather than the intricacies of infrastructure management.
Event Streams for Apache Kafka API
Explore the API Specification of Event Streams for Apache Kafka
SDKs
Explore the SDKs of Event Streams for Apache Kafka
Terraform
Explore Terraform for Event Streams for Apache Kafka
Overview
Discover features, benefits, and use cases for streamlined data streaming.
Use Cases
Explore the key use cases to implement Event Streams for Apache Kafka
DCD How-Tos
Get started with creating and managing Event Streams for Apache Kafka via the DCD.
API How-Tos
Get started with creating and managing Event Streams for Apache Kafka via the API.
As with our other services, IONOS Event Streams for Apache Kafka is fully integrated into the Data Center Designer and has a dedicated API.
You can provision a robust cluster composed of multiple redundant nodes designed to maintain continuous operation, even in the event of individual node failures. This setup includes automatic failover mechanisms to ensure high availability and minimize downtime. For more comprehensive information on configuring and managing these features, please see our High Availability and Scaling documentation.
Explore the powerful features and benefits of IONOS Event Streams for Apache Kafka. This fully-managed service offers high throughput, low latency, scalability, and robust security features for all your data streaming and real-time analytics needs. Learn more about how IONOS Event Streams for Apache Kafka can transform your data infrastructure in the Features and Benefits page.
IONOS Kafka is suitable for various use cases such as real-time data processing, event-driven architectures, log aggregation, monitoring, and many more where high-throughput, fault tolerance, and real-time streaming data processing are required. Visit our Use Cases section for more information.
The IONOS Event Streams for Apache Kafka service is designed to support the needs of your applications and development cycles. At this time, we support version 3.7.0, ensuring a stable and optimized experience for all users.
IONOS offers a variety of cluster sizes tailored to different application needs, from development to enterprise-level deployments. Each cluster size is designed with specific hardware configurations to ensure optimal performance and capacity. For a detailed breakdown of our cluster sizes and their respective configurations, including node count, cores, RAM, and storage, please refer to our comprehensive Cluster Sizes section.
Our cloud-hosted Kafka service is designed to provide high availability and low-latency access to your data, regardless of where your applications are hosted. We offer Kafka clusters in multiple geographical regions to ensure optimal performance and compliance with local data regulations. The following locations are currently available:
Berlin, Germany (de-txl)
Frankfurt, Germany (de-fra)
Security is a paramount consideration for any cloud-hosted service, and the IONOS Event Streams for Apache Kafka offering is designed with multiple layers of security to protect your data and ensure compliance with industry standards. We provide a comprehensive suite of security features to safeguard your Kafka clusters against unauthorized access, data breaches, and other security threats.
Encrypted Communication: Our Kafka service supports Transport Layer Security (TLS) to encrypt data in transit. This ensures that all communication between clients and Kafka brokers, as well as between Kafka brokers themselves, is securely encrypted, preventing eavesdropping and man-in-the-middle attacks.
Beyond encryption, TLS also provides robust authentication mechanisms. By using TLS certificates, we ensure that both the client and server can verify each other's identity, adding an extra layer of security to prevent unauthorized access to your Kafka cluster.
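A client connecting to a TLS-secured cluster typically supplies a CA certificate to verify the brokers plus its own certificate and key for mutual authentication. The sketch below builds such a configuration; the property names follow the librdkafka convention (as used by confluent-kafka-python), and the broker address and file paths are placeholders, so substitute the values issued for your cluster.

```python
# Sketch of a TLS/mTLS client configuration for a Kafka cluster.
# Property names follow the librdkafka convention; all values below
# are placeholders, not actual IONOS endpoints or credentials.

def tls_client_config(bootstrap_servers, ca_path, cert_path, key_path):
    """Build a Kafka client configuration that encrypts traffic with TLS
    and authenticates the client with its own certificate (mTLS)."""
    return {
        "bootstrap.servers": bootstrap_servers,  # broker address(es)
        "security.protocol": "SSL",              # encrypt data in transit
        "ssl.ca.location": ca_path,              # CA used to verify brokers
        "ssl.certificate.location": cert_path,   # client certificate
        "ssl.key.location": key_path,            # client private key
    }

config = tls_client_config(
    "192.168.1.101:9093",       # placeholder broker address
    "ca-certificate.pem",
    "client-certificate.pem",
    "client-key.pem",
)
```

Passing such a dictionary to a librdkafka-based producer or consumer enables both the encryption and the certificate-based mutual authentication described above.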
High availability is critical for applications that require uninterrupted data flow and processing. Our Kafka service is designed to deliver robust fault tolerance and automatic recovery mechanisms to keep your data pipelines resilient.
You can provision a cluster with multiple redundant nodes, ensuring that the failure of a single node does not impact the overall availability of the service. This redundancy is pivotal in maintaining data integrity and continuous service operation.
Our service includes automatic failover capabilities, which promptly redirect traffic to healthy nodes in the event of a failure. This mechanism minimizes downtime and ensures your applications remain unaffected by individual node outages.
You can set the replication factor for your Kafka topics to determine how many copies of each message are stored across different brokers. A higher replication factor enhances fault tolerance by ensuring that even if one or more brokers fail, your data remains available.
Our Kafka service provides extensive configuration options to fine-tune your deployment.
You can configure the number of partitions for each topic, allowing for parallel processing and increasing throughput. More partitions enable better load balancing across consumers, improving the overall performance and scalability of your Kafka cluster.
The retention time determines how long messages are retained in a topic before being discarded. You can adjust the retention time to suit your application's data lifecycle needs, ensuring that data is available for as long as necessary without overwhelming your storage capacity.
Along with retention time, you can set the retention size, which limits the total amount of data stored for a topic. Once the size limit is reached, older messages are purged to make room for new ones. This setting helps manage storage usage and costs effectively.
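The three options above—partition count, retention time, and retention size—can be pictured as a single topic configuration. In the sketch below, `retention.ms` and `retention.bytes` are standard Kafka topic-level setting names; the chosen values (6 partitions, 7 days, 50 GiB) are examples only, not recommendations.

```python
# Illustrative topic settings combining the configuration options
# described above. retention.ms and retention.bytes are standard
# Kafka topic-level keys; the values are examples only.

DAY_MS = 24 * 60 * 60 * 1000  # milliseconds per day

topic_config = {
    "partitions": 6,                    # parallelism across consumers
    "retention.ms": 7 * DAY_MS,         # keep messages for 7 days
    "retention.bytes": 50 * 1024 ** 3,  # size cap before old messages purge
}
```

Note that in Kafka, `retention.bytes` applies per partition, so the effective storage ceiling of a topic scales with its partition count.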
In various industries, the IONOS Kafka service plays a pivotal role in enabling scalable and real-time data management solutions. Below, we explore two compelling use cases where organizations leverage IONOS Kafka to handle complex data challenges, achieve operational efficiency, and drive actionable insights.
Overview: A global e-commerce platform relies on our cloud-hosted Kafka service to manage and process real-time data streams efficiently. With millions of transactions occurring daily across various regions, the platform needs a robust solution to handle data ingestion, processing, and analysis in real-time.
Challenge: The e-commerce platform faces challenges in aggregating and processing a vast amount of real-time data from multiple sources, including customer interactions, inventory updates, and transaction logs. Traditional database systems struggle to handle the volume and velocity of incoming data, leading to latency issues and scalability limitations.
Solution: By leveraging the IONOS Event Streams for Apache Kafka service, the platform establishes a scalable and fault-tolerant data pipeline. They deploy Kafka clusters in multiple regions to ensure low-latency data processing closer to their users. Producers within their ecosystem, such as mobile apps and web services, stream data into Kafka topics in real-time. Kafka's distributed architecture and partitioning capabilities enable parallel data processing, ensuring high throughput and low latency for consumers downstream.
Implementation:
Cluster Configuration: They opt for an XL-sized Kafka cluster with multiple nodes, high CPU, RAM, and storage capacity to handle peak data loads.
Stream Processing: Apache Kafka Streams API enables real-time processing and analytics directly within the Kafka ecosystem. They implement complex event processing (CEP) to derive actionable insights, such as personalized recommendations and fraud detection, in real-time.
Benefits:
Scalability: Kafka's horizontal scaling capabilities allow the platform to handle increasing data volumes and peak traffic periods without compromising performance.
Real-time Insights: By processing data in real-time, the platform gains actionable insights faster, enhancing customer experience and operational efficiency.
Reliability: IONOS Kafka's fault-tolerant architecture ensures data durability and continuous availability, reducing the risk of data loss or downtime.
Cost Efficiency: Optimized resource allocation and efficient data processing translate to cost savings compared to traditional data processing solutions.
Overview: A smart city initiative utilizes our IONOS Kafka service to manage and analyze IoT data generated by sensors deployed across the city. The initiative aims to improve urban planning, public safety, and resource management through data-driven insights and real-time monitoring.
Challenge: The smart city faces challenges in managing and processing vast amounts of real-time data generated by IoT devices, including traffic sensors, environmental monitors, and public safety cameras. They require a scalable and reliable solution to ingest, process, and analyze this diverse data in real-time to make informed decisions and respond to events promptly.
Solution: Our Kafka service provides a robust foundation for their IoT data management and analytics platform. They deploy Kafka clusters in a distributed architecture across the city's data centers and edge locations to ensure proximity to data sources and reduce latency. IoT devices stream data continuously into Kafka topics, where it is processed and analyzed in real-time to derive actionable insights.
Implementation:
Real-time Analytics: They leverage Kafka Streams and Apache Flink for stream processing and complex event processing (CEP) to detect anomalies, predict traffic patterns, and optimize resource allocation in real-time.
Integration with AI/ML: They integrate Kafka with AI/ML pipelines to perform predictive analytics and automate decision-making processes based on real-time insights.
Security and Compliance: Kafka's robust security features, including TLS encryption, authentication, and authorization mechanisms, ensure data confidentiality and compliance with regulatory requirements.
Benefits:
Operational Efficiency: Real-time data processing and analytics enable proactive management of city resources, improving efficiency and responsiveness to citizen needs.
Enhanced Safety: Real-time monitoring and predictive analytics help identify potential safety hazards, enabling quick response and mitigation measures.
Scalability: Kafka's horizontal scaling capabilities accommodate the growth of IoT devices and data volume, ensuring scalability without compromising performance.
Data-driven Decision Making: By harnessing real-time insights, the smart city makes data-driven decisions that optimize infrastructure usage and enhance quality of life for residents.
IONOS Event Streams for Apache Kafka offers a range of cluster sizes to meet the diverse needs of different applications, from development and testing environments to large-scale production deployments. Each cluster size is designed with specific hardware configurations to provide the right balance of performance, capacity, and cost-efficiency.
Node Count: 3
Cores per Node: 1
RAM per Kafka Broker: 2 GB
Storage per Broker: 195 GB
Total Storage: 585 GB
The XS cluster is ideal for development, testing, and small-scale applications that require a modest amount of resources. It provides sufficient capacity to handle light workloads while maintaining cost efficiency.
Node Count: 3
Cores per Node: 2
RAM per Kafka Broker: 4 GB
Storage per Broker: 250 GB
Total Storage: 750 GB
The S cluster is suitable for small to medium-sized applications that need moderate resources. It offers enhanced performance and storage capacity compared to the XS cluster, making it a good choice for applications with higher throughput and storage requirements.
Node Count: 3
Cores per Node: 2
RAM per Kafka Broker: 8 GB
Storage per Broker: 400 GB
Total Storage: 1200 GB
The M cluster is designed for medium-sized applications that demand higher performance and greater storage capacity. It provides a balanced configuration that can handle increased data volumes and more intensive processing tasks.
Node Count: 3
Cores per Node: 4
RAM per Kafka Broker: 16 GB
Storage per Broker: 800 GB
Total Storage: 2400 GB
The L cluster is well-suited for large applications with high throughput and substantial storage needs. With more cores and RAM per broker, this cluster size delivers superior performance and can support more demanding workloads.
Node Count: 3
Cores per Node: 8
RAM per Kafka Broker: 32 GB
Storage per Broker: 1500 GB
Total Storage: 4500 GB
The XL cluster is designed for enterprise-level applications and extremely high-throughput environments. It offers the highest performance and storage capacity, ensuring that even the most demanding applications can run smoothly and efficiently.
IONOS Event Streams for Apache Kafka allows you to choose the cluster size that best fits your current needs while providing the flexibility to scale as your requirements evolve. You can start with a smaller cluster and easily upgrade to a larger size as your data volumes and processing demands increase. This flexibility ensures that you can optimize costs while maintaining the ability to grow your Kafka deployment seamlessly.
To determine the best cluster size for your needs, consider your application’s data throughput, processing requirements, and storage demands. For more detailed guidance, please refer to our Cluster Sizing Guide or contact our support team for personalized assistance. By selecting the appropriate cluster size, you can ensure that your Kafka deployment is both cost-effective and capable of meeting your application’s performance requirements.
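As a rough starting point, the figures from the size tables above can be turned into a small selection helper. The sketch below picks the smallest cluster whose total storage meets a required capacity; the storage-only rule is a deliberate simplification, since throughput, CPU, and RAM needs matter just as much.

```python
# A simple helper based on the cluster sizes listed above: pick the
# smallest cluster whose total storage meets a required capacity.
# The hardware figures come from this section; the storage-only
# selection rule is a simplification for illustration.

CLUSTER_SIZES = [
    # (name, cores per node, RAM per broker GB, storage per broker GB, total GB)
    ("XS", 1, 2, 195, 585),
    ("S",  2, 4, 250, 750),
    ("M",  2, 8, 400, 1200),
    ("L",  4, 16, 800, 2400),
    ("XL", 8, 32, 1500, 4500),
]

def smallest_size_for(required_total_gb):
    """Return the smallest cluster size offering at least the given total storage."""
    for name, _, _, _, total in CLUSTER_SIZES:
        if total >= required_total_gb:
            return name
    return None  # requirement exceeds the largest offering

print(smallest_size_for(1000))  # -> M (first size with >= 1000 GB total)
```

For an application expecting 1 TB of retained data, this rule suggests an M cluster (1200 GB total); a requirement above 4500 GB exceeds the largest offering.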
Before setting up a Kafka cluster, ensure that you are working within a provisioned Virtual Data Center (VDC) that contains at least one VM in a private LAN; clients on this LAN will access the cluster. The VM you create is counted against the quota allocated in your contract.
To create a Kafka cluster, follow these steps:
Log in to the DCD with your username and password.
Go to Menu > Analytics > Event Streams for Apache Kafka.
Click Create cluster.
Configure your cluster properties.
Cluster Name: Enter a unique name for your cluster. This name will help you identify the cluster in the dashboard.
Kafka Version: Select the Kafka version you wish to use. Currently, only version 3.7.0 is supported.
Cluster Size: Choose the appropriate size for your cluster:
XS (Extra Small): Suitable for small-scale testing and development environments.
S (Small): Ideal for slightly larger workloads with moderate throughput needs.
M (Medium): Designed for medium-sized workloads with significant data volume and consistent throughput requirements.
L (Large): Suitable for large-scale applications that require high throughput and robust fault tolerance.
XL (Extra Large): Tailored for enterprise-grade, high-performance environments with extremely high throughput and storage needs.
For more details on sizing, refer to the Cluster Sizes section.
Location: Select the region for your cluster. For the regions currently available, see the Locations list above.
Choose a datacenter and a LAN.
Datacenter: Select the datacenter and region where you want to create your cluster. The region you choose determines the geographical placement of your Kafka cluster, affecting latency and compliance with local regulations.
Datacenter LAN: Your cluster will be connected to this network.
Configure the addresses for your Kafka brokers. These addresses will be used by clients to connect to the Kafka cluster. Ensure that the addresses are correctly configured to allow for seamless communication between the brokers and your clients.
The IP addresses you choose must belong to the same subnet as the LAN you selected in the previous step; this ensures proper network routing and communication within the cluster. To find your subnet IP, follow the instructions on the DCD screen.
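This subnet requirement can be checked up front with Python's standard-library ipaddress module. The sketch below validates candidate broker addresses against a LAN subnet; the subnet and addresses shown are examples, not values from your VDC.

```python
# A stdlib check that candidate broker addresses fall inside the
# subnet of the selected LAN. The subnet and addresses are examples.
import ipaddress

def brokers_in_subnet(subnet_cidr, broker_addresses):
    """True if every broker IP (given in CIDR notation) belongs to the LAN subnet."""
    subnet = ipaddress.ip_network(subnet_cidr)
    return all(
        ipaddress.ip_interface(addr).ip in subnet
        for addr in broker_addresses
    )

brokers = ["192.168.1.101/24", "192.168.1.102/24", "192.168.1.103/24"]
print(brokers_in_subnet("192.168.1.0/24", brokers))  # -> True
```

Running such a check before submitting the form catches mismatched addresses early, instead of discovering them as a failed deployment.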
Info: The estimated costs are displayed based on your input. This is an estimate; certain variables, such as traffic, are not considered.
Click Save to deploy your cluster.
Result: Your cluster is now being deployed. You can monitor its progress by returning to the Event Streams for Apache Kafka menu.
The Kafka Cluster Deletion feature allows users to remove a Kafka cluster from their environment. Follow the steps below to delete a Kafka cluster.
Warning:
Deleting a Kafka cluster is irreversible. Once a cluster is deleted, all the data and configurations contained within it will be permanently removed.
Ensure that you have backed up any necessary data before deleting the cluster.
Log in to the DCD with your username and password.
Go to Menu > Analytics > Event Streams for Apache Kafka.
Identify the Kafka cluster you wish to delete from the list. Ensure that you are selecting the correct cluster.
Click the icon located in the Options column for the desired Kafka cluster.
From the drop-down menu, click Delete.
A confirmation dialog will appear, asking you to confirm the deletion. Ensure that you understand this action is irreversible and will permanently delete the cluster and all its data. Confirm the deletion to proceed.
Result: The cluster will be permanently deleted.
Create a Kafka Cluster
Learn how to set up a Kafka cluster
View Kafka Clusters
Learn how to view the list of Kafka clusters
Create a Kafka Topic
Learn how to create a Kafka topic
Delete a Kafka Topic
Learn how to delete a Kafka topic
Delete a Kafka Cluster
Learn how to delete a Kafka cluster
Allows you to delete a Kafka cluster based on its ID.
DELETE /clusters/{clusterId}
The DELETE /clusters/{clusterId} endpoint initiates the deletion of a Kafka cluster identified by its unique UUID (clusterId). Upon a successful deletion request, the endpoint returns a 200 Successful operation status code, indicating that the cluster deletion process has been initiated.
This action permanently removes the specified Kafka cluster and all associated resources. Use caution when invoking this endpoint as it cannot be undone.
Use this endpoint to manage and decommission Kafka clusters within your environment, ensuring efficient resource utilization and lifecycle management.
The following header fields are mandatory for authenticated requests to the API:
Accept (required, string): Set this to application/json.
Authorization (required, string): Provide a header value as Bearer followed by your token.
The endpoint takes one path parameter:
clusterId (required, string): The UUID of the Kafka cluster. Example: e69b22a5-8fee-56b1-b6fb-4a07e4205ead
200 Successful operation: The request to delete the cluster was successful.
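The call above can be sketched with Python's urllib. The request is constructed but not sent here; the base URL is a placeholder (use the endpoint from the API specification), and TOKEN stands in for your actual bearer token.

```python
# Sketch of the DELETE /clusters/{clusterId} call. The base URL is a
# placeholder -- take the real endpoint from the API specification --
# and TOKEN stands in for your bearer token.
import urllib.request

BASE_URL = "https://api.example.com"  # placeholder, not the real endpoint
cluster_id = "e69b22a5-8fee-56b1-b6fb-4a07e4205ead"

request = urllib.request.Request(
    url=f"{BASE_URL}/clusters/{cluster_id}",
    method="DELETE",
    headers={
        "Accept": "application/json",
        "Authorization": "Bearer TOKEN",  # replace with your token
    },
)
# urllib.request.urlopen(request) would send it; a 200 response
# indicates the deletion process has been initiated.
```

Remember that this operation is irreversible, so scripts automating it should double-check the cluster ID before sending the request.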
You can create a new Kafka cluster with specified configurations.
Note:
Only contract administrators, owners, and users with Access and manage Event Streams for Apache Kafka privileges can create and manage Kafka clusters.
After creating the cluster, you can use it via the corresponding LAN and certificates.
The data center must be provided as a UUID. The easiest way to retrieve the UUID is through the Cloud API.
POST /clusters
The POST /clusters endpoint allows you to create a new Kafka cluster with specified properties. The name, version, size, and connection fields are required. The response includes the newly created cluster's ID, metadata, and properties, along with its current state and broker addresses.
Use this endpoint to provision a Kafka cluster tailored to your application's requirements, ensuring seamless integration and efficient data management.
To make authenticated requests to the API, the following header fields are mandatory:
Content-Type (required, string): Set this to application/json.
Accept (required, string): Set this to application/json.
Authorization (required, string): Provide a header value as Bearer followed by your token.
Below is the list of mandatory body parameters:
name (string): The name of the Kafka cluster. Example: my-kafka-cluster
version (string): The version of Kafka to use for the cluster. Example: 3.7.0
size (string): The size of the Kafka cluster. Example: S
datacenterId (string): The UUID of the data center where the cluster will be created. Example: 5a029f4a-72e5-11ec-90d6-0242ac120003
lanId (string): The LAN ID where the cluster will be connected. Example: 2
brokerAddresses (array): List of broker addresses for the cluster. Example: ["192.168.1.101/24","192.168.1.102/24","192.168.1.103/24"]
200 Successful operation
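The mandatory body parameters above can be assembled into a JSON request body as sketched below. The values mirror the examples in the parameter list; the exact nesting of the connection fields in the final payload should be checked against the API specification, as this flat layout follows the parameter list as documented here.

```python
# Assembling the documented mandatory body parameters for POST /clusters
# into a JSON request body. Values mirror the examples above; verify the
# exact payload structure against the API specification before use.
import json

payload = {
    "name": "my-kafka-cluster",
    "version": "3.7.0",
    "size": "S",
    "datacenterId": "5a029f4a-72e5-11ec-90d6-0242ac120003",
    "lanId": "2",
    "brokerAddresses": [
        "192.168.1.101/24",
        "192.168.1.102/24",
        "192.168.1.103/24",
    ],
}

body = json.dumps(payload, indent=2)  # serialized request body
```

Sent with the Content-Type, Accept, and Authorization headers described above, a successful request returns the new cluster's ID, metadata, and properties.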