High availability is critical for applications that require uninterrupted data flow and processing. Our Kafka service is designed to deliver robust fault tolerance and automatic recovery mechanisms to keep your data pipelines resilient.
You can provision a cluster with multiple redundant nodes, ensuring that the failure of a single node does not impact the overall availability of the service. This redundancy is pivotal in maintaining data integrity and continuous service operation.
Our service includes automatic failover capabilities, which promptly redirect traffic to healthy nodes in the event of a failure. This mechanism minimizes downtime and ensures your applications remain unaffected by individual node outages.
You can set the replication factor for your Kafka topics to determine how many copies of each message are stored across different brokers. A higher replication factor enhances fault tolerance by ensuring that even if one or more brokers fail, your data remains available.
Our Kafka service provides extensive configuration options to fine-tune your deployment.
You can configure the number of partitions for each topic, allowing for parallel processing and increasing throughput. More partitions enable better load balancing across consumers, improving the overall performance and scalability of your Kafka cluster.
The retention time determines how long messages are retained in a topic before being discarded. You can adjust the retention time to suit your application's data lifecycle needs, ensuring that data is available for as long as necessary without overwhelming your storage capacity.
Along with retention time, you can set the retention size, which limits the total amount of data stored for a topic. Once the size limit is reached, older messages are purged to make room for new ones. This setting helps manage storage usage and costs effectively.
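Taken together, partitions, retention time, and segment size map onto a topic's properties in the API. Below is a minimal sketch of a topic-configuration body using the field names from the topic body parameters; the topic name and counts are illustrative placeholders, not recommendations:

```python
import json

# Sketch of a topic-creation body combining the settings discussed above.
# Field names follow the API's topic body parameters; values are illustrative.
topic_body = {
    "properties": {
        "name": "orders",                # required: the topic name
        "replicationFactor": 3,          # copies of each message across brokers
        "numberOfPartitions": 6,         # parallelism and consumer load balancing
        "logRetention": {
            "retentionTime": 604800000,  # retention time in ms; 7 days is the default
            "segmentBytes": 1073741824,  # log segment size in bytes; 1 GB is the default
        },
    },
}

print(json.dumps(topic_body, indent=2))
```

A higher `replicationFactor` trades storage for fault tolerance, while more partitions increase parallelism at the cost of per-partition overhead.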
As with our other services, IONOS Event Streams for Apache Kafka is fully integrated into the Data Center Designer and has a dedicated API.
You can provision a robust cluster composed of multiple redundant nodes designed to maintain continuous operation, even in the event of individual node failures. This setup includes automatic failover mechanisms to ensure high availability and minimize downtime. For more comprehensive information on configuring and managing these features, please see our High Availability and Scaling documentation.
Explore the powerful features and benefits of IONOS Event Streams for Apache Kafka. This fully-managed service offers high throughput, low latency, scalability, and robust security features for all your data streaming and real-time analytics needs. Learn more about how IONOS Event Streams for Apache Kafka can transform your data infrastructure in the Features and Benefits page.
IONOS Kafka is suitable for various use cases such as real-time data processing, event-driven architectures, log aggregation, monitoring, and many more where high-throughput, fault tolerance, and real-time streaming data processing are required. Visit our Use Cases section for more information.
The IONOS Event Streams for Apache Kafka service is designed to support the needs of your applications and development cycles. At this time, we support version 3.7.0, ensuring a stable and optimized experience for all users.
IONOS offers a variety of cluster sizes tailored to different application needs, from development to enterprise-level deployments. Each cluster size is designed with specific hardware configurations to ensure optimal performance and capacity. For a detailed breakdown of our cluster sizes and their respective configurations, including node count, cores, RAM, and storage, please refer to our comprehensive Cluster Sizes section.
Our cloud-hosted Kafka service is designed to provide high availability and low-latency access to your data, regardless of where your applications are hosted. We offer Kafka clusters in multiple geographical regions to ensure optimal performance and compliance with local data regulations. The following locations are currently available:
Berlin, Germany (de-txl)
Frankfurt, Germany (de-fra)
Security is a paramount consideration for any cloud-hosted service, and the IONOS Event Streams for Apache Kafka offering is designed with multiple layers of security to protect your data and ensure compliance with industry standards. We provide a comprehensive suite of security features to safeguard your Kafka clusters against unauthorized access, data breaches, and other security threats.
Encrypted Communication: Our Kafka service supports Transport Layer Security (TLS) to encrypt data in transit. This ensures that all communication between clients and Kafka brokers, as well as between Kafka brokers themselves, is securely encrypted, preventing eavesdropping and man-in-the-middle attacks.
Beyond encryption, TLS also provides robust authentication mechanisms. By using TLS certificates, we ensure that both the client and server can verify each other's identity, adding an extra layer of security to prevent unauthorized access to your Kafka cluster.
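As a sketch of how a client would use these TLS features, the settings below follow the librdkafka configuration keys used by clients such as the confluent-kafka Python package (an assumption; any TLS-capable Kafka client works). The broker address and file paths are placeholders:

```python
# Sketch of librdkafka-style client settings for mutual TLS.
# The broker address and file paths are placeholders, not real values.
tls_config = {
    "bootstrap.servers": "broker-0.example.ionos.com:9093",
    "security.protocol": "SSL",
    "ssl.ca.location": "ca.crt",             # certificate authority for the cluster
    "ssl.certificate.location": "user.crt",  # client certificate (proves the client's identity)
    "ssl.key.location": "user.key",          # client private key
}

# A producer built on these settings would be created as, for example:
#   from confluent_kafka import Producer
#   producer = Producer(tls_config)
```

Because both sides present certificates, the broker verifies the client just as the client verifies the broker, which is the mutual authentication described above.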
In various industries, the IONOS Kafka service plays a pivotal role in enabling scalable and real-time data management solutions. Below, we explore two compelling use cases where organizations leverage IONOS Kafka to handle complex data challenges, achieve operational efficiency, and drive actionable insights.
Overview: A global e-commerce platform relies on our cloud-hosted Kafka service to manage and process real-time data streams efficiently. With millions of transactions occurring daily across various regions, the platform needs a robust solution to handle data ingestion, processing, and analysis in real-time.
Challenge: The e-commerce platform faces challenges in aggregating and processing a vast amount of real-time data from multiple sources, including customer interactions, inventory updates, and transaction logs. Traditional database systems struggle to handle the volume and velocity of incoming data, leading to latency issues and scalability limitations.
Solution: By leveraging the IONOS Event Streams for Apache Kafka service, the platform establishes a scalable and fault-tolerant data pipeline. They deploy Kafka clusters in multiple regions to ensure low-latency data processing closer to their users. Producers within their ecosystem, such as mobile apps and web services, stream data into Kafka topics in real-time. Kafka's distributed architecture and partitioning capabilities enable parallel data processing, ensuring high throughput and low latency for consumers downstream.
Implementation:
Cluster Configuration: They opt for an XL-sized Kafka cluster with multiple nodes, high CPU, RAM, and storage capacity to handle peak data loads.
Stream Processing: Apache Kafka Streams API enables real-time processing and analytics directly within the Kafka ecosystem. They implement complex event processing (CEP) to derive actionable insights, such as personalized recommendations and fraud detection, in real-time.
Benefits:
Scalability: Kafka's horizontal scaling capabilities allow the platform to handle increasing data volumes and peak traffic periods without compromising performance.
Real-time Insights: By processing data in real-time, the platform gains actionable insights faster, enhancing customer experience and operational efficiency.
Reliability: IONOS Kafka's fault-tolerant architecture ensures data durability and continuous availability, reducing the risk of data loss or downtime.
Cost Efficiency: Optimized resource allocation and efficient data processing translate to cost savings compared to traditional data processing solutions.
Overview: A smart city initiative utilizes our IONOS Kafka service to manage and analyze IoT data generated by sensors deployed across the city. The initiative aims to improve urban planning, public safety, and resource management through data-driven insights and real-time monitoring.
Challenge: The smart city faces challenges in managing and processing vast amounts of real-time data generated by IoT devices, including traffic sensors, environmental monitors, and public safety cameras. They require a scalable and reliable solution to ingest, process, and analyze this diverse data in real-time to make informed decisions and respond to events promptly.
Solution: Our Kafka service provides a robust foundation for their IoT data management and analytics platform. They deploy Kafka clusters in a distributed architecture across the city's data centers and edge locations to ensure proximity to data sources and reduce latency. IoT devices stream data continuously into Kafka topics, where it is processed and analyzed in real-time to derive actionable insights.
Implementation:
Real-time Analytics: They leverage Kafka Streams and Apache Flink for stream processing and complex event processing (CEP) to detect anomalies, predict traffic patterns, and optimize resource allocation in real-time.
Integration with AI/ML: They integrate Kafka with AI/ML pipelines to perform predictive analytics and automate decision-making processes based on real-time insights.
Security and Compliance: Kafka's robust security features, including TLS encryption, authentication, and authorization mechanisms, ensure data confidentiality and compliance with regulatory requirements.
Benefits:
Operational Efficiency: Real-time data processing and analytics enable proactive management of city resources, improving efficiency and responsiveness to citizen needs.
Enhanced Safety: Real-time monitoring and predictive analytics help identify potential safety hazards, enabling quick response and mitigation measures.
Scalability: Kafka's horizontal scaling capabilities accommodate the growth of IoT devices and data volume, ensuring scalability without compromising performance.
Data-driven Decision Making: By harnessing real-time insights, the smart city makes data-driven decisions that optimize infrastructure usage and enhance quality of life for residents.
IONOS Event Streams for Apache Kafka is a fully managed Apache Kafka service. Apache Kafka is an open-source event streaming platform capable of handling trillions of daily events. With IONOS Event Streams for Apache Kafka, users can easily set up, scale, and operate Kafka clusters without the need to manage the underlying infrastructure.
At its core, the goal of Event Streams for Apache Kafka is to democratize data streaming and make it accessible to organizations of all sizes and industries. We recognize that Kafka's capabilities are vast, but its successful deployment and ongoing management have traditionally required a deep pool of specialized expertise. With this product, we aim to bridge the gap between Kafka's potential and its practical utilization. By offering managed Event Streams for Apache Kafka service in the cloud, we empower businesses to focus on innovation and value creation rather than the intricacies of infrastructure management.
Event Streams for Apache Kafka
Allows you to retrieve the details of a specific Kafka cluster based on its ID.
GET /clusters/{clusterId}
The `GET /clusters/{clusterId}` endpoint retrieves detailed information about a specific Kafka cluster identified by its unique UUID (`clusterId`). This endpoint returns metadata including creation and modification dates, ownership details, and current operational state. The properties section provides specific details such as the cluster name, version, size, and connection information, including broker addresses and network configurations.
Use this endpoint to fetch comprehensive details about a Kafka cluster within your environment, facilitating effective management and monitoring of Kafka resources.
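As a sketch, the request can be composed as follows. The base URL and bearer token are placeholders (substitute your region's API endpoint and valid credentials); the cluster UUID is the example value used elsewhere on this page. The request is built but not sent:

```python
from urllib.request import Request

# Build (but do not send) a GET request for one cluster's details.
# API_BASE and the token are placeholders, not real endpoints or credentials.
API_BASE = "https://api.example.ionos.com"
cluster_id = "e69b22a5-8fee-56b1-b6fb-4a07e4205ead"  # example UUID from this page

req = Request(
    f"{API_BASE}/clusters/{cluster_id}",
    headers={"Authorization": "Bearer <token>"},
    method="GET",
)
print(req.full_url)
```

Sending the request (for example with `urllib.request.urlopen(req)`) returns the JSON document described above.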
You can create a new Kafka cluster with specified configurations.
POST /clusters
The `POST /clusters` endpoint allows you to create a new Kafka cluster with specified properties. The name, version, size, and connection fields are required. The response includes the ID, metadata, and properties of the newly created cluster, along with its current state and broker addresses.
Use this endpoint to provision a Kafka cluster tailored to your application's requirements, ensuring seamless integration and efficient data management.
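A minimal sketch of a cluster-creation body follows. The `name`, `version`, and `size` values are illustrative; the connection field names (`datacenterId`, `lanId`, `brokerAddresses`) are assumed from the parameter descriptions below and should be verified against the API reference, and the bracketed values are placeholders:

```python
import json

# Sketch of a POST /clusters request body. name, version, and size are
# required. Connection field names are assumed from the parameter
# descriptions; bracketed values are placeholders to substitute.
cluster_body = {
    "properties": {
        "name": "my-kafka-cluster",
        "version": "3.7.0",  # currently supported Kafka version
        "size": "S",         # one of the offered cluster sizes (XS, S, M, L, XL)
        "connections": [
            {
                "datacenterId": "<datacenter-uuid>",
                "lanId": "<lan-id>",
                "brokerAddresses": ["<broker-address>"],
            }
        ],
    },
}

print(json.dumps(cluster_body, indent=2))
```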
Path Parameters | Required | Type | Description |
---|---|---|---|
clusterId | Yes | string | The UUID of the Kafka cluster to retrieve. |

Body Parameters | Required | Type | Description |
---|---|---|---|
metadata | No | object | Optional metadata for the cluster. |
properties | Yes | object | Properties of the cluster to be created. |
properties.name | Yes | string | The name of the Kafka cluster. |
properties.version | Yes | string | The version of Kafka to use for the cluster. |
properties.size | Yes | string | The size of the Kafka cluster. |
properties.connections | No | array | List of connections for the cluster. |
properties.connections.datacenterId | Yes | string | The UUID of the data center where the cluster will be created. |
properties.connections.lanId | Yes | string | The LAN ID where the cluster will be connected. |
properties.connections.brokerAddresses | Yes | array | List of broker addresses for the cluster. |

Explore the key use cases to implement Apache Kafka Streams.
Get started with creating and managing Apache Kafka Streams via the API.
IONOS Event Streams for Apache Kafka offers a range of cluster sizes to meet the diverse needs of different applications, from development and testing environments to large-scale production deployments. Each cluster size is designed with specific hardware configurations to provide the right balance of performance, capacity, and cost-efficiency.
Node Count: 3
Cores per Node: 1
RAM per Kafka Broker: 2 GB
Storage per Broker: 195 GB
Total Storage: 585 GB
The XS cluster is ideal for development, testing, and small-scale applications that require a modest amount of resources. It provides sufficient capacity to handle light workloads while maintaining cost efficiency.
Node Count: 3
Cores per Node: 2
RAM per Kafka Broker: 4 GB
Storage per Broker: 250 GB
Total Storage: 750 GB
The S cluster is suitable for small to medium-sized applications that need moderate resources. It offers enhanced performance and storage capacity compared to the XS cluster, making it a good choice for applications with higher throughput and storage requirements.
Node Count: 3
Cores per Node: 2
RAM per Kafka Broker: 8 GB
Storage per Broker: 400 GB
Total Storage: 1200 GB
The M cluster is designed for medium-sized applications that demand higher performance and greater storage capacity. It provides a balanced configuration that can handle increased data volumes and more intensive processing tasks.
Node Count: 3
Cores per Node: 4
RAM per Kafka Broker: 16 GB
Storage per Broker: 800 GB
Total Storage: 2400 GB
The L cluster is well-suited for large applications with high throughput and substantial storage needs. With more cores and RAM per broker, this cluster size delivers superior performance and can support more demanding workloads.
Node Count: 3
Cores per Node: 8
RAM per Kafka Broker: 32 GB
Storage per Broker: 1500 GB
Total Storage: 4500 GB
The XL cluster is designed for enterprise-level applications and extremely high-throughput environments. It offers the highest performance and storage capacity, ensuring that even the most demanding applications can run smoothly and efficiently.
IONOS Event Streams for Apache Kafka allows you to choose the cluster size that best fits your current needs while providing the flexibility to scale as your requirements evolve. You can start with a smaller cluster and easily upgrade to a larger size as your data volumes and processing demands increase. This flexibility ensures that you can optimize costs while maintaining the ability to grow your Kafka deployment seamlessly.
To determine the best cluster size for your needs, consider your application’s data throughput, processing requirements, and storage demands. For more detailed guidance, please refer to our Cluster Sizing Guide or contact our support team for personalized assistance. By selecting the appropriate cluster size, you can ensure that your Kafka deployment is both cost-effective and capable of meeting your application’s performance requirements.
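As a rough starting point for the storage dimension only, the total-storage figures from the configurations above can be used to find the smallest size that covers a given requirement. Throughput and processing demands should be weighed alongside this; the helper below is a sketch, not an official sizing tool:

```python
# Total storage per cluster size in GB, taken from the configurations above.
TOTAL_STORAGE_GB = {"XS": 585, "S": 750, "M": 1200, "L": 2400, "XL": 4500}

def smallest_size_for(storage_gb: float) -> str:
    """Return the smallest cluster size whose total storage covers the need.

    Storage is only one dimension; throughput and processing requirements
    should also inform the final choice.
    """
    for size, capacity in TOTAL_STORAGE_GB.items():
        if capacity >= storage_gb:
            return size
    raise ValueError("storage requirement exceeds the largest cluster size")

print(smallest_size_for(600))   # -> S
print(smallest_size_for(2000))  # -> L
```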
Retrieves a specific user associated with a Kafka cluster, including access certificates in the metadata.
GET /clusters/{clusterId}/users/{userId}/access
The `GET /clusters/{clusterId}/users/{userId}/access` endpoint retrieves a specific user identified by `userId` from the Kafka cluster identified by `clusterId`. The response includes detailed metadata about the user, including creation and modification timestamps, ownership information, and current operational state. Access credentials such as `certificateAuthority`, `privateKey`, and `certificate` are also provided to facilitate secure communication with the Kafka cluster. Use this endpoint to manage and obtain detailed information about users' credentials within your Kafka infrastructure.
`clusterId` (string, path, required): The UUID of the Kafka cluster from which to retrieve the user. Example: `/clusters/e69b22a5-8fee-56b1-b6fb-4a07e4205ead/users/{userId}/access`
`userId` (string, path, required): The UUID of the user whose credentials and metadata are to be retrieved. Example: `d11db12c-2625-5664-afd4-a3599731b5af`
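A common follow-up is to write the returned credentials to files so a TLS client can reference them. The sketch below assumes the credential fields sit in the response's metadata as named above; the embedded PEM strings stand in for real certificate material:

```python
import pathlib

# Sketch: persist the credentials returned by the user-access endpoint so a
# TLS client can load them as files. `response` stands in for the parsed JSON
# body; the PEM strings here are truncated placeholders.
response = {
    "metadata": {
        "certificateAuthority": "-----BEGIN CERTIFICATE-----\n...",
        "certificate": "-----BEGIN CERTIFICATE-----\n...",
        "privateKey": "-----BEGIN PRIVATE KEY-----\n...",
    }
}

for field, filename in [
    ("certificateAuthority", "ca.crt"),  # verifies the brokers
    ("certificate", "user.crt"),         # authenticates this user
    ("privateKey", "user.key"),          # keep this file permission-restricted
]:
    pathlib.Path(filename).write_text(response["metadata"][field])
```

The resulting `ca.crt`, `user.crt`, and `user.key` files can then be referenced from a Kafka client's TLS configuration.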
Allows you to delete a Kafka cluster based on its ID.
DELETE /clusters/{clusterId}
The `DELETE /clusters/{clusterId}` endpoint initiates the deletion of a Kafka cluster identified by its unique UUID (`clusterId`). Upon successful deletion, the endpoint returns a `202 Accepted` status code, indicating that the cluster deletion process has been initiated.
This action permanently removes the specified Kafka cluster and all associated resources. Use caution when invoking this endpoint as it cannot be undone.
Use this endpoint to manage and decommission Kafka clusters within your environment, ensuring efficient resource utilization and lifecycle management.
202 Accepted: The request to delete the cluster was successful.
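Because deletion is irreversible, it can help to assemble and inspect the request before sending it. The base URL and token below are placeholders; the cluster UUID is the example value used elsewhere on this page:

```python
from urllib.request import Request

# Build (but do not send) the deletion request. Deletion is permanent, so
# double-check the cluster ID first. API_BASE and the token are placeholders.
API_BASE = "https://api.example.ionos.com"
cluster_id = "e69b22a5-8fee-56b1-b6fb-4a07e4205ead"  # example UUID from this page

req = Request(
    f"{API_BASE}/clusters/{cluster_id}",
    headers={"Authorization": "Bearer <token>"},
    method="DELETE",
)
# Sending this request returns 202 Accepted once deletion has been initiated.
```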
With this endpoint you can retrieve a list of Kafka clusters based on specified filters.
GET /clusters
The `GET /clusters` endpoint retrieves a collection of Kafka clusters based on specified filters. Use the `filter.name` parameter to search for clusters containing a specific name (case insensitive). Use the `filter.state` parameter to filter clusters based on their current state, such as `AVAILABLE`.
This endpoint provides essential information about each cluster, including its ID, metadata, properties, and connections. Use the returned data to manage and monitor Kafka clusters within your environment effectively.
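The filter parameters are ordinary query-string parameters, so they can be composed with the standard library; both are optional and may be combined:

```python
from urllib.parse import urlencode

# Compose the optional filter query string for listing clusters.
params = {"filter.name": "my-kafka-cluster", "filter.state": "AVAILABLE"}
url = "/clusters?" + urlencode(params)
print(url)  # -> /clusters?filter.name=my-kafka-cluster&filter.state=AVAILABLE
```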
This endpoint lets you fetch a list of all Kafka topics within a specified Kafka cluster.
GET /clusters/{clusterId}/topics
The `GET /clusters/{clusterId}/topics` endpoint retrieves a collection of all Kafka topics within the specified Kafka cluster identified by `clusterId`. Each topic includes detailed metadata such as creation and modification dates, ownership details, and current operational state. Topic properties like `name`, `replicationFactor`, `numberOfPartitions`, and `logRetention` settings are also provided.
Use this endpoint to fetch and monitor all Kafka topics within your environment, enabling efficient management and monitoring of data streams and event processing.
With this endpoint you can retrieve details of a specific Kafka topic within a specified Kafka cluster.
GET /clusters/{clusterId}/topics/{topicId}
The `GET /clusters/{clusterId}/topics/{topicId}` endpoint retrieves detailed information about a specific Kafka topic identified by `topicId` within the Kafka cluster specified by `clusterId`. The response includes metadata such as creation and modification dates, ownership details, and current operational state. Additionally, topic properties such as `name`, `replicationFactor`, `numberOfPartitions`, and `logRetention` settings are provided.
Use this endpoint to fetch specific details of Kafka topics, facilitating effective monitoring and management of individual topics within your Kafka cluster.
This endpoint allows you to retrieve all users associated with a specified Kafka cluster, supporting pagination and optional filters.
GET /clusters/{clusterId}/users
The `GET /clusters/{clusterId}/users` endpoint retrieves a collection of users associated with the Kafka cluster specified by `clusterId`. The response includes a paginated list of user objects, each containing metadata such as creation and modification details, ownership information, and current operational state. Use this endpoint to manage and monitor users within your Kafka cluster efficiently.
Allows you to create a new Kafka topic within a specified Kafka cluster.
POST /clusters/{clusterId}/topics
The `POST /clusters/{clusterId}/topics` endpoint creates a new Kafka topic within the specified Kafka cluster (`clusterId`). The request body must include the name of the topic; the other parameters are optional.
Upon successful creation, the endpoint returns detailed information about the newly created topic, including its ID (id), metadata, and properties. Use this endpoint to dynamically manage Kafka topics within your environment, ensuring efficient data distribution and retention policies.
Path Parameters | Required | Type | Description |
---|---|---|---|
clusterId | Yes | string | The UUID of the Kafka cluster to delete. Example: `/clusters/e69b22a5-8fee-56b1-b6fb-4a07e4205ead` |
Learn to create a Kafka cluster.
Learn to verify the status of a Kafka cluster.
Learn to get a list of all your Kafka clusters.
Learn to delete a Kafka cluster.
Learn to create new topics in your cluster.
Get a list of all topics in the cluster.
Get detailed information about a specific topic.
Get a list of the users in the cluster.
Learn to fetch a list of users and their credentials.
Query Parameters | Required | Type | Description |
---|---|---|---|
filter.name | No | string | Only return Kafka clusters that contain the given name. This filter is case insensitive. Example: `filter.name=my-kafka-cluster` |
filter.state | No | string | Only return Kafka clusters with a given state. Example: `filter.state=AVAILABLE` |
Path Parameters | Required | Type | Description |
---|---|---|---|
clusterId | Yes | string | The UUID of the Kafka cluster from which to retrieve topics. Example: `/clusters/e69b22a5-8fee-56b1-b6fb-4a07e4205ead/topics` |
Path Parameters | Required | Type | Description |
---|---|---|---|
clusterId | Yes | string | The UUID of the Kafka cluster where the topic belongs. Example: `/clusters/e69b22a5-8fee-56b1-b6fb-4a07e4205ead/topics/{topicId}` |
topicId | Yes | string | The UUID of the Kafka topic to retrieve details for. Example: `/clusters/{clusterId}/topics/ae085c4c-3626-5f1d-b4bc-cc53ae8267ce` |
Path Parameters | Required | Type | Description |
---|---|---|---|
clusterId | Yes | string | The UUID of the Kafka cluster from which to retrieve users. Example: `/clusters/e69b22a5-8fee-56b1-b6fb-4a07e4205ead/users` |
Path Parameters | Required | Type | Description |
---|---|---|---|
clusterId | Yes | string | The UUID of the Kafka cluster where the topic will be created. Example: `/clusters/e69b22a5-8fee-56b1-b6fb-4a07e4205ead/topics` |

Body Parameters | Required | Type | Description |
---|---|---|---|
metadata | No | object | Optional metadata for the topic. |
properties | Yes | object | Properties of the topic to be created. |
properties.name | Yes | string | The name of the Kafka topic. |
properties.replicationFactor | No | number | The number of replicas for the topic. This determines the fault tolerance. |
properties.numberOfPartitions | No | number | The number of partitions for the topic. This affects the parallelism and throughput. |
properties.logRetention | No | object | Configuration for log retention policies. |
properties.logRetention.retentionTime | No | number | The retention time for logs in milliseconds. Defaults to 604800000 (7 days). |
properties.logRetention.segmentBytes | No | number | The maximum size of a log segment in bytes before a new segment is rolled. Defaults to 1073741824 (1 GB). |