High Throughput and Low Latency: Kafka is designed for high-throughput, low-latency data streaming, making it suitable for real-time data pipelines and event-driven architectures.
Durability and Reliability: Kafka provides data durability by replicating messages across multiple brokers, so data is not lost even if a broker fails.
Stream Processing: Kafka Streams API allows for real-time stream processing directly within Kafka, enabling complex transformations and aggregations on the fly.
Multi-tenancy: Kafka supports multi-tenancy by isolating different workloads and clients within the same cluster, providing resource quotas and access control.
Log Compaction: Kafka offers log compaction to retain only the latest value for a key, which is useful for scenarios requiring long-term storage of a compacted dataset.
Partitioning and Ordering: Kafka allows data partitioning for parallel processing and maintains the order of messages within a partition, ensuring consistency and ordered delivery.
Upgrades: IONOS Event Streams for Apache Kafka supports user-defined maintenance windows and performs seamless upgrades with minimal downtime by adding and removing nodes dynamically.
Easy Configuration: Configure your Kafka clusters quickly and easily with IONOS Event Streams for Apache Kafka. Use the intuitive graphical interface, API commands, or SDKs to create topics, manage brokers, and assign permissions.
High Availability: Kafka's architecture ensures high availability with automatic failover and replication across multiple nodes, minimizing downtime and ensuring continuous data flow.
Security: Secure communication between clients and Kafka brokers using TLS encryption, along with robust authentication and authorization mechanisms, ensures data protection.
Programmatic Resource Management: Easily deploy and manage Kafka clusters in cloud environments using APIs, SDKs, and configuration management tools.
Resources: IONOS Event Streams for Apache Kafka offers dedicated resources, including CPU, storage, and RAM, with SSD storage for optimal performance.
Network: The service supports private LANs, ensuring secure and isolated network communication for your Kafka clusters.
As with our other services, IONOS Event Streams for Apache Kafka is fully integrated into the Data Center Designer and has a dedicated API.
You can provision a robust cluster composed of multiple redundant nodes designed to maintain continuous operation, even in the event of individual node failures. This setup includes automatic failover mechanisms to ensure high availability and minimize downtime. For more comprehensive information on configuring and managing these features, please see our High Availability and Scaling documentation.
Explore the powerful features and benefits of IONOS Event Streams for Apache Kafka. This fully managed service offers high throughput, low latency, scalability, and robust security for all your data streaming and real-time analytics needs. Learn more about how IONOS Event Streams for Apache Kafka can transform your data infrastructure on the Features and Benefits page.
IONOS Kafka is suitable for a variety of use cases, such as real-time data processing, event-driven architectures, log aggregation, and monitoring, wherever high throughput, fault tolerance, and real-time streaming data processing are required. Visit our Use Cases section for more information.
The IONOS Event Streams for Apache Kafka service is designed to support the needs of your applications and development cycles. At this time, we support version 3.7.0, ensuring a stable and optimized experience for all users.
IONOS offers a variety of cluster sizes tailored to different application needs, from development to enterprise-level deployments. Each cluster size is designed with specific hardware configurations to ensure optimal performance and capacity. For a detailed breakdown of our cluster sizes and their respective configurations, including node count, cores, RAM, and storage, please refer to our comprehensive Cluster Sizes section.
Our cloud-hosted Kafka service is designed to provide high availability and low-latency access to your data, regardless of where your applications are hosted. We offer Kafka clusters in multiple geographical regions to ensure optimal performance and compliance with local data regulations. The following locations are currently available:
Berlin, Germany (de-txl)
Frankfurt, Germany (de-fra)
Security is a paramount consideration for any cloud-hosted service, and the IONOS Event Streams for Apache Kafka offering is designed with multiple layers of security to protect your data and ensure compliance with industry standards. We provide a comprehensive suite of security features to safeguard your Kafka clusters against unauthorized access, data breaches, and other security threats.
Encrypted Communication: Our Kafka service supports Transport Layer Security (TLS) to encrypt data in transit. This ensures that all communication between clients and Kafka brokers, as well as between Kafka brokers themselves, is securely encrypted, preventing eavesdropping and man-in-the-middle attacks.
Beyond encryption, TLS also provides robust authentication mechanisms. By using TLS certificates, we ensure that both the client and server can verify each other's identity, adding an extra layer of security to prevent unauthorized access to your Kafka cluster.
IONOS Event Streams for Apache Kafka is a fully managed Apache Kafka service. Apache Kafka is an open-source event streaming platform capable of handling trillions of daily events. With IONOS Event Streams for Apache Kafka, users can easily set up, scale, and operate Kafka clusters without the need to manage the underlying infrastructure.
At its core, the goal of Event Streams for Apache Kafka is to democratize data streaming and make it accessible to organizations of all sizes and industries. We recognize that Kafka's capabilities are vast, but its successful deployment and ongoing management have traditionally required a deep pool of specialized expertise. With this product, we aim to bridge the gap between Kafka's potential and its practical utilization. By offering managed Event Streams for Apache Kafka service in the cloud, we empower businesses to focus on innovation and value creation rather than the intricacies of infrastructure management.
Event Streams for Apache Kafka
Explore the key use cases to implement Apache Kafka Streams.
Get started with creating and managing Apache Kafka Streams via the DCD.
Get started with creating and managing Apache Kafka Streams via the API.
In various industries, the IONOS Kafka service plays a pivotal role in enabling scalable and real-time data management solutions. Below, we explore two compelling use cases where organizations leverage IONOS Kafka to handle complex data challenges, achieve operational efficiency, and drive actionable insights.
Overview: A global e-commerce platform relies on our cloud-hosted Kafka service to manage and process real-time data streams efficiently. With millions of transactions occurring daily across various regions, the platform needs a robust solution to handle data ingestion, processing, and analysis in real-time.
Challenge: The e-commerce platform faces challenges in aggregating and processing a vast amount of real-time data from multiple sources, including customer interactions, inventory updates, and transaction logs. Traditional database systems struggle to handle the volume and velocity of incoming data, leading to latency issues and scalability limitations.
Solution: By leveraging the IONOS Event Streams for Apache Kafka service, the platform establishes a scalable and fault-tolerant data pipeline. They deploy Kafka clusters in multiple regions to ensure low-latency data processing closer to their users. Producers within their ecosystem, such as mobile apps and web services, stream data into Kafka topics in real-time. Kafka's distributed architecture and partitioning capabilities enable parallel data processing, ensuring high throughput and low latency for consumers downstream.
Implementation:
Cluster Configuration: They opt for an XL-sized Kafka cluster with multiple nodes, high CPU, RAM, and storage capacity to handle peak data loads.
Stream Processing: Apache Kafka Streams API enables real-time processing and analytics directly within the Kafka ecosystem. They implement complex event processing (CEP) to derive actionable insights, such as personalized recommendations and fraud detection, in real-time.
Benefits:
Scalability: Kafka's horizontal scaling capabilities allow the platform to handle increasing data volumes and peak traffic periods without compromising performance.
Real-time Insights: By processing data in real-time, the platform gains actionable insights faster, enhancing customer experience and operational efficiency.
Reliability: IONOS Kafka's fault-tolerant architecture ensures data durability and continuous availability, reducing the risk of data loss or downtime.
Cost Efficiency: Optimized resource allocation and efficient data processing translate to cost savings compared to traditional data processing solutions.
Overview: A smart city initiative utilizes our IONOS Kafka service to manage and analyze IoT data generated by sensors deployed across the city. The initiative aims to improve urban planning, public safety, and resource management through data-driven insights and real-time monitoring.
Challenge: The smart city faces challenges in managing and processing vast amounts of real-time data generated by IoT devices, including traffic sensors, environmental monitors, and public safety cameras. They require a scalable and reliable solution to ingest, process, and analyze this diverse data in real-time to make informed decisions and respond to events promptly.
Solution: Our Kafka service provides a robust foundation for their IoT data management and analytics platform. They deploy Kafka clusters in a distributed architecture across the city's data centers and edge locations to ensure proximity to data sources and reduce latency. IoT devices stream data continuously into Kafka topics, where it is processed and analyzed in real-time to derive actionable insights.
Implementation:
Real-time Analytics: They leverage Kafka Streams and Apache Flink for stream processing and complex event processing (CEP) to detect anomalies, predict traffic patterns, and optimize resource allocation in real-time.
Integration with AI/ML: They integrate Kafka with AI/ML pipelines to perform predictive analytics and automate decision-making processes based on real-time insights.
Security and Compliance: Kafka's robust security features, including TLS encryption, authentication, and authorization mechanisms, ensure data confidentiality and compliance with regulatory requirements.
Benefits:
Operational Efficiency: Real-time data processing and analytics enable proactive management of city resources, improving efficiency and responsiveness to citizen needs.
Enhanced Safety: Real-time monitoring and predictive analytics help identify potential safety hazards, enabling quick response and mitigation measures.
Scalability: Kafka's horizontal scaling capabilities accommodate the growth of IoT devices and data volume, ensuring scalability without compromising performance.
Data-driven Decision Making: By harnessing real-time insights, the smart city makes data-driven decisions that optimize infrastructure usage and enhance quality of life for residents.
IONOS Event Streams for Apache Kafka offers a range of cluster sizes to meet the diverse needs of different applications, from development and testing environments to large-scale production deployments. Each cluster size is designed with specific hardware configurations to provide the right balance of performance, capacity, and cost-efficiency.
Node Count: 3
Cores per Node: 1
RAM per Kafka Broker: 2 GB
Storage per Broker: 195 GB
Total Storage: 585 GB
The XS cluster is ideal for development, testing, and small-scale applications that require a modest amount of resources. It provides sufficient capacity to handle light workloads while maintaining cost efficiency.
Node Count: 3
Cores per Node: 2
RAM per Kafka Broker: 4 GB
Storage per Broker: 250 GB
Total Storage: 750 GB
The S cluster is suitable for small to medium-sized applications that need moderate resources. It offers enhanced performance and storage capacity compared to the XS cluster, making it a good choice for applications with higher throughput and storage requirements.
Node Count: 3
Cores per Node: 2
RAM per Kafka Broker: 8 GB
Storage per Broker: 400 GB
Total Storage: 1200 GB
The M cluster is designed for medium-sized applications that demand higher performance and greater storage capacity. It provides a balanced configuration that can handle increased data volumes and more intensive processing tasks.
Node Count: 3
Cores per Node: 4
RAM per Kafka Broker: 16 GB
Storage per Broker: 800 GB
Total Storage: 2400 GB
The L cluster is well-suited for large applications with high throughput and substantial storage needs. With more cores and RAM per broker, this cluster size delivers superior performance and can support more demanding workloads.
Node Count: 3
Cores per Node: 8
RAM per Kafka Broker: 32 GB
Storage per Broker: 1500 GB
Total Storage: 4500 GB
The XL cluster is designed for enterprise-level applications and extremely high-throughput environments. It offers the highest performance and storage capacity, ensuring that even the most demanding applications can run smoothly and efficiently.
IONOS Event Streams for Apache Kafka allows you to choose the cluster size that best fits your current needs while providing the flexibility to scale as your requirements evolve. You can start with a smaller cluster and easily upgrade to a larger size as your data volumes and processing demands increase. This flexibility ensures that you can optimize costs while maintaining the ability to grow your Kafka deployment seamlessly.
To determine the best cluster size for your needs, consider your application’s data throughput, processing requirements, and storage demands. For more detailed guidance, please refer to our Cluster Sizing Guide or contact our support team for personalized assistance. By selecting the appropriate cluster size, you can ensure that your Kafka deployment is both cost-effective and capable of meeting your application’s performance requirements.
High availability is critical for applications that require uninterrupted data flow and processing. Our Kafka service is designed to deliver robust fault tolerance and automatic recovery mechanisms to keep your data pipelines resilient.
You can provision a cluster with multiple redundant nodes, ensuring that the failure of a single node does not impact the overall availability of the service. This redundancy is pivotal in maintaining data integrity and continuous service operation.
Our service includes automatic failover capabilities, which promptly redirect traffic to healthy nodes in the event of a failure. This mechanism minimizes downtime and ensures your applications remain unaffected by individual node outages.
You can set the replication factor for your Kafka topics to determine how many copies of each message are stored across different brokers. A higher replication factor enhances fault tolerance by ensuring that even if one or more brokers fail, your data remains available.
Our Kafka service provides extensive configuration options to fine-tune your deployment.
You can configure the number of partitions for each topic, allowing for parallel processing and increasing throughput. More partitions enable better load balancing across consumers, improving the overall performance and scalability of your Kafka cluster.
The retention time determines how long messages are retained in a topic before being discarded. You can adjust the retention time to suit your application's data lifecycle needs, ensuring that data is available for as long as necessary without overwhelming your storage capacity.
Along with retention time, you can set the retention size, which limits the total amount of data stored for a topic. Once the size limit is reached, older messages are purged to make room for new ones. This setting helps manage storage usage and costs effectively.
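These settings correspond to standard Kafka topic configurations. As a sketch, the following call adjusts retention for an existing topic with the Kafka CLI tools; the broker address, port, client properties file, and topic name are placeholders:

```bash
# Keep messages for 1 day (retention.ms) and cap the topic's log at
# ~1 GB per partition (retention.bytes) before old segments are purged.
kafka-configs.sh --bootstrap-server <broker-address>:9093 \
  --command-config admin.properties \
  --entity-type topics --entity-name my-topic \
  --alter --add-config retention.ms=86400000,retention.bytes=1073741824
```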
The Kafka Cluster Deletion feature allows users to remove a Kafka cluster from their environment. Follow the steps below to delete a Kafka cluster.
Warning:
Deleting a Kafka cluster is irreversible. Once a cluster is deleted, all the data and configurations contained within it will be permanently removed.
Ensure that you have backed up any necessary data before deleting the cluster.
Log in to the DCD with your username and password.
Go to Menu > Analytics > Event Streams for Apache Kafka.
Identify the Kafka cluster you wish to delete from the list. Ensure that you are selecting the correct cluster.
Click the three vertical dots in the Options column for the desired Kafka cluster, then click Delete from the drop-down menu.
A confirmation dialog will appear, asking you to confirm the deletion. Ensure that you understand this action is irreversible and will permanently delete the cluster and all its data. Confirm the deletion to proceed.
Result: The cluster will be permanently deleted.
Learn how to set up a Kafka cluster.
Learn how to view the list of Kafka clusters.
Learn how to create a Kafka topic.
Learn how to delete a Kafka topic.
Learn how to delete a Kafka cluster.
The Kafka Topic Deletion feature allows users to remove a Kafka topic from the specified Kafka cluster. To delete a Kafka topic, follow these steps:
Warning:
Deleting a Kafka topic is irreversible. Once a topic is deleted, all the data contained within it will be permanently removed.
Ensure that you have backed up any necessary data before deleting the topic.
Go to Menu > Analytics > Event Streams for Apache Kafka.
Select the cluster you want to delete a topic from.
Within the cluster, navigate to the Topics tab to view the list of available Kafka topics.
In the topic list, identify the topic you wish to delete and select it for deletion.
Result: The topic will be permanently deleted.
Before setting up a Kafka cluster, ensure that you are working within a provisioned Virtual Data Center (VDC) that contains at least one VM in a private LAN; this VM will access the cluster over the private LAN. The VM you create is counted against the quota allocated in your contract.
To create a Kafka cluster, follow these steps:
Log in to the DCD with your username and password.
Go to Menu > Analytics > Event Streams for Apache Kafka.
Click Create cluster.
Configure your cluster properties.
Cluster Name: Enter a unique name for your cluster. This name will help you identify the cluster in the dashboard.
Kafka Version: Select the Kafka version you wish to use. Currently, only version 3.7.0 is supported.
Cluster Size: Choose the appropriate size for your cluster:
XS (Extra Small): Suitable for small-scale testing and development environments.
S (Small): Ideal for slightly larger workloads with moderate throughput needs.
M (Medium): Designed for medium-sized workloads with significant data volume and consistent throughput requirements.
L (Large): Suitable for large-scale applications that require high throughput and robust fault tolerance.
XL (Extra Large): Tailored for enterprise-grade, high-performance environments with extremely high throughput and storage needs.
For more details on sizing, refer to the Cluster Sizes section.
Location: Select the region for your cluster. At present, only Germany / Berlin is available.
Choose a datacenter and a LAN.
Datacenter: Select the datacenter and region where you want to create your cluster. The region you choose determines the geographical placement of your Kafka cluster, affecting latency and compliance with local regulations.
Datacenter LAN: Your cluster will be connected to this network.
Configure the addresses for your Kafka brokers. These addresses will be used by clients to connect to the Kafka cluster. Ensure that the addresses are correctly configured to allow for seamless communication between the brokers and your clients.
The IP addresses you choose must belong to the same subnet as the LAN you selected in the previous step. This ensures proper network routing and communication within the cluster. To find out your subnet IP, follow the instructions on the DCD screen.
Info: The estimated costs are displayed based on your input. This is only an estimate; certain variables, such as traffic, are not considered.
Click Save to deploy your cluster.
Result: Your cluster is now being deployed. You can monitor its progress by returning to the Event Streams for Apache Kafka menu.
The Kafka Topic Creation screen allows users to create a new Kafka topic within a specified Kafka cluster. Below are the detailed steps and descriptions of the parameters users need to configure.
Prerequisite: A Kafka cluster must already be deployed and in the Available state. Ensure that your Apache Kafka cluster is up and running and accessible from the Event Streams for Apache Kafka Clusters Overview window.
Log in to the DCD with your username and password.
Go to Menu > Analytics > Event Streams for Apache Kafka.
From the list of clusters, select the Kafka cluster where you want to create the new topic.
Click the Topics tab.
Click the Create topic button to open the topic creation dialog.
In the Create Topic dialog, configure your topic by providing the following values:
Name: Enter the name of the new Kafka topic.
Replication Factor: Specify the replication factor, which determines the number of copies of the data that will be maintained.
Number of Partitions: Define the number of partitions for the topic. Partitions determine how the data is distributed across the brokers.
Retention Time (ms): Set the retention time in milliseconds. This determines how long messages are retained in the topic. If set to -1, no time limit is applied.
Retention Segment Size (B): Specify the retention segment size in bytes. This is the size at which log segments are rolled over.
Note: The value must be greater than or equal to 14 bytes.
Click Create to finalize the topic creation process.
Result: The Kafka topic is successfully created.
With this endpoint you can retrieve a list of Kafka clusters based on specified filters.
GET /clusters
The `GET /clusters` endpoint retrieves a collection of Kafka clusters based on specified filters. Use the `filter.name` parameter to search for clusters containing a specific name (case insensitive). Use the `filter.state` parameter to filter clusters based on their current state, such as `AVAILABLE`.
This endpoint provides essential information about each cluster, including its ID, metadata, properties, and connections. Use the returned data to manage and monitor Kafka clusters within your environment effectively.
| Query Parameters | Required | Type | Description |
| --- | --- | --- | --- |
| `filter.name` | No | string | Only return Kafka clusters that contain the given name. This filter is case insensitive. Example: `filter.name=my-kafka-cluster` |
| `filter.state` | No | string | Only return Kafka clusters with a given state. Example: `filter.state=AVAILABLE` |
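For example, a filtered listing might look like the following curl call; the regional API host and Bearer-token authentication shown here are assumptions, so verify them against the API reference:

```bash
# List clusters whose name contains "my-kafka" and that are AVAILABLE.
curl -s \
  -H "Authorization: Bearer $IONOS_TOKEN" \
  "https://kafka.de-fra.ionos.com/clusters?filter.name=my-kafka&filter.state=AVAILABLE"
```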
Allows you to delete a Kafka cluster based on its ID.
DELETE /clusters/{clusterId}
The `DELETE /clusters/{clusterId}` endpoint initiates the deletion of a Kafka cluster identified by its unique UUID (`clusterId`). Upon successful deletion, the endpoint returns a `202 Accepted` status code, indicating that the cluster deletion process has been initiated.
This action permanently removes the specified Kafka cluster and all associated resources. Use caution when invoking this endpoint as it cannot be undone.
Use this endpoint to manage and decommission Kafka clusters within your environment, ensuring efficient resource utilization and lifecycle management.
| Path Parameters | Required | Type | Description |
| --- | --- | --- | --- |
| `clusterId` | Yes | string | The UUID of the Kafka cluster to delete. Example: `/clusters/e69b22a5-8fee-56b1-b6fb-4a07e4205ead` |
202 Accepted: The request to delete the cluster was successful.
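A minimal curl sketch of the deletion call, reusing the documented example UUID (host and authentication are assumptions, as in the listing example):

```bash
# Irreversibly delete the cluster; 202 Accepted confirms the request.
curl -s -X DELETE \
  -H "Authorization: Bearer $IONOS_TOKEN" \
  "https://kafka.de-fra.ionos.com/clusters/e69b22a5-8fee-56b1-b6fb-4a07e4205ead"
```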
The Kafka Cluster List feature allows users to view the available Kafka clusters within their environment. Follow the steps below to view the Kafka clusters.
Log in to the DCD with your username and password.
Go to Menu > Analytics > Event Streams for Apache Kafka.
Result: You will be presented with a list of Kafka clusters. The list contains the following details:
Name: The name of the Kafka cluster.
State: The current status of the Kafka cluster. For example, Available, Degraded, etc.
Location: The data center location where the Kafka cluster is deployed.
Size: The size of the Kafka cluster.
Version: The version of Kafka the cluster is running.
Info: Click on any cluster name in the list to view more detailed information about that specific Kafka cluster.
Info: Use the Options column, represented by three vertical dots, to open a drop-down menu with available management actions for the cluster, such as Delete.
With this endpoint you can retrieve details of a specific Kafka topic within a specified Kafka cluster.
GET /clusters/{clusterId}/topics/{topicId}
The `GET /clusters/{clusterId}/topics/{topicId}` endpoint retrieves detailed information about a specific Kafka topic identified by `topicId` within the Kafka cluster specified by `clusterId`. The response includes metadata such as creation and modification dates, ownership details, and current operational state. Additionally, topic properties such as `name`, `replicationFactor`, `numberOfPartitions`, and `logRetention` settings are provided.
Use this endpoint to fetch specific details of Kafka topics, facilitating effective monitoring and management of individual topics within your Kafka cluster.
| Path Parameters | Required | Type | Description |
| --- | --- | --- | --- |
| `clusterId` | Yes | string | The UUID of the Kafka cluster where the topic belongs. |
| `topicId` | Yes | string | The UUID of the Kafka topic to retrieve details for. |
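For example (placeholder host, token, and IDs):

```bash
# Fetch the metadata and properties of a single topic.
curl -s \
  -H "Authorization: Bearer $IONOS_TOKEN" \
  "https://kafka.de-fra.ionos.com/clusters/$CLUSTER_ID/topics/$TOPIC_ID"
```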
This endpoint allows you to retrieve all users associated with a specified Kafka cluster, supporting pagination and optional filters.
GET /clusters/{clusterId}/users
The `GET /clusters/{clusterId}/users` endpoint retrieves a collection of users associated with the Kafka cluster specified by the `clusterId`. The response is a list of user objects containing one element, the admin user of the Kafka cluster. The Kafka cluster admin user has full administrative access and is created automatically when the cluster is created. The user object contains metadata such as creation and modification details, ownership information, and current operational state. Use this endpoint to manage and monitor users efficiently within your Kafka cluster.
| Path Parameters | Required | Type | Description |
| --- | --- | --- | --- |
| `clusterId` | Yes | string | The UUID of the Kafka cluster from which to retrieve users. |
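For example (placeholder host, token, and cluster ID):

```bash
# List the users of a cluster; the response contains the admin user.
curl -s \
  -H "Authorization: Bearer $IONOS_TOKEN" \
  "https://kafka.de-fra.ionos.com/clusters/$CLUSTER_ID/users"
```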
You can create a new Kafka cluster with specified configurations.
POST /clusters
The `POST /clusters` endpoint allows you to create a new Kafka cluster with specified properties. The `name`, `version`, `size`, and `connections` fields are required. The response includes the ID, metadata, and properties of the newly created cluster, along with its current state and broker addresses.
Use this endpoint to provision a Kafka cluster tailored to your application's requirements, ensuring seamless integration and efficient data management.
| Body Parameters | Required | Type | Description |
| --- | --- | --- | --- |
| `metadata` | No | object | Optional metadata for the cluster. |
| `properties` | Yes | object | Properties of the cluster to be created. |
| `properties.name` | Yes | string | The name of the Kafka cluster. |
| `properties.version` | Yes | string | The version of Kafka to use for the cluster. |
| `properties.size` | Yes | string | The size of the Kafka cluster. |
| `properties.connections` | No | array | List of connections for the cluster. |
| `properties.connections.datacenterId` | Yes | string | The UUID of the data center where the cluster will be created. |
| `properties.connections.lanId` | Yes | string | The LAN ID where the cluster will be connected. |
| `properties.connections.brokerAddresses` | Yes | array | List of broker addresses for the cluster. |
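A sketch of a creation request assembled from the fields above; the host, authentication, and exact nesting of the connection fields are assumptions to verify against the API reference:

```bash
# Create a size-S, version 3.7.0 cluster attached to a private LAN.
curl -s -X POST \
  -H "Authorization: Bearer $IONOS_TOKEN" \
  -H "Content-Type: application/json" \
  "https://kafka.de-fra.ionos.com/clusters" \
  -d '{
    "properties": {
      "name": "my-kafka-cluster",
      "version": "3.7.0",
      "size": "S",
      "connections": [{
        "datacenterId": "<datacenter-uuid>",
        "lanId": "<lan-id>",
        "brokerAddresses": ["192.168.1.101/24", "192.168.1.102/24", "192.168.1.103/24"]
      }]
    }
  }'
```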
This endpoint lets you fetch a list of all Kafka topics within a specified Kafka cluster.
GET /clusters/{clusterId}/topics
The `GET /clusters/{clusterId}/topics` endpoint retrieves a collection of all Kafka topics within the specified Kafka cluster identified by `clusterId`. Each topic includes detailed metadata such as creation and modification dates, ownership details, and current operational state. Topic properties like `name`, `replicationFactor`, `numberOfPartitions`, and `logRetention` settings are also provided.
Use this endpoint to fetch and monitor all Kafka topics within your environment, enabling efficient management and monitoring of data streams and event processing.
| Path Parameters | Required | Type | Description |
| --- | --- | --- | --- |
| `clusterId` | Yes | string | The UUID of the Kafka cluster from which to retrieve topics. |
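For example (placeholder host, token, and cluster ID):

```bash
# List all topics in the given cluster.
curl -s \
  -H "Authorization: Bearer $IONOS_TOKEN" \
  "https://kafka.de-fra.ionos.com/clusters/$CLUSTER_ID/topics"
```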
Allows you to retrieve the details of a specific Kafka cluster based on its ID.
GET /clusters/{clusterId}
The `GET /clusters/{clusterId}` endpoint retrieves detailed information about a specific Kafka cluster identified by its unique UUID (`clusterId`). This endpoint returns metadata including creation and modification dates, ownership details, and current operational state. The properties section provides specific details such as the cluster name, version, size, and connection information including broker addresses and network configurations.
Use this endpoint to fetch comprehensive details about a Kafka cluster within your environment, facilitating effective management and monitoring of Kafka resources.
| Path Parameters | Required | Type | Description |
| --- | --- | --- | --- |
| `clusterId` | Yes | string | The UUID of the Kafka cluster to retrieve. |
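For example (placeholder host, token, and cluster ID):

```bash
# Retrieve one cluster's metadata, properties, and current state.
curl -s \
  -H "Authorization: Bearer $IONOS_TOKEN" \
  "https://kafka.de-fra.ionos.com/clusters/$CLUSTER_ID"
```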
Learn to create a Kafka cluster.
Learn to verify the status of a Kafka cluster.
Learn to get a list of all your Kafka clusters.
Learn to delete a Kafka cluster.
Learn to create new topics in your cluster.
Get a list of all topics in the cluster.
Get detailed information about a specific topic.
Get a list of the users in the cluster.
Learn to fetch credentials for your Kafka cluster.
Learn to configure access to your Kafka cluster.
The following information describes how to use credentials to configure access to the Kafka cluster.
Communication with your Kafka cluster is TLS secured, meaning both the client and the Kafka cluster authenticate each other. The client authenticates the server by verifying the server's certificate, and the server authenticates the client by verifying the client's certificate. As the Kafka cluster does not have publicly signed certificates, you must validate them with the cluster's certificate authority. Authentication happens via mutual TLS (mTLS). Therefore, your cluster maintains a client certificate authority to sign authenticated user certificates.
To connect and authenticate to your Kafka cluster, you must fetch the two required certificates and a key from the user's API endpoint. Below are the steps to get the required certificates and key with curl commands for a cluster created in the Frankfurt (de-fra) region.
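The following is a hypothetical sketch of fetching the credentials and splitting them into files; the host, token, and JSON field paths (metadata.certificateAuthority, metadata.certificate, metadata.privateKey) are assumptions, so adapt them to the actual response:

```bash
# Fetch the admin user's access credentials for a de-fra cluster.
curl -s \
  -H "Authorization: Bearer $IONOS_TOKEN" \
  "https://kafka.de-fra.ionos.com/clusters/$CLUSTER_ID/users/$USER_ID/access" \
  > access.json

# Split the response into the CA certificate, client certificate, and key.
jq -r '.metadata.certificateAuthority' access.json > ca.crt
jq -r '.metadata.certificate' access.json > user.crt
jq -r '.metadata.privateKey' access.json > user.key
```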
You will need different file formats for the certificates depending on the consumer/producer's implementation. The following sections show how to create and use them with the Kafka Command-Line Interface (CLI) Tools.
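For the Java-based Kafka CLI tools, the PEM files typically need to be converted into a keystore and truststore first. A sketch using standard openssl and keytool commands (file names and the changeit password are placeholders):

```bash
# Bundle the client certificate and private key into a PKCS#12 keystore.
openssl pkcs12 -export -in user.crt -inkey user.key \
  -out user.p12 -name kafka-admin -passout pass:changeit

# Import the cluster's CA certificate into a JKS truststore.
keytool -importcert -file ca.crt -alias kafka-ca \
  -keystore truststore.jks -storepass changeit -noprompt
```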
Your admin.properties file should look similar to the following (a sketch assuming the PKCS#12 keystore and JKS truststore created above; paths and passwords are placeholders):
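```properties
# Mutual TLS settings for the Kafka CLI tools (placeholder paths/passwords).
security.protocol=SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=/path/to/user.p12
ssl.keystore.type=PKCS12
ssl.keystore.password=changeit
```

You can then pass the file to any of the CLI tools, for example:

```bash
# List topics over the mTLS-secured connection.
kafka-topics.sh --bootstrap-server <broker-address>:9093 \
  --command-config admin.properties --list
```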
Allows you to create a new Kafka topic within a specified Kafka cluster.
POST /clusters/{clusterId}/topics
The `POST /clusters/{clusterId}/topics` endpoint creates a new Kafka topic within the specified Kafka cluster (`clusterId`). The request body must include the name of the topic; all other parameters are optional.
Upon successful creation, the endpoint returns detailed information about the newly created topic, including its ID (`id`), metadata, and properties. Use this endpoint to dynamically manage Kafka topics within your environment, ensuring efficient data distribution and retention policies.
| Path Parameters | Required | Type | Description |
| --- | --- | --- | --- |
| `clusterId` | Yes | string | The UUID of the Kafka cluster where the topic will be created. Example: `/clusters/e69b22a5-8fee-56b1-b6fb-4a07e4205ead/topics` |

| Body Parameters | Required | Type | Description |
| --- | --- | --- | --- |
| `metadata` | No | object | Optional metadata for the topic. |
| `properties` | Yes | object | Properties of the topic to be created. |
| `properties.name` | Yes | string | The name of the Kafka topic. |
| `properties.replicationFactor` | No | number | The number of replicas for the topic. This determines the fault tolerance. |
| `properties.numberOfPartitions` | No | number | The number of partitions for the topic. This affects the parallelism and throughput. |
| `properties.logRetention` | No | object | Configuration for log retention policies. |
| `properties.logRetention.retentionTime` | No | number | The retention time for logs in milliseconds. Defaults to 604800000 (7 days). |
| `properties.logRetention.segmentBytes` | No | number | The maximum size of a log segment in bytes before a new segment is rolled. Defaults to 1073741824 (1 GB). |
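A curl sketch built from the body parameters above (host and authentication are assumptions, as in the other examples):

```bash
# Create a topic with 6 partitions, 3 replicas, and default retention.
curl -s -X POST \
  -H "Authorization: Bearer $IONOS_TOKEN" \
  -H "Content-Type: application/json" \
  "https://kafka.de-fra.ionos.com/clusters/$CLUSTER_ID/topics" \
  -d '{
    "properties": {
      "name": "my-topic",
      "replicationFactor": 3,
      "numberOfPartitions": 6,
      "logRetention": {
        "retentionTime": 604800000,
        "segmentBytes": 1073741824
      }
    }
  }'
```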
This endpoint retrieves the credentials of a specific user of the Kafka cluster. The response includes the relevant access certificates and a key within the metadata.
GET /clusters/{clusterId}/users/{userId}/access
The `GET /clusters/{clusterId}/users/{userId}/access` endpoint retrieves the credentials necessary to configure access to your Kafka cluster. The credentials belong to the Kafka administrator user, giving administrators access to the Kafka cluster. The response includes detailed metadata about the access credentials of the admin user, including creation and modification timestamps, ownership information, and current operational state. Access credentials including certificate authority, private key, and certificate are provided to facilitate secure communication with the Kafka cluster. Use this endpoint to manage and obtain detailed information about Kafka admin user credentials within your Kafka infrastructure.
| Path Parameters | Required | Type | Description |
| --- | --- | --- | --- |
| `clusterId` | Yes | string | The UUID of the Kafka cluster used to retrieve user data. Example: `/clusters/e69b22a5-8fee-56b1-b6fb-4a07e4205ead/users` |
| `userId` | Yes | string | The UUID of the user whose credentials and metadata are to be retrieved, in this case the admin user of the Kafka cluster. Example: `d11db12c-2625-5664-afd4-a3599731b5af` |
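For example, using the documented example IDs (host and token are assumptions):

```bash
# Retrieve the admin user's certificates and key.
curl -s \
  -H "Authorization: Bearer $IONOS_TOKEN" \
  "https://kafka.de-fra.ionos.com/clusters/e69b22a5-8fee-56b1-b6fb-4a07e4205ead/users/d11db12c-2625-5664-afd4-a3599731b5af/access"
```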