IBM Streams Kafka integration

Overview

This module allows a Streams application to subscribe to a Kafka topic as a stream and to publish messages to a Kafka topic from a stream of tuples.

Connection to a Kafka broker

The bootstrap servers of the Kafka broker can be defined using a Streams application configuration or within the Python code by using a dictionary variable. The name of the application configuration or the dictionary must be passed as the kafka_properties parameter to subscribe() or publish(). The minimum set of properties in the application configuration or dictionary contains bootstrap.servers, for example

config              value
bootstrap.servers   host1:port1,host2:port2,host3:port3

Other configs for Kafka consumers or Kafka producers can be added to the application configuration or dictionary. When configurations that apply only to consumers or only to producers are specified, it is recommended to use separate application configurations or dict variables for publish and subscribe.

The consumer and producer configs can be found in the Kafka documentation.

Please note that the underlying SPL toolkit already adjusts several configurations. Review the toolkit operator reference for defaults and adjusted configurations.

Simple connection parameter example:

import streamsx.kafka as kafka
from streamsx.topology.topology import Topology
from streamsx.topology.schema import CommonSchema

consumerProperties = {}
consumerProperties['bootstrap.servers'] = 'kafka-host1.domain:9092,kafka-host2.domain:9092'
consumerProperties['fetch.min.bytes'] = '1024'
consumerProperties['max.partition.fetch.bytes'] = '4194304'
topology = Topology()
kafka.subscribe(topology, 'Your_Topic', consumerProperties, CommonSchema.String)
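
Following the recommendation above to keep producer-specific settings separate, a corresponding producer dictionary can be used with publish(). This is a minimal sketch; the broker addresses, the topic name, and the tuning values (linger.ms, batch.size) are only examples:

import streamsx.kafka as kafka
from streamsx.topology.topology import Topology

producerProperties = {}
producerProperties['bootstrap.servers'] = 'kafka-host1.domain:9092,kafka-host2.domain:9092'
producerProperties['linger.ms'] = '100'      # producer-only tuning, example value
producerProperties['batch.size'] = '32768'   # producer-only tuning, example value

topology = Topology()
to_kafka = topology.source(['Hello', 'World!']).as_string()
kafka.publish(to_kafka, 'Your_Topic', producerProperties)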

When trusted certificates, client certificates, or private keys are required to connect with a Kafka cluster, the function create_connection_properties helps to create stores for the certificates and keys, and to create the corresponding properties.

In IBM Cloud Pak for Data it is also possible to create application configurations for consumer and producer properties. An application configuration is a safe place to store sensitive data. Use the function configure_connection_from_properties to create an application configuration for Kafka properties.

Example with use of an application configuration:

from icpd_core import icpd_util

from streamsx.topology.topology import Topology
from streamsx.topology.schema import CommonSchema
from streamsx.rest import Instance
import streamsx.topology.context

import streamsx.kafka as kafka

topology = Topology('ConsumeFromKafka')

connection_properties = kafka.create_connection_properties(
    bootstrap_servers='kafka-bootstrap.192.168.42.183.nip.io:443',
    #use_TLS=True,
    #enable_hostname_verification=True,
    cluster_ca_cert='/tmp/secrets/cluster_ca_cert.pem',
    authentication = kafka.AuthMethod.SCRAM_SHA_512,
    username = 'user123',
    password = 'passw0rd', # not the very best choice for a password
    topology = topology)

consumer_properties = dict()
# In this example we read only transactionally committed messages
consumer_properties['isolation.level'] = 'read_committed'
# add connection specific properties to the consumer properties
consumer_properties.update(connection_properties)
# get the streams instance in IBM Cloud Pak for Data
instance_cfg = icpd_util.get_service_instance_details(name='instanceName')
instance_cfg[streamsx.topology.context.ConfigParams.SSL_VERIFY] = False
streams_instance = Instance.of_service(instance_cfg)

# create the application configuration
appconfig_name = kafka.configure_connection_from_properties(
    instance=streams_instance,
    name='kafkaConsumerProps',
    properties=consumer_properties,
    description='Consumer properties for authenticated access')

messages = kafka.subscribe(topology, 'mytopic', appconfig_name, CommonSchema.String)

Messages

The schema of the stream defines how messages are handled.

  • CommonSchema.String - Each message is a UTF-8 encoded string.
  • CommonSchema.Json - Each message is a UTF-8 encoded serialized JSON object.
  • StringMessage - structured schema with message and key
  • BinaryMessage - structured schema with message and key
  • StringMessageMeta - structured schema with message, key, and message meta data
  • BinaryMessageMeta - structured schema with message, key, and message meta data

No other formats are supported.
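
For example, a structured schema from streamsx.kafka.schema can be used with subscribe() to receive keyed messages together with their metadata. This is a minimal sketch; the topic name and the application configuration name 'kafka_props' are placeholders:

from streamsx.topology.topology import Topology
from streamsx.kafka.schema import Schema
import streamsx.kafka as kafka

topology = Topology()
# each tuple carries the attributes message, key, topic, partition, offset, and messageTimestamp
keyed_messages = kafka.subscribe(topology, 'Your_Topic', 'kafka_props', Schema.StringMessageMeta)
keyed_messages.print()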

Sample

A simple 'Hello World' example of a Streams application that publishes to a topic and, in the same application, consumes the same topic:

from streamsx.topology.topology import Topology
from streamsx.topology.schema import CommonSchema
from streamsx.topology.context import submit, ContextTypes
import streamsx.kafka as kafka
import time

def delay(v):
    time.sleep(5.0)
    return True

topology = Topology('KafkaHelloWorld')

to_kafka = topology.source(['Hello', 'World!'])
to_kafka = to_kafka.as_string()
# delay tuple by tuple
to_kafka = to_kafka.filter(delay)

# Publish a stream to Kafka using TEST topic, the Kafka servers
# (bootstrap.servers) are configured in the application configuration 'kafka_props'.
kafka.publish(to_kafka, 'TEST', 'kafka_props')

# Subscribe to same topic as a stream
from_kafka = kafka.subscribe(topology, 'TEST', 'kafka_props', CommonSchema.String)

# You'll find 'Hello' and 'World!' in the stdout log file:
from_kafka.print()

submit(ContextTypes.DISTRIBUTED, topology)
# The Streams job is kept running.

class streamsx.kafka.AuthMethod

Defines authentication methods for Kafka.

NONE = 0

No authentication

New in version 1.3.

PLAIN = 2

PLAIN, or SASL/PLAIN, is a simple username/password authentication mechanism that is typically used with TLS for encryption to implement secure authentication. SASL/PLAIN should only be used with SSL as transport layer to ensure that clear passwords are not transmitted on the wire without encryption.

New in version 1.3.

SCRAM_SHA_512 = 3

Authentication with SASL/SCRAM-SHA-512 method. A username and a password are required.

New in version 1.3.

TLS = 1

Mutual TLS authentication with X.509 certificates. A client certificate and a client private key are required.

New in version 1.3.
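
A minimal sketch of configuring mutual TLS authentication with create_connection_properties(); the bootstrap address and the certificate and key file locations are placeholders:

import streamsx.kafka as kafka
from streamsx.topology.topology import Topology

topology = Topology('MutualTLSExample')
connection_properties = kafka.create_connection_properties(
    bootstrap_servers='kafka-bootstrap.example.com:9093',  # placeholder address
    cluster_ca_cert='/tmp/secrets/cluster_ca.pem',         # CA of the broker certificates
    authentication=kafka.AuthMethod.TLS,
    client_cert='/tmp/secrets/client_cert.pem',            # client certificate (public part)
    client_private_key='/tmp/secrets/client_key.pem',      # corresponding private key
    topology=topology)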

streamsx.kafka.download_toolkit(url=None, target_dir=None)

Downloads the latest Kafka toolkit from GitHub.

Example for updating the Kafka toolkit for your topology with the latest toolkit from GitHub:

import streamsx.kafka as kafka
# download Kafka toolkit from GitHub
kafka_toolkit_location = kafka.download_toolkit()
# add the toolkit to topology
streamsx.spl.toolkit.add_toolkit(topology, kafka_toolkit_location)

Example for updating the topology with a specific version of the Kafka toolkit using a URL:

import streamsx.kafka as kafka
url201 = 'https://github.com/IBMStreams/streamsx.kafka/releases/download/v2.0.1/com.ibm.streamsx.kafka-2.0.1.tgz'
kafka_toolkit_location = kafka.download_toolkit(url=url201)
streamsx.spl.toolkit.add_toolkit(topology, kafka_toolkit_location)
Parameters:
  • url (str) – Link to toolkit archive (*.tgz) to be downloaded. Use this parameter to download a specific version of the toolkit.
  • target_dir (str) – the directory into which the toolkit is unpacked. If a relative path is given, the path is appended to the system temporary directory, for example to /tmp on Unix/Linux systems. If target_dir is None, a location relative to the system temporary directory is chosen.
Returns:

the location of the downloaded Kafka toolkit

Return type:

str

Note

This function requires an outgoing Internet connection.

New in version 1.3.

streamsx.kafka.create_connection_properties(bootstrap_servers, use_TLS=True, enable_hostname_verification=True, cluster_ca_cert=None, authentication=<AuthMethod.NONE: 0>, client_cert=None, client_private_key=None, username=None, password=None, topology=None)

Create Kafka properties that can be used to connect a consumer or a producer with a Kafka cluster when certificates and keys or authentication is required. The resulting properties can be used for example in configure_connection_from_properties(), subscribe(), or publish().

When certificates are given, the function creates a truststore and/or a keystore, which are added as file dependencies to the topology; the topology must not be None in this case.

Certificates and keys are given as strings. An argument can be the name of an existing PEM formatted file, whose content is read, or the PEM formatted certificate or key itself. The PEM format is a text format with base64 encoded content between anchors:

-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----

or:

-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----

Example, in which the brokers are configured for SCRAM-SHA-512 authentication over a TLS connection, and where the server certificates are not signed by a public CA. In the example, the CA certificate for the cluster is stored in the file /tmp/secrets/cluster_ca.crt:

from streamsx.topology.topology import Topology
from streamsx.topology.schema import CommonSchema
import streamsx.kafka as kafka

consumerTopology = Topology('ConsumeFromKafka')

consumerProperties = dict()

# In this example we read only transactionally committed messages
consumerProperties['isolation.level'] = 'read_committed'
connectionProps = kafka.create_connection_properties(
    bootstrap_servers = 'kafka-cluster1-kafka-bootstrap-myproject.192.168.42.183.nip.io:443',
    #use_TLS = True,
    #enable_hostname_verification = True,
    cluster_ca_cert = '/tmp/secrets/cluster_ca.crt',
    authentication = kafka.AuthMethod.SCRAM_SHA_512,
    username = 'user123',
    password = 'passw0rd', # not the very best choice for a password
    topology = consumerTopology)

# add connection specific properties to the consumer properties
consumerProperties.update(connectionProps)
messages = kafka.subscribe(consumerTopology, 'mytopic', consumerProperties, CommonSchema.String)
Parameters:
  • bootstrap_servers (str) – The bootstrap address of the Kafka cluster. This is a single hostname:TCPport pair or a comma-separated list of hostname:TCPport pairs, for example 'server1:9093', or 'server1:9093,server2:9093,server3:9093'.
  • use_TLS (bool) – When True (default), the client connects via encrypted connections with the Kafka brokers. In this case it may also be necessary to provide the CA certificate of the cluster within the cluster_ca_cert parameter. When the parameter is False, the traffic to the Kafka brokers is not encrypted.
  • enable_hostname_verification (bool) –

    When True (default), the hostname verification of the presented server certificate is enabled. For example, some methods to expose a containerized Kafka cluster do not support hostname verification. In these cases hostname verification must be disabled.

    The parameter is ignored when use_TLS is False.

  • cluster_ca_cert (str|list) –

    The CA certificate of the broker certificates, or a list of CA certificates. This certificate is required when the cluster does not use certificates signed by a public CA. The parameter must be the name of an existing PEM formatted file or the PEM formatted certificate itself, or a list of filenames or PEM strings. These certificates are treated as trusted and go into a truststore.

    A trusted certificate must have a text format like this:

    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
    

    The parameter is ignored when use_TLS is False.

  • authentication (AuthMethod) –

    The authentication method used by the brokers.

    • None, AuthMethod.NONE: clients are not authenticated. The parameters client_cert, client_private_key, username, and password are ignored.
    • AuthMethod.PLAIN: PLAIN, or SASL/PLAIN, is a simple username/password authentication mechanism that is typically used with TLS for encryption to implement secure authentication. SASL/PLAIN should only be used with SSL as transport layer to ensure that clear passwords are not transmitted on the wire without encryption.
    • AuthMethod.TLS: clients are authenticated with client certificates. This authentication method can only be used when the client uses a TLS connection, i.e. when use_TLS is True. The client certificate must be trusted by the server and must be given as the client_cert parameter together with the corresponding private key as the client_private_key parameter.
    • AuthMethod.SCRAM_SHA_512: SCRAM (Salted Challenge Response Authentication Mechanism) is an authentication protocol that can establish authentication using usernames and passwords. It can be used with or without a TLS client connection. This authentication method requires that the parameters username and password are used.
  • client_cert (str) –

    The client certificate, i.e. the public part of a key pair signed by an authority that is trusted by the brokers. The parameter must be the name of an existing PEM formatted file or the PEM formatted certificate itself. The client certificate must have a text format like this:

    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
    

    The parameter is ignored when authentication is not ‘TLS’.

  • client_private_key (str) –

    The private part of the RSA key pair on which the client certificate is based. The parameter must be the name of an existing PEM formatted file or the PEM formatted key itself. The private key must have a text format like this:

    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
    

    The parameter is ignored when authentication is not ‘TLS’.

  • username (str) – The username for SCRAM authentication. The parameter is ignored when authentication is not ‘SCRAM-SHA-512’.
  • password (str) – The password for SCRAM authentication. The parameter is ignored when authentication is not ‘SCRAM-SHA-512’.
  • topology (Topology) – The topology to which a truststore and/or keystore are added as file dependencies. It must be the Topology instance in which the created Kafka properties are used. The parameter must not be None when one of the parameters cluster_ca_cert, client_cert, or client_private_key is not None.
Returns:

Kafka properties

Return type:

dict

Note

When certificates are needed, this function can be used only with the streamsx.kafka toolkit version 1.9.2 and higher. The function will add a toolkit dependency to the topology. When the toolkit dependency cannot be satisfied, use a newer toolkit version. A newer version of the toolkit can be downloaded from GitHub with download_toolkit().

Warning

The returned properties can contain sensitive data. Storing the properties in an application configuration is a good idea to avoid exposing sensitive information. On IBM Cloud Pak for Data use configure_connection_from_properties() to do this.

New in version 1.3.

streamsx.kafka.configure_connection(instance, name, bootstrap_servers, ssl_protocol=None, enable_hostname_verification=True)

Configures IBM Streams for a connection with a Kafka broker.

Creates an application configuration object containing the required properties with connection information. The application configuration contains the following properties:

  • bootstrap.servers
  • security.protocol (when ssl_protocol is not None)
  • ssl.protocol (when ssl_protocol is not None)
  • ssl.endpoint.identification.algorithm (when enable_hostname_verification is False)

Example for creating a configuration for a Streams instance with connection details:

from streamsx.rest import Instance
import streamsx.topology.context
from streamsx.kafka import configure_connection
from icpd_core import icpd_util

cfg = icpd_util.get_service_instance_details(name='your-streams-instance')
cfg[streamsx.topology.context.ConfigParams.SSL_VERIFY] = False
instance = Instance.of_service(cfg)
bootstrap_servers = 'kafka-server-1.domain:9093,kafka-server-2.domain:9093,kafka-server-3.domain:9093'
app_cfg_name = configure_connection(instance, 'my_app_cfg1', bootstrap_servers, 'TLSv1.2')
Parameters:
  • instance (streamsx.rest_primitives.Instance) – IBM Streams instance object.
  • name (str) – Name of the application configuration.
  • bootstrap_servers (str) – Comma-separated list of hostname:TCPport pairs of the Kafka bootstrap servers, for example 'server1:9093', or 'server1:9093,server2:9093,server3:9093'.
  • ssl_protocol (str) – One of None, ‘TLS’, ‘TLSv1’, ‘TLSv1.1’, or ‘TLSv1.2’. If None is used, TLS is not configured. If unsure, use ‘TLS’, which is Kafka’s default.
  • enable_hostname_verification (bool) – True (default) enables hostname verification of the server certificate, False disables hostname verification. The parameter is ignored, when ssl_protocol is None.
Returns:

Name of the application configuration, i.e. the same value as given in the name parameter

Return type:

str

Warning

The function can be used only in IBM Cloud Pak for Data

New in version 1.1.

streamsx.kafka.configure_connection_from_properties(instance, name, properties, description=None)

Configures IBM Streams for a connection with a Kafka broker.

Creates an application configuration object containing the required properties with connection information. The application configuration contains the properties given as key-value pairs in the properties dictionary. The keys must be valid consumer or producer configurations.

Example for creating a configuration for a Streams instance with connection details:

from streamsx.rest import Instance
import streamsx.topology.context
from streamsx.kafka import create_connection_properties, configure_connection_from_properties
from icpd_core import icpd_util

cfg = icpd_util.get_service_instance_details(name='your-streams-instance')
cfg[streamsx.topology.context.ConfigParams.SSL_VERIFY] = False
instance = Instance.of_service(cfg)
bootstrap_servers = 'kafka-server-1.domain:9093,kafka-server-2.domain:9093,kafka-server-3.domain:9093'
consumer_properties = create_connection_properties(bootstrap_servers=bootstrap_servers)
app_cfg_name = configure_connection_from_properties(instance, 'my_app_cfg1', consumer_properties, 'KafkaConsumerConfig')
Parameters:
  • instance (streamsx.rest_primitives.Instance) – IBM Streams instance object.
  • name (str) – Name of the application configuration.
  • properties (dict) – Properties containing valid consumer or producer configurations.
  • description (str) – Description of the application configuration. If no description is given, a description is generated.
Returns:

Name of the application configuration, i.e. the same value as given in the name parameter

Return type:

str

Warning

The function can be used only in IBM Cloud Pak for Data

New in version 1.3.

streamsx.kafka.publish(stream, topic, kafka_properties, name=None)

Publish messages to a topic in a Kafka broker.

Adds a Kafka producer where each tuple on stream is published as a Kafka message.

Parameters:
  • stream (Stream) – Stream of tuples to be published as messages.
  • topic (str) – Topic to publish messages to.
  • kafka_properties (dict|str) – Properties containing the producer configurations, at minimum the bootstrap.servers property. When a string is given, it is the name of the application configuration, which contains producer configs. Must not be None.
  • name (str) – Producer name in the Streams context, defaults to a generated name.
Returns:

Stream termination.

Return type:

streamsx.topology.topology.Sink
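
A minimal usage sketch; the broker address and the topic name are placeholders:

import streamsx.kafka as kafka
from streamsx.topology.topology import Topology

topology = Topology('PublishExample')
lines = topology.source(['msg 1', 'msg 2']).as_string()
producer_props = {'bootstrap.servers': 'localhost:9092'}  # placeholder broker address
kafka.publish(lines, 'Your_Topic', producer_props, name='PublishToKafka')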

streamsx.kafka.subscribe(topology, topic, kafka_properties, schema, group=None, name=None)

Subscribe to messages from a Kafka broker for a single topic.

Adds a Kafka consumer that subscribes to a topic and converts each message to a stream tuple.

Parameters:
  • topology (Topology) – Topology that will contain the stream of messages.
  • topic (str) – Topic to subscribe messages from.
  • kafka_properties (dict|str) – Properties containing the consumer configurations, at minimum the bootstrap.servers property. When a string is given, it is the name of the application configuration, which contains consumer configs. Must not be None.
  • schema (StreamSchema) – Schema for returned stream.
  • group (str) – Kafka consumer group identifier. When not specified, it defaults to the job name with the topic appended, separated by an underscore.
  • name (str) – Consumer name in the Streams context, defaults to a generated name.
Returns:

Stream containing messages.

Return type:

streamsx.topology.topology.Stream
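
A minimal usage sketch that also sets the consumer group; the broker address, the topic name, and the group identifier are placeholders:

import streamsx.kafka as kafka
from streamsx.topology.topology import Topology
from streamsx.topology.schema import CommonSchema

topology = Topology('SubscribeExample')
consumer_props = {'bootstrap.servers': 'localhost:9092'}  # placeholder broker address
messages = kafka.subscribe(topology, 'Your_Topic', consumer_props, CommonSchema.String,
                           group='my_consumer_group', name='SubscribeFromKafka')
messages.print()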

Schemas for streams created with the subscribe() function, and usable for streams terminated with publish(). All of these message types are keyed messages.

class streamsx.kafka.schema.Schema

Structured stream schemas for keyed messages for subscribe(), and for streams that are published by publish() to an Event Streams topic.

The schemas StringMessage and BinaryMessage have the attributes message and key. They vary in the type of the message attribute and can be used for subscribe() and for the stream published with publish().

The schemas StringMessageMeta and BinaryMessageMeta have the attributes message, key, topic, partition, offset, and messageTimestamp. They vary in the type of the message attribute and can be used for subscribe() and publish().

All schemas defined in this class are instances of streamsx.topology.schema.StreamSchema.

The following sample uses structured schemas for publishing messages with keys to a possibly partitioned topic in a Kafka broker. Then, it creates a consumer group that subscribes to the topic, and processes the received messages in parallel channels partitioned by the message key:

from streamsx.topology.topology import Topology
from streamsx.topology.context import submit, ContextTypes
from streamsx.topology.topology import Routing
from streamsx.topology.schema import StreamSchema
from streamsx.kafka.schema import Schema
import streamsx.kafka as kafka

import random
import time
import json
from datetime import datetime


# Define a callable source for data that we push into Event Streams
class SensorReadingsSource(object):
    def __call__(self):
        # This is just an example using generated data.
        # Here you could instead connect to a database,
        # read from a data set, or open a file.
        i = 0
        # wait until the consumer is ready before we start creating data
        time.sleep(20.0)
        while(i < 100000):
            time.sleep(0.001)
            i = i + 1
            sensor_id = random.randint(1, 100)
            reading = {}
            reading["sensor_id"] = "sensor_" + str(sensor_id)
            reading["value"] = random.random() * 3000
            reading["ts"] = int(datetime.now().timestamp())
            yield reading


# parses the JSON in the message and adds the attributes to the tuple
def flat_message_json(tpl):
    messageAsDict = json.loads(tpl['message'])
    tpl.update(messageAsDict)
    return tpl


# calculate a hash code of a string in a consistent way
# needed for partitioned parallel streams
def string_hashcode(s):
    h = 0
    for c in s:
        h = (31 * h + ord(c)) & 0xFFFFFFFF
    return ((h + 0x80000000) & 0xFFFFFFFF) - 0x80000000


topology = Topology('KafkaGroupParallel')

#
# the producer part
#
# create the data and map them to the attributes 'message' and 'key' of the
# 'Schema.StringMessage' schema for Kafka, so that we have messages with keys
sensorStream = topology.source(
    SensorReadingsSource(),
    "RawDataSource"
    ).map(
        func=lambda reading: {'message': json.dumps(reading),
                              'key': reading['sensor_id']},
        name="ToKeyedMessage",
        schema=Schema.StringMessage)
# assume we are running a Kafka broker at localhost:9092
producer_configs = dict()
producer_configs['bootstrap.servers'] = 'localhost:9092'
kafkaSink = kafka.publish(
    sensorStream,
    topic="ThreePartitions",
    kafka_properties=producer_configs,
    name="SensorPublish")


#
# the consumer side
#
# subscribe, create a consumer group with 3 consumers
consumer_configs = dict()
consumer_configs['bootstrap.servers'] = 'localhost:9092'

consumerSchema = Schema.StringMessageMeta
received = kafka.subscribe(
    topology,
    topic="ThreePartitions",
    schema=consumerSchema,
    group='my_consumer_group',
    kafka_properties=consumer_configs,
    name="SensorSubscribe"
    ).set_parallel(3).end_parallel()

# start a different parallel region partitioned by message key,
# so that each key always goes into the same parallel channel
receivedParallelPartitioned = received.parallel(
    5,
    routing=Routing.HASH_PARTITIONED,
    func=lambda x: string_hashcode(x['key']))

# schema extension, done in a way that works with both Python 2.7 and 3
flattenedSchema = consumerSchema.extend(
    StreamSchema('tuple<rstring sensor_id, float64 value, int64 ts>'))

receivedParallelPartitionedFlattened = receivedParallelPartitioned.map(
    func=flat_message_json,
    name='JSON2Attributes',
    schema=flattenedSchema)

# validate by removing negative and zero values from the stream,
# pass only positive values and timestamps
receivedValidated = receivedParallelPartitionedFlattened.filter(
    lambda tup: (tup['value'] > 0) and (tup['ts'] > 0),
    name='Validate')

# end parallel processing and print the merged result stream to stdout log
receivedValidated.end_parallel().print()

submit(ContextTypes.DISTRIBUTED, topology)

BinaryMessage = <streamsx.topology.schema.StreamSchema object>

Stream schema with message and key, where the message is a binary object (sequence of bytes), and the key is a string.

The schema defines the following attributes:

  • message(bytes) - the message content
  • key(str) - the key for partitioning

This schema can be used for subscribe() and for streams that are published by publish().

New in version 1.2.
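
A minimal sketch of publishing binary payloads with this schema; the payloads, the topic name, and the broker address are placeholders:

from streamsx.topology.topology import Topology
from streamsx.kafka.schema import Schema
import streamsx.kafka as kafka

topology = Topology('BinaryPublishExample')
# map each payload to the 'message' (bytes) and 'key' (str) attributes of the schema
binary_msgs = topology.source([b'\x00\x01', b'\x02\x03']).map(
    lambda payload: {'message': payload, 'key': 'sensor_1'},
    schema=Schema.BinaryMessage)
kafka.publish(binary_msgs, 'Your_Topic', {'bootstrap.servers': 'localhost:9092'})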

BinaryMessageMeta = <streamsx.topology.schema.StreamSchema object>

Stream schema with message, key, and message meta data, where the message is a binary object (sequence of bytes), and the key is a string. This schema can be used for subscribe().

The schema defines the following attributes:

  • message(bytes) - the message content
  • key(str) - the key for partitioning
  • topic(str) - the Event Streams topic
  • partition(int) - the topic partition number (32 bit)
  • offset(int) - the offset of the message within the topic partition (64 bit)
  • messageTimestamp(int) - the message timestamp in milliseconds since epoch (64 bit)

New in version 1.2.

StringMessage = <streamsx.topology.schema.StreamSchema object>

Stream schema with message and key, both being strings.

The schema defines the following attributes:

  • message(str) - the message content
  • key(str) - the key for partitioning

This schema can be used for subscribe() and for streams that are published by publish().

New in version 1.2.

StringMessageMeta = <streamsx.topology.schema.StreamSchema object>

Stream schema with message, key, and message meta data, where both message and key are strings. This schema can be used for subscribe().

The schema defines the following attributes:

  • message(str) - the message content
  • key(str) - the key for partitioning
  • topic(str) - the Event Streams topic
  • partition(int) - the topic partition number (32 bit)
  • offset(int) - the offset of the message within the topic partition (64 bit)
  • messageTimestamp(int) - the message timestamp in milliseconds since epoch (64 bit)

New in version 1.2.
