Using Confluent’s JDBC Connector without installing the entire platform

Reposted from: https://prefrontaldump.wordpress.com/2016/05/02/using-confluents-jdbc-connector-without-installing-the-entire-platform/

I was interested in trying out Confluent’s JDBC connector without installing their entire platform (I’d like to stick to vanilla Kafka as much as possible).  Here are the steps I followed to get it working with SQL Server.

Download Kafka 0.9, untar the archive, and create a directory named connect_libs in the kafka root (kafka_2.10-0.9.0.1/connect_libs).
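A minimal sketch of those steps, assuming the Apache archive still hosts the 0.9.0.1 release at its usual path:

# download and unpack Kafka 0.9.0.1, then create a directory for the connector jars
wget https://archive.apache.org/dist/kafka/0.9.0.1/kafka_2.10-0.9.0.1.tgz
tar -xzf kafka_2.10-0.9.0.1.tgz
mkdir kafka_2.10-0.9.0.1/connect_libs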

Download the Confluent platform and extract the following jars (you should also be able to pull these from Confluent’s Maven repo, though I was unsuccessful; a sketch of that route follows the list):

  • common-config-2.0.1.jar
  • common-metrics-2.0.1.jar
  • common-utils-2.0.1.jar
  • kafka-connect-jdbc-2.0.1.jar
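If you want to retry the Maven route, something like the following should fetch the connector jar; the repository URL and coordinates here are my assumptions, not something verified from the original post:

# hypothetical coordinates; the common-* jars would be fetched the same way
mvn dependency:get \
  -DremoteRepositories=http://packages.confluent.io/maven/ \
  -Dartifact=io.confluent:kafka-connect-jdbc:2.0.1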

Place these jars, along with the SQL Server JDBC driver, in kafka_2.10-0.9.0.1/connect_libs.* Update bootstrap.servers in kafka_2.10-0.9.0.1/config/connect-standalone.properties with your broker list, and create kafka_2.10-0.9.0.1/config/connect-jdbc.properties with the settings to try out:

name=sqlserver-feed
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1

connection.url=jdbc:sqlserver://xxx.xxx.xxx.xxx:1433;databaseName=FeedDB;user=user;password=password
table.whitelist=tblFeedIP,tblFeedURL

mode=timestamp+incrementing
timestamp.column.name=localLastUpdated
incrementing.column.name=id

topic.prefix=stg-

Create the topics stg-tblFeedIP and stg-tblFeedURL on the cluster.
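A sketch of the topic creation using the tooling that ships with Kafka 0.9, assuming a ZooKeeper node at zookeeper:2181 and single-partition topics (adjust both to your cluster):

# in Kafka 0.9, topic creation goes through ZooKeeper
bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --topic stg-tblFeedIP --partitions 1 --replication-factor 1
bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --topic stg-tblFeedURL --partitions 1 --replication-factor 1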

Add the connect_libs directory to the classpath:
export CLASSPATH="connect_libs/*"

And finally, run the connector in standalone mode (make sure you are in the root kafka directory, since the classpath set above is relative):

bin/connect-standalone.sh config/connect-standalone.properties config/connect-jdbc.properties

Then, tail your topics to verify that messages are being produced by the connector.
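For example, with the console consumer that ships with Kafka 0.9 (the ZooKeeper address is a placeholder):

bin/kafka-console-consumer.sh --zookeeper zookeeper:2181 --topic stg-tblFeedIP --from-beginning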

* If you don’t care about cluttering up the default libs directory (kafka_2.10-0.9.0.1/libs), you can also just dump the jars there and not have to worry about setting the classpath.
