Netty solutions for TCP packet sticking and splitting

When I first studied packet sticking/splitting it was genuinely hard to grasp; it took me several passes before it clicked. I am writing it down to reinforce my own memory, and I hope it helps you too.

This chapter covers just two simple tools: LineBasedFrameDecoder and StringDecoder.

Background

1. Why TCP sticking/splitting occurs

Suppose a client sends two packets, D1 and D2, to a server over some time window. Because the number of bytes the server reads in one pass is not deterministic, five cases can arise.

Case 1: the server reads the two packets in two separate reads, D1 then D2, with no sticking or splitting.

Case 2: the server receives both packets in a single read, with D1 and D2 glued together. This is known as TCP packet sticking.

Case 3: the server needs two reads: the first returns all of D1 plus part of D2, and the second returns the rest of D2. This is known as TCP packet splitting.

Case 4: the server needs two reads: the first returns part of D1, and the second returns the rest of D1 plus all of D2. This is also TCP packet splitting.

Case 5: if the server's TCP receive window is very small while D1 and D2 are large, the server may well need many reads to receive D1 and D2 completely, with splitting occurring several times along the way.
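The root cause in all five cases is that TCP is a byte stream with no message boundaries. A tiny stand-alone sketch makes this concrete (plain Java, with in-memory streams standing in for the socket; the class and method names are made up for illustration, this is not from the example code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Two logical packets, D1 and D2, are written separately, but the receiver
// just sees one byte stream: the boundary between them is gone, which is
// exactly why sticking can happen.
public class StreamBoundaryDemo {
    public static String receiveAll(byte[] d1, byte[] d2) throws Exception {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        wire.write(d1);   // first send
        wire.write(d2);   // second send
        // One read drains everything that has arrived; the reader cannot
        // tell where D1 ends and D2 begins.
        byte[] received = new ByteArrayInputStream(wire.toByteArray()).readAllBytes();
        return new String(received, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(receiveAll("D1".getBytes(StandardCharsets.UTF_8),
                                      "D2".getBytes(StandardCharsets.UTF_8)));
        // prints "D1D2" -- the two packets stick together
    }
}
```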

2. Strategies for solving the sticking problem

Because the transport layer cannot understand the business data above it, it cannot guarantee that packets will not be split or merged. The problem can only be solved in the design of the application-layer protocol. The mainstream protocols in the industry boil down to the following approaches:

1. Fixed-length messages (e.g. every frame is exactly 200 bytes; pad with spaces if shorter).

2. A carriage-return/line-feed delimiter at the end of each packet (e.g. the FTP protocol).

3. Split the message into a header and a body, where the header carries a field holding the total message length; a common design uses an int32 as the first header field for the total length.

4. More sophisticated application-layer protocols.
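Strategy 3 deserves a closer look. Here is a minimal, Netty-free sketch of the idea (illustrative only; class and method names are invented): each message is prefixed with an int32 length, so the reader can peel complete frames off a stuck-together stream and recognise a trailing half packet:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Length-field framing: a 4-byte big-endian length precedes each body.
public class LengthFieldFraming {
    public static byte[] encode(String msg) {
        byte[] body = msg.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.allocate(4 + body.length).putInt(body.length).put(body).array();
    }

    public static List<String> decode(byte[] stream) {
        List<String> messages = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(stream);
        while (buf.remaining() >= 4) {
            int len = buf.getInt();
            if (buf.remaining() < len) {
                break; // half packet: a real decoder would rewind and wait for more bytes
            }
            byte[] body = new byte[len];
            buf.get(body);
            messages.add(new String(body, StandardCharsets.UTF_8));
        }
        return messages;
    }
}
```

Two encoded messages concatenated into one buffer decode back into exactly two messages, which is what a framing decoder automates for you.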

With the basics of TCP sticking/splitting covered, let's walk through an example of how to use Netty's half-packet decoders to solve the problem.

Worked example (the code can be downloaded via the link at the end)

First, a case where failing to account for sticking breaks the application.

The broken case: sticking not handled

Key server-side code

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

import java.util.Date;

public class TimeServerHandler extends SimpleChannelInboundHandler<Object> {

    private int counter;

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Object msg)
            throws Exception {
        ByteBuf buf = (ByteBuf) msg;
        byte[] req = new byte[buf.readableBytes()];
        buf.readBytes(req);
        String body = new String(req, "UTF-8")
                .substring(0, req.length - System.getProperty("line.separator").length());
        /*
         * On every message, ++counter records how many messages arrived, then an
         * answer is sent to the client. By design we expect 100 messages (the
         * client sends 100), but only 2 arrive (the first carrying 57 requests,
         * the second 43, exactly 100 in total) -- clear evidence of sticking.
         * Once sticking happens, the application no longer works correctly.
         */
        System.out.println("The Time Server receive order:" + body + "; the counter is:" + (++counter));

        String currentTime = "Query Time Order".equalsIgnoreCase(body)
                ? new Date(System.currentTimeMillis()).toString() : "Bad Order";
        currentTime = currentTime + System.getProperty("line.separator");
        ByteBuf resp = Unpooled.copiedBuffer(currentTime.getBytes());
        ctx.write(resp);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause)
            throws Exception {
        ctx.close();
    }
}

Server-side walkthrough: on every message received, ++counter records the message count and a reply is written back. By design, the total number of messages the server receives should match what the client sent, and each request, after stripping the trailing line separator, should read "Query Time Order". Now the client code.

Client code

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

import java.util.logging.Logger;

public class TimeClientHandler extends SimpleChannelInboundHandler<Object> {

    private static final Logger logger = Logger.getLogger(TimeClientHandler.class.getName());
    private int counter;
    private byte[] req;

    public TimeClientHandler() {
        // System.getProperty("line.separator") is the platform line separator
        req = ("Query Time Order" + System.getProperty("line.separator")).getBytes();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause)
            throws Exception {
        logger.warning("Unexpected exception from downstream:" + cause.getMessage());
        ctx.close(); // release resources
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        ByteBuf message = null;
        // The client sends 100 requests, so in theory the server should receive
        // 100 messages; in practice it receives only 2 -- sticking has occurred.
        for (int i = 0; i < 100; i++) {
            message = Unpooled.buffer(req.length);
            message.writeBytes(req);
            ctx.writeAndFlush(message);
        }
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Object msg)
            throws Exception {
        /*
         * The client counts the server's replies; we expect 100, but only 1
         * arrives. That is no surprise: the server only sent 2 replies, and
         * those 2 replies were themselves stuck together on the way back, so
         * the client reads them as a single message.
         */
        ByteBuf buf = (ByteBuf) msg;
        byte[] req = new byte[buf.readableBytes()];
        buf.readBytes(req);
        String body = new String(req, "UTF-8");
        System.out.println("Now is:" + body + "; the counter is:" + (++counter));
    }
}

Walkthrough:

Once the client connects to the server, channelActive sends 100 messages in a loop, flushing after each one so every message is written to the channel. By design the server should therefore receive 100 messages.

channelRead0 increments ++counter for every server reply it receives; by design we should print 100 replies.

Now the results:

Server output

The Time Server receive order:Query Time Order
Query Time Order
... (54 identical "Query Time Order" lines omitted) ...
Query Time Ord; the counter is:1
The Time Server receive order:
Query Time Order
... (40 identical "Query Time Order" lines omitted) ...
Query Time Order
Query Time Order; the counter is:2

Look closely at all that output: the line "Query Time Ord; the counter is:1" is where counter became 1, i.e. the server treated everything up to that point as a single message. This is TCP sticking, with the first read containing 57 requests. Likewise, "Query Time Order; the counter is:2" marks the second read, which carried the remaining 43 requests, for 100 in total.

Client output

Now is:Bad Order
Bad Order
; the counter is:1

By design the client should receive 100 replies, but it got only 1. That is easy to explain: the server received only 2 requests, so it sent back only 2 replies, and because sticking happened on the client side as well, those 2 replies arrived as a single message.

Solving TCP sticking with LineBasedFrameDecoder and StringDecoder

The server-side change:

public class ChildChannelHandler extends
        ChannelInitializer<SocketChannel> {

    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        // The key change is the next two lines, which add two decoders
        ch.pipeline().addLast(new LineBasedFrameDecoder(1024));
        ch.pipeline().addLast(new StringDecoder());
        ch.pipeline().addLast(new TimeServerHandler());
    }
}

Compared with the previous ChildChannelHandler, two decoders have been added: LineBasedFrameDecoder and StringDecoder.

Don't worry, both are explained later on.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

import java.util.Date;

public class TimeServerHandler extends SimpleChannelInboundHandler<Object> {

    private int counter;

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Object msg)
            throws Exception {
        String body = (String) msg;
        System.out.println("The Time Server receive order:" + body + "; the counter is:" + (++counter));

        String currentTime = "Query Time Order".equalsIgnoreCase(body)
                ? new Date(System.currentTimeMillis()).toString() : "Bad Order";
        currentTime = currentTime + System.getProperty("line.separator");
        ByteBuf resp = Unpooled.copiedBuffer(currentTime.getBytes());
        ctx.writeAndFlush(resp);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause)
            throws Exception {
        ctx.close();
    }
}

If you look closely, channelRead0 no longer needs to decode the incoming message by hand; the code is much cleaner.

Now the client:

public class TimeClient {
    public void connect(int port,String host) throws Exception{
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group).channel(NioSocketChannel.class)
            .option(ChannelOption.TCP_NODELAY,true)
            .handler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new LineBasedFrameDecoder(1024));
                    ch.pipeline().addLast(new StringDecoder());
                    ch.pipeline().addLast(new TimeClientHandler());
                }
            });
            ChannelFuture f = b.connect(host,port).sync();
            f.channel().closeFuture().sync();
        } catch (Exception e) {
            e.printStackTrace(); // don't swallow connection errors silently
        } finally {
            group.shutdownGracefully();
        }

    }
    /**
     * Entry point
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception{
        int port = 9090; // server port
        new TimeClient().connect(port, "localhost");
    }
}

As on the server, initChannel adds the two decoders, LineBasedFrameDecoder and StringDecoder.

Now the TimeClientHandler class:

public class TimeClientHandler extends SimpleChannelInboundHandler<Object> {
    private static final Logger logger = Logger.getLogger(TimeClientHandler.class.getName());
    private int counter;
    private byte[] req;

    public TimeClientHandler(){
        // System.getProperty("line.separator") is the platform line separator
         req  =  ("Query Time Order"+System.getProperty("line.separator")).getBytes();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause)
            throws Exception {
        logger.warning("Unexpected exception from downstream:"+cause.getMessage());
        ctx.close();
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        ByteBuf message = null;
        for(int i=0;i<100;i++){
            message = Unpooled.buffer(req.length);
            message.writeBytes(req);
            ctx.writeAndFlush(message);
        }
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Object msg)
            throws Exception {

        String body = (String)msg;
        System.out.println("Now is:"+body+"; the counter is:"+(++counter));
    }

}

Notice that the msg received in channelRead0 is already the decoded string reply; compared with before, this is far simpler.

Server output with the sticking/splitting decoders in place

The Time Server receive order:Query Time Order; the counter is:1
The Time Server receive order:Query Time Order; the counter is:2
The Time Server receive order:Query Time Order; the counter is:3
The Time Server receive order:Query Time Order; the counter is:4
The Time Server receive order:Query Time Order; the counter is:5
... (94 identical lines, counters 6 through 99, omitted) ...
The Time Server receive order:Query Time Order; the counter is:100

Client output with the sticking/splitting decoders in place

Now is:Sat Jul 16 11:44:22 CST 2016; the counter is:1
Now is:Sat Jul 16 11:44:22 CST 2016; the counter is:2
Now is:Sat Jul 16 11:44:22 CST 2016; the counter is:3
Now is:Sat Jul 16 11:44:22 CST 2016; the counter is:4
Now is:Sat Jul 16 11:44:22 CST 2016; the counter is:5
... (94 identical lines, counters 6 through 99, omitted) ...
Now is:Sat Jul 16 11:44:22 CST 2016; the counter is:100

This time the results match expectations exactly! LineBasedFrameDecoder and StringDecoder have solved the half-packet reads caused by TCP sticking. As a user, all you have to do is add the half-packet-aware handlers to the ChannelPipeline, as in this example:

ch.pipeline().addLast(new LineBasedFrameDecoder(1024));
ch.pipeline().addLast(new StringDecoder());

Barely any extra code is needed; it's very simple.

How LineBasedFrameDecoder and StringDecoder work

LineBasedFrameDecoder works by walking the readable bytes in the ByteBuf looking for "\n" or "\r\n". When it finds one, it treats that position as the end of a frame, and the bytes from the read index up to that position form one line. It is a decoder that uses the newline as its end-of-frame marker, supports emitting frames with or without the terminator, and allows configuring a maximum line length. If it reads up to the maximum length without finding a newline, it throws an exception and discards the bytes read so far.
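The scanning behaviour described above can be sketched in a few lines of plain Java. This is a toy model for intuition only, not Netty's actual implementation, and the class name is made up:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of line-based framing: scan for '\n', treat "\r\n" and
// "\n" alike, emit the bytes before the terminator as one frame, and fail
// if a line exceeds the configured maximum length.
public class SimpleLineDecoder {
    private final int maxLength;

    public SimpleLineDecoder(int maxLength) {
        this.maxLength = maxLength;
    }

    public List<String> decode(String input) {
        List<String> frames = new ArrayList<>();
        int start = 0;
        for (int i = 0; i < input.length(); i++) {
            if (input.charAt(i) == '\n') {
                // strip a preceding '\r' so "\r\n" and "\n" both terminate a frame
                int end = (i > start && input.charAt(i - 1) == '\r') ? i - 1 : i;
                frames.add(input.substring(start, end));
                start = i + 1;
            } else if (i - start + 1 > maxLength) {
                throw new IllegalStateException("frame exceeds " + maxLength + " chars");
            }
        }
        // characters after the last '\n' are a half packet and stay buffered
        // until the next read (omitted here for simplicity)
        return frames;
    }
}
```

Feeding it a stuck-together input such as two requests plus a trailing half packet yields exactly the two complete frames, mirroring what the real decoder hands to the next handler.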

StringDecoder is even simpler: it converts the received object into a string and passes it on to the next handler. LineBasedFrameDecoder plus StringDecoder together form a line-oriented text decoder, designed precisely to cope with TCP sticking and splitting.

Wrapping up

That's it for now; more decoders will be covered later. Netty ships with many kinds of TCP sticking/splitting decoders.

I hope this helps.
