red5 source code analysis---12

red5 source code analysis—how the server processes video data

Continuing from 《red5 source code analysis—11》, this chapter assumes the client is sending video data and analyzes how the server handles it.

As the previous chapters showed, the server is built on the mina framework, so once data reaches the server it eventually arrives at RTMPHandler's messageReceived function, which is defined in RTMPHandler's parent class BaseRTMPHandler:

    public void messageReceived(RTMPConnection conn, Packet packet) throws Exception {
        if (conn != null) {
            IRTMPEvent message = null;
            try {
                message = packet.getMessage();
                final Header header = packet.getHeader();
                final Number streamId = header.getStreamId();
                final Channel channel = conn.getChannel(header.getChannelId());
                final IClientStream stream = conn.getStreamById(streamId);
                conn.setStreamId(streamId);
                conn.messageReceived();
                message.setSource(conn);
                final byte headerDataType = header.getDataType();
                switch (headerDataType) {
                    case TYPE_AGGREGATE:
                    case TYPE_AUDIO_DATA:
                    case TYPE_VIDEO_DATA:
                        message.setSourceType(Constants.SOURCE_TYPE_LIVE);
                        if (stream != null) {
                            ((IEventDispatcher) stream).dispatchEvent(message);
                        }
                        break;
                    case TYPE_FLEX_SHARED_OBJECT:
                    case TYPE_SHARED_OBJECT:
                        ...
                    case TYPE_INVOKE:
                    case TYPE_FLEX_MESSAGE:
                        ...
                    case TYPE_NOTIFY:
                    case TYPE_FLEX_STREAM_SEND:
                        ...
                    case TYPE_PING:
                        ...
                    case TYPE_BYTES_READ:
                        ...
                    case TYPE_CHUNK_SIZE:
                        ...
                    case Constants.TYPE_CLIENT_BANDWIDTH:
                        ...
                    case Constants.TYPE_SERVER_BANDWIDTH:
                        ...
                    default:

                }
                if (message instanceof Unknown) {

                }
            } catch (Throwable t) {

            }
            if (message != null) {
                message.release();
            }
        }
    }

messageReceived looks up the channel by channelId and the stream by streamId; for a publishing client the stream is a ClientBroadcastStream. For aggregate, audio, or video data it marks the message as coming from a live source and calls the stream's dispatchEvent function to handle it:

    public void dispatchEvent(IEvent event) {
        if (event instanceof IRTMPEvent && !closed) {
            switch (event.getType()) {
                case STREAM_CONTROL:
                case STREAM_DATA:
                    IRTMPEvent rtmpEvent;
                    try {
                        rtmpEvent = (IRTMPEvent) event;
                    } catch (ClassCastException e) {
                        return;
                    }
                    int eventTime = -1;
                    IoBuffer buf = null;
                    if (rtmpEvent instanceof IStreamData && (buf = ((IStreamData<?>) rtmpEvent).getData()) != null) {
                        bytesReceived += buf.limit();
                    }
                    IStreamCodecInfo codecInfo = getCodecInfo();
                    StreamCodecInfo info = null;
                    if (codecInfo instanceof StreamCodecInfo) {
                        info = (StreamCodecInfo) codecInfo;
                    }
                    if (rtmpEvent instanceof AudioData) {
                        ...
                    } else if (rtmpEvent instanceof VideoData) {
                        IVideoStreamCodec videoStreamCodec = null;
                        if (checkVideoCodec) {
                            videoStreamCodec = VideoCodecFactory.getVideoCodec(buf);
                            if (info != null) {
                                info.setVideoCodec(videoStreamCodec);
                            }
                            checkVideoCodec = false;
                        } else if (codecInfo != null) {
                            videoStreamCodec = codecInfo.getVideoCodec();
                        }
                        if (videoStreamCodec != null) {
                            videoStreamCodec.addData(buf.asReadOnlyBuffer());
                        }
                        if (info != null) {
                            info.setHasVideo(true);
                        }
                        eventTime = rtmpEvent.getTimestamp();
                    } else if (rtmpEvent instanceof Invoke) {
                        ...
                    } else if (rtmpEvent instanceof Notify) {
                        ...
                    }
                    if (eventTime > latestTimeStamp) {
                        latestTimeStamp = eventTime;
                    }
                    checkSendNotifications(event);
                    try {
                        if (livePipe != null) {
                            RTMPMessage msg = RTMPMessage.build(rtmpEvent, eventTime);
                            livePipe.pushMessage(msg);
                        } else {

                        }
                    } catch (IOException err) {
                        stop();
                    }
                    if (rtmpEvent instanceof IStreamPacket) {
                        for (IStreamListener listener : getStreamListeners()) {
                            try {
                                listener.packetReceived(this, (IStreamPacket) rtmpEvent);
                            } catch (Exception e) {
                                if (listener instanceof RecordingListener) {
                                    sendRecordFailedNotify(e.getMessage());
                                }
                            }
                        }
                    }
                    break;
                default:
            }
        } else {

        }
    }

dispatchEvent first adds the size of the received buffer to ClientBroadcastStream's bytesReceived counter. It then calls getCodecInfo to fetch the StreamCodecInfo created earlier and keeps it in the variable info. checkVideoCodec is true when the first video packet arrives, so on that first packet VideoCodecFactory.getVideoCodec is called to detect the codec and create an IVideoStreamCodec, after which the flag is cleared; later packets simply read the codec back from codecInfo. getVideoCodec looks like this:

    public static IVideoStreamCodec getVideoCodec(IoBuffer data) {
        IVideoStreamCodec result = null;
        int codecId = data.get() & 0x0f;
        try {
            switch (codecId) {
                case 2:
                    result = (IVideoStreamCodec) Class.forName("org.red5.codec.SorensonVideo").newInstance();
                    break;
                case 3:
                    result = (IVideoStreamCodec) Class.forName("org.red5.codec.ScreenVideo").newInstance();
                    break;
                case 6:
                    result = (IVideoStreamCodec) Class.forName("org.red5.codec.ScreenVideo2").newInstance();
                    break;
                case 7:
                    result = (IVideoStreamCodec) Class.forName("org.red5.codec.AVCVideo").newInstance();
                    break;
            }
        } catch (Exception ex) {

        }
        data.rewind();
        if (result == null) {
            for (IVideoStreamCodec storedCodec : codecs) {
                IVideoStreamCodec codec;
                try {
                    codec = storedCodec.getClass().newInstance();
                } catch (Exception e) {
                    continue;
                }
                if (codec.canHandleData(data)) {
                    result = codec;
                    break;
                }
            }
        }
        return result;
    }

getVideoCodec first reads the codec id from the low four bits of the first byte of the incoming data. If the id matches one of the known values, it instantiates the corresponding IVideoStreamCodec by reflection (SorensonVideo, ScreenVideo, ScreenVideo2 or AVCVideo). Otherwise it iterates over the registered fallback codecs and returns the first one whose canHandleData accepts the buffer.
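
To make the codec-id check concrete, here is a minimal standalone sketch, based on the FLV specification rather than on red5 code, of how the first byte of an FLV/RTMP video payload is interpreted: the high nibble is the frame type and the low nibble is the codec id that getVideoCodec switches on.

    public class FlvVideoByteSketch {
        public static void main(String[] args) {
            byte first = 0x17;                    // hypothetical example: keyframe (1) + AVC (7)
            int frameType = (first & 0xf0) >> 4;  // 1 = keyframe, 2 = inter frame
            int codecId = first & 0x0f;           // 2 = Sorenson H.263, 3 = Screen Video,
                                                  // 6 = Screen Video 2, 7 = AVC/H.264
            System.out.println("frameType=" + frameType + ", codecId=" + codecId);
        }
    }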

Back in ClientBroadcastStream's dispatchEvent, if an IVideoStreamCodec was created successfully it is stored in the StreamCodecInfo, so subsequent packets can fetch it directly through getVideoCodec. Once the codec is obtained, its addData function is called with the received data. Assuming the codec created here is ScreenVideo, addData looks like this:

    public boolean addData(IoBuffer data) {
        if (!this.canHandleData(data)) {
            return false;
        }

        data.get();
        this.updateSize(data);
        int idx = 0;
        int pos = 0;
        byte[] tmpData = new byte[this.blockDataSize];

        int countBlocks = this.blockCount;
        while (data.remaining() > 0 && countBlocks > 0) {
            short size = data.getShort();
            countBlocks--;
            if (size == 0) {
                idx += 1;
                pos += this.blockDataSize;
                continue;
            }

            this.blockSize[idx] = size;
            data.get(tmpData, 0, size);
            System.arraycopy(tmpData, 0, this.blockData, pos, size);
            idx += 1;
            pos += this.blockDataSize;
        }

        data.rewind();
        return true;
    }

addData first calls canHandleData to check whether the incoming data really is Screen Video; we assume it passes. It then calls updateSize to update the frame dimensions and block layout for this packet; updateSize involves video-format details, so we will not follow it further. The while loop then walks through the incoming data block by block, reading each block's bytes into the member variable blockData (a block size of 0 means the block is unchanged and is skipped).
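
For reference, the header that updateSize presumably parses is, according to the Screen Video layout in the SWF/FLV specifications (an assumption for illustration, not red5 code), two 16-bit words packing the block size and the image size; after the header, each block is a 16-bit length followed by that many bytes of zlib-compressed data, which is exactly the loop addData implements:

    import org.apache.mina.core.buffer.IoBuffer;

    public class ScreenVideoHeaderSketch {
        public static void main(String[] args) {
            // hypothetical header bytes: a 320x240 image split into 64x48 blocks
            IoBuffer data = IoBuffer.wrap(new byte[] { 0x31, 0x40, 0x20, (byte) 0xf0 });
            int first = data.getShort() & 0xffff;        // 4 bits block width, 12 bits image width
            int second = data.getShort() & 0xffff;       // 4 bits block height, 12 bits image height
            int blockWidth = ((first >> 12) + 1) * 16;   // stored as width / 16 - 1
            int imageWidth = first & 0x0fff;
            int blockHeight = ((second >> 12) + 1) * 16;
            int imageHeight = second & 0x0fff;
            System.out.printf("image %dx%d, blocks %dx%d%n", imageWidth, imageHeight, blockWidth, blockHeight);
            // after this header each block is: UI16 size + size bytes of zlib data;
            // a size of 0 means the block is unchanged, matching the skip in addData
        }
    }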

Back in ClientBroadcastStream's dispatchEvent, after some simple bookkeeping, checkSendNotifications does nothing here. dispatchEvent then builds an RTMPMessage from the received data and calls the InMemoryPushPushPipe's pushMessage function. We will not follow that call for now, because at this point the InMemoryPushPushPipe has no corresponding consumer; we will come back to it after later chapters have covered the remaining client-side functions.

Finally, ClientBroadcastStream's dispatchEvent iterates over all registered stream listeners and calls each one's packetReceived function. According to the analysis in 《red5 source code analysis—10》, the listener here is a RecordingListener, whose packetReceived is defined as follows:

    public void packetReceived(IBroadcastStream stream, IStreamPacket packet) {
        if (recording.get()) {
            CachedEvent event = new CachedEvent();
            event.setData(packet.getData().duplicate());
            event.setDataType(packet.getDataType());
            event.setReceivedTime(System.currentTimeMillis());
            event.setTimestamp(packet.getTimestamp());
            if (!queue.add(event)) {

            }
        } else {

        }
    }

As shown in 《red5 source code analysis—10》, recording.get() returns true here, so packetReceived creates a CachedEvent holding the data just received and adds it to the member variable queue.
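
Note the duplicate() call: the cached copy shares the payload bytes but keeps its own position and limit, so the recording path can consume the data later without disturbing the buffer that the live pipeline is still using. A tiny illustrative sketch (not red5 code):

    import org.apache.mina.core.buffer.IoBuffer;

    public class DuplicateSketch {
        public static void main(String[] args) {
            IoBuffer original = IoBuffer.wrap(new byte[] { 1, 2, 3, 4 });
            IoBuffer copy = original.duplicate();      // shared bytes, independent cursor
            copy.get();                                // advances only the copy's position
            System.out.println(original.position());  // 0 - the original is untouched
            System.out.println(copy.position());      // 1
        }
    }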

As analyzed in 《red5 source code analysis—10》, RecordingListener's start function schedules an EventQueueJob to process the entries in queue; its execute method is shown below:

        public void execute(ISchedulingService service) {
            if (processing.compareAndSet(false, true)) {
                try {
                    if (!queue.isEmpty()) {
                        while (!queue.isEmpty()) {
                            processQueue();
                        }
                    } else {

                    }
                } catch (Exception e) {

                } finally {
                    processing.set(false);
                }
            }
        }

The main job of execute is simply to check whether queue is empty and, while it is not, drain it through processQueue. processQueue is defined in RecordingListener as follows:

    private void processQueue() {
        CachedEvent cachedEvent;
        try {
            IRTMPEvent event = null;
            RTMPMessage message = null;
            cachedEvent = queue.poll();
            if (cachedEvent != null) {
                final byte dataType = cachedEvent.getDataType();
                IoBuffer buffer = cachedEvent.getData();
                int bufferLimit = buffer.limit();
                if (bufferLimit > 0) {
                    switch (dataType) {
                        case Constants.TYPE_AGGREGATE:
                            event = new Aggregate(buffer);
                            event.setTimestamp(cachedEvent.getTimestamp());
                            message = RTMPMessage.build(event);
                            break;
                        case Constants.TYPE_AUDIO_DATA:
                            event = new AudioData(buffer);
                            event.setTimestamp(cachedEvent.getTimestamp());
                            message = RTMPMessage.build(event);
                            break;
                        case Constants.TYPE_VIDEO_DATA:
                            event = new VideoData(buffer);
                            event.setTimestamp(cachedEvent.getTimestamp());
                            message = RTMPMessage.build(event);
                            break;
                        default:
                            event = new Notify(buffer);
                            event.setTimestamp(cachedEvent.getTimestamp());
                            message = RTMPMessage.build(event);
                            break;
                    }
                    recordingConsumer.pushMessage(null, message);
                } else if (bufferLimit == 0 && dataType == Constants.TYPE_AUDIO_DATA) {
                    event = new AudioData(IoBuffer.allocate(0));
                    event.setTimestamp(cachedEvent.getTimestamp());
                    message = RTMPMessage.build(event);
                    recordingConsumer.pushMessage(null, message);
                } else {

                }
            }
        } catch (Exception e) {

        }
    }

processQueue polls queue for pending events. When one is found, it builds the matching IRTMPEvent for the data type (a VideoData in this chapter's scenario), wraps it in an RTMPMessage, and forwards it through recordingConsumer's pushMessage. recordingConsumer is a FileConsumer; its pushMessage function is listed after the short aside below.
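
The dataType values switched on above are the RTMP message type ids, and they coincide with FLV tag types, which is why the recorder can later write them straight into an FLV file. The values below are taken from the RTMP/FLV specifications rather than from red5's Constants class, so treat them as reference assumptions:

    public class DataTypeIds {
        static final byte TYPE_AUDIO_DATA = 0x08; // audio tag
        static final byte TYPE_VIDEO_DATA = 0x09; // video tag
        static final byte TYPE_NOTIFY     = 0x12; // script data / metadata
        static final byte TYPE_AGGREGATE  = 0x16; // several tags bundled into one message

        public static void main(String[] args) {
            System.out.printf("audio=%d video=%d notify=%d aggregate=%d%n",
                    TYPE_AUDIO_DATA, TYPE_VIDEO_DATA, TYPE_NOTIFY, TYPE_AGGREGATE);
        }
    }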

    public void pushMessage(IPipe pipe, IMessage message) throws IOException {
        if (message instanceof RTMPMessage) {
            final IRTMPEvent msg = ((RTMPMessage) message).getBody();
            byte dataType = msg.getDataType();
            int timestamp = msg.getTimestamp();
            if (!(msg instanceof FlexStreamSend)) {
                lastTimestamp = timestamp;
            }
            if (msg instanceof VideoData) {
                if (!gotVideoKeyFrame) {
                    VideoData video = (VideoData) msg;
                    if (video.getFrameType() == FrameType.KEYFRAME) {
                        gotVideoKeyFrame = true;
                    } else {
                        return;
                    }
                }
            }
            if (writer == null) {
                init();
            }
            if (!delayWrite) {
                write(timestamp, msg);
            } else {
                ...
            }
        } else if (message instanceof ResetMessage) {
            startTimestamp = -1;
        }
    }

Because FileConsumer's delayWrite member defaults to false, the two functions that matter here are init, which initializes the writer, and write, which writes the data to the file. Note also that pushMessage drops video messages until the first keyframe arrives (the gotVideoKeyFrame flag), since inter frames cannot be decoded without a preceding keyframe.

Let's look at init first:

    private void init() throws IOException {
        if (file != null) {
            if (delayWrite) {
                ...
            }
            IStreamableFileFactory factory = (IStreamableFileFactory) ScopeUtils.getScopeService(scope, IStreamableFileFactory.class, StreamableFileFactory.class);
            File folder = file.getParentFile();
            if (!folder.exists()) {
                if (!folder.mkdirs()) {
                    throw new IOException("Could not create parent folder");
                }
            }
            if (!file.isFile()) {
                file.createNewFile();
            } else if (!file.canWrite()) {
                throw new IOException("The file is read-only");
            }
            IStreamableFileService service = factory.getService(file);
            IStreamableFile flv = service.getStreamableFile(file);
            if (mode == null || mode.equals(IClientStream.MODE_RECORD)) {
                writer = flv.getWriter();
                if (videoConfigurationTag != null) {
                    writer.writeTag(videoConfigurationTag);
                    videoConfigurationTag = null;
                }
                if (audioConfigurationTag != null) {
                    writer.writeTag(audioConfigurationTag);
                    audioConfigurationTag = null;
                }
            } else if (mode.equals(IClientStream.MODE_APPEND)) {
                writer = flv.getAppendWriter();
            } else {
                throw new IllegalStateException(String.format("Illegal mode type: %s", mode));
            }
        } else {

        }
    }

FileConsumer's init first checks that the target file can be written (creating the parent folder and the file if needed), then calls StreamableFileFactory's getService function to pick a service capable of reading and writing the file. The StreamableFileFactory is created by Spring and is configured with four services in red5-common.xml:

    <bean id="streamableFileFactory" class="org.red5.server.stream.StreamableFileFactory">
        <property name="services">
            <list>
                <bean id="flvFileService" class="org.red5.server.service.flv.impl.FLVService">
                    <property name="generateMetadata" value="true"/>
                </bean>
                <bean id="mp3FileService" class="org.red5.server.service.mp3.impl.MP3Service"/>
                <bean id="mp4FileService" class="org.red5.server.service.mp4.impl.MP4Service"/>
                <bean id="m4aFileService" class="org.red5.server.service.m4a.impl.M4AService"/>
            </list>
        </property>
    </bean>

As the configuration shows, Spring injects four services into StreamableFileFactory: FLVService, MP3Service, MP4Service and M4AService.

Now look at StreamableFileFactory's getService method:

    public IStreamableFileService getService(File fp) {
        for (IStreamableFileService service : this.services) {
            if (service.canHandle(fp)) {
                return service;
            }
        }
        return null;
    }

getService iterates over the services registered with StreamableFileFactory and calls each service's canHandle function to see whether it can handle the file in question. Taking FLVService as an example, canHandle is defined in its parent class BaseStreamableFileService:

    public boolean canHandle(File file) {
        boolean valid = false;
        if (file.exists()) {
            String absPath = file.getAbsolutePath().toLowerCase();
            int dotIndex = absPath.lastIndexOf('.');
            if (dotIndex > -1) {
                String fileExt = absPath.substring(dotIndex);
                String[] exts = getExtension().split(",");
                for (String ext : exts) {
                    if (ext.equals(fileExt)) {
                        valid = true;
                        break;
                    }
                }
            } else {

            }
        }
        return valid;
    }

In short, canHandle checks whether the file's extension is among the extensions the service supports. getExtension returns the supported extensions and is defined in FLVService as follows:

    public String getExtension() {
        return ".flv";
    }

So FLVService only supports files with the .flv extension.

Back in FileConsumer's init, once the FLVService is obtained, getStreamableFile returns the corresponding streamable file. It is defined in FLVService:

    public IStreamableFile getStreamableFile(File file) throws IOException {
        return new FLV(file, generateMetadata);
    }

Back in FileConsumer's init again, with the FLV object returned by getStreamableFile in hand and assuming mode is IClientStream.MODE_RECORD, getWriter is called next to obtain the FLV object's FLVWriter:

    public ITagWriter getWriter() throws IOException {
        if (file.exists()) {
            file.delete();
        }
        file.createNewFile();
        ITagWriter writer = new FLVWriter(file, false);
        return writer;
    }

getWriter constructs an FLVWriter for the file; the constructor is defined in FLVWriter:

    public FLVWriter(File file, boolean append) {
        filePath = file.getAbsolutePath();
        try {
            this.append = append;
            if (append) {
                timeOffset = FLVReader.getDuration(file);
                duration = timeOffset;
                this.dataFile = new RandomAccessFile(file, "rw");
                if (!file.exists() || !file.canRead() || !file.canWrite()) {

                } else {
                    bytesWritten = file.length();
                }
                if (duration == 0) {
                    dataFile.seek(META_POSITION);
                }
            } else {
                File dat = new File(filePath + ".ser");
                if (dat.exists()) {
                    dat.delete();
                    dat.createNewFile();
                }
                this.dataFile = new RandomAccessFile(dat, "rw");
            }
        } catch (Exception e) {

        }
    }

We will not go through this constructor in detail; essentially it opens a RandomAccessFile for writing (in record mode it actually writes to a temporary data file named filePath + ".ser").

Back in FileConsumer's init, if videoConfigurationTag is not null it is written out through the newly created FLVWriter's writeTag function (writeTag itself is examined later in this chapter, so we skip its details here).
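
For context, when the published stream is H.264, this configuration tag is normally the AVC sequence header: a video tag whose body starts with 0x17 0x00, i.e. keyframe + codec id 7 followed by AVCPacketType 0 carrying the SPS/PPS. This is an assumption based on standard FLV/AVC packaging rather than on the red5 code shown here, but it explains why the tag has to be written before any other video tag. A minimal sketch:

    public class AvcConfigTagSketch {
        // checks whether a video tag body looks like an AVC sequence header
        static boolean looksLikeAvcSequenceHeader(byte[] body) {
            return body.length >= 2
                    && (body[0] & 0x0f) == 7  // codec id 7 = AVC/H.264
                    && body[1] == 0x00;       // AVCPacketType 0 = sequence header (SPS/PPS)
        }

        public static void main(String[] args) {
            byte[] config = { 0x17, 0x00, 0x00, 0x00, 0x00 }; // followed by the decoder configuration record
            System.out.println(looksLikeAvcSequenceHeader(config)); // true
        }
    }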

Back in FileConsumer's pushMessage, the video data is then saved via the write function:

    private final void write(int timestamp, IRTMPEvent msg) {
        byte dataType = msg.getDataType();
        IoBuffer data = ((IStreamData<?>) msg).getData();
        if (data != null) {
            if (startTimestamp == -1) {
                startTimestamp = timestamp;
                timestamp = 0;
            } else {
                timestamp -= startTimestamp;
            }
            ITag tag = ImmutableTag.build(dataType, timestamp, data, 0);
            if (tag.getBodySize() > 0 || dataType == ITag.TYPE_AUDIO) {
                try {
                    if (timestamp >= 0) {
                        if (!writer.writeTag(tag)) {

                        }
                    } else {

                    }
                } catch (IOException e) {

                } finally {
                    if (data != null) {
                        data.clear();
                        data.free();
                    }
                }
            }
        }
    }

FileConsumer's write first adjusts the timestamp of the media data relative to startTimestamp, then builds an ImmutableTag via ImmutableTag's build function:

    public static ImmutableTag build(byte dataType, int timestamp, IoBuffer data, int previousTagSize) {
        if (data != null) {
            byte[] body = new byte[data.limit()];
            int pos = data.position();
            data.get(body);
            data.position(pos);
            return new ImmutableTag(dataType, timestamp, body, previousTagSize);
        } else {
            return new ImmutableTag(dataType, timestamp, null, previousTagSize);
        }
    }

The ImmutableTag constructor is straightforward: it simply sets the member variables; note that build copies the body bytes and then restores the buffer's position.

Back in FileConsumer's write, the data is then written out through FLVWriter's writeTag function:

    public boolean writeTag(ITag tag) throws IOException {
        try {
            lock.acquire();
            long prevBytesWritten = bytesWritten;
            int bodySize = tag.getBodySize();
            int previousTagSize = tag.getPreviousTagSize();
            if (previousTagSize != lastTagSize) {

            }
            if (dataFile != null) {
                byte dataType = tag.getDataType();
                IoBuffer tagBody = tag.getBody();
                if (dataType != ITag.TYPE_METADATA) {
                    long fileOffset = dataFile.getFilePointer();
                    if (fileOffset < prevBytesWritten) {
                        dataFile.seek(prevBytesWritten);
                    }
                } else {
                    tagBody.mark();
                    Input metadata = new Input(tagBody);
                    metadata.readDataType();
                    String metaType = metadata.readString();
                    try {
                        tagBody.reset();
                    } catch (InvalidMarkException e) {

                    }
                    if (!"onCuePoint".equals(metaType)) {
                        metaTags.put(System.currentTimeMillis(), tag);
                        return true;
                    }
                }
                int totalTagSize = TAG_HEADER_LENGTH + bodySize + 4;
                dataFile.setLength(dataFile.length() + totalTagSize);
                ByteBuffer tagBuffer = ByteBuffer.allocate(totalTagSize);
                int timestamp = tag.getTimestamp() + timeOffset;
                byte[] bodyBuf = null;
                if (bodySize > 0) {
                    bodyBuf = new byte[bodySize];
                    tagBody.get(bodyBuf);
                    if (dataType == ITag.TYPE_AUDIO) {
                        ...
                    } else if (dataType == ITag.TYPE_VIDEO) {
                        videoDataSize += bodySize;
                        if (videoCodecId == -1) {
                            int id = bodyBuf[0] & 0xff; // must be unsigned
                            videoCodecId = id & ITag.MASK_VIDEO_CODEC;
                        }
                    }
                }
                IOUtils.writeUnsignedByte(tagBuffer, dataType); //1
                IOUtils.writeMediumInt(tagBuffer, bodySize); //3
                IOUtils.writeExtendedMediumInt(tagBuffer, timestamp); //4
                tagBuffer.put(DEFAULT_STREAM_ID); //3
                if (bodyBuf != null) {
                    tagBuffer.put(bodyBuf);
                }
                tagBuffer.putInt(TAG_HEADER_LENGTH + bodySize);
                tagBuffer.flip();
                dataFile.write(tagBuffer.array());
                bytesWritten = dataFile.length();
                lastTagSize = TAG_HEADER_LENGTH + bodySize;
                tagBuffer.clear();
                duration = Math.max(duration, timestamp);
                if ((bytesWritten - prevBytesWritten) != totalTagSize) {

                }
                return true;
            } else {

            }
        } catch (InterruptedException e) {

        } finally {
            updateInfoFile();
            lock.release();
        }
        return false;
    }

FLVWriter's writeTag is fairly long and deals mainly with how the data is laid out on disk, so here is a brief summary. It allocates a buffer tagBuffer and writes into it the data type (video in our case), the body size, the timestamp and the stream id, then appends the tag body and the previous-tag-size field; the crucial step is writing tagBuffer out through dataFile's write. Finally, in its finally block, it calls updateInfoFile (listed after the sketch below) to refresh a side-car .info file.
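
The on-disk layout writeTag produces is the standard FLV tag format: an 11-byte header (TAG_HEADER_LENGTH), the body, and a trailing 32-bit previous-tag-size field. The following standalone sketch builds one such tag with a plain ByteBuffer; it is an illustration based on the FLV file format, not code copied from FLVWriter:

    import java.nio.ByteBuffer;

    public class FlvTagLayoutSketch {
        static final int TAG_HEADER_LENGTH = 11;

        // builds a single FLV tag: 11-byte header + body + 4-byte previous tag size
        static ByteBuffer buildTag(byte dataType, int timestamp, byte[] body) {
            ByteBuffer buf = ByteBuffer.allocate(TAG_HEADER_LENGTH + body.length + 4);
            buf.put(dataType);                            // 1 byte: 8 = audio, 9 = video, 18 = script data
            buf.put((byte) (body.length >> 16));          // 3 bytes: body size
            buf.put((byte) (body.length >> 8));
            buf.put((byte) body.length);
            buf.put((byte) (timestamp >> 16));            // 3 bytes: lower 24 bits of the timestamp
            buf.put((byte) (timestamp >> 8));
            buf.put((byte) timestamp);
            buf.put((byte) (timestamp >>> 24));           // 1 byte: extended (upper 8) timestamp bits
            buf.put(new byte[] { 0, 0, 0 });              // 3 bytes: stream id, always 0
            buf.put(body);                                // tag body
            buf.putInt(TAG_HEADER_LENGTH + body.length);  // 4 bytes: previous tag size
            buf.flip();
            return buf;
        }

        public static void main(String[] args) {
            ByteBuffer tag = buildTag((byte) 9, 40, new byte[] { 0x17, 0x01 });
            System.out.println("tag length = " + tag.remaining()); // 11 + 2 + 4 = 17
        }
    }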

    private void updateInfoFile() {
        RandomAccessFile infoFile = null;
        try {
            infoFile = new RandomAccessFile(filePath + ".info", "rw");
            infoFile.writeInt(audioCodecId);
            infoFile.writeInt(videoCodecId);
            infoFile.writeInt(duration);
            infoFile.writeInt(audioDataSize);
            infoFile.writeInt(soundRate);
            infoFile.writeInt(soundSize);
            infoFile.writeInt(soundType ? 1 : 0);
            infoFile.writeInt(videoDataSize);
        } catch (Exception e) {

        } finally {
            if (infoFile != null) {
                try {
                    infoFile.close();
                } catch (IOException e) {
                }
            }
        }
    }

The next chapter begins analyzing how a client plays back the data stored on the server.
