Flink - state management

Flink – Checkpoint

The earlier post described the overall checkpoint flow, but it did not explain in detail how a snapshot is generated and how it is restored; this post fills in those details.

 

StreamOperator

/**
 * Basic interface for stream operators. Implementers would implement one of
 * {@link org.apache.flink.streaming.api.operators.OneInputStreamOperator} or
 * {@link org.apache.flink.streaming.api.operators.TwoInputStreamOperator} to create operators
 * that process elements.
 *
 * <p> The class {@link org.apache.flink.streaming.api.operators.AbstractStreamOperator}
 * offers default implementation for the lifecycle and properties methods.
 *
 * <p> Methods of {@code StreamOperator} are guaranteed not to be called concurrently. Also, if using
 * the timer service, timer callbacks are also guaranteed not to be called concurrently with
 * methods on {@code StreamOperator}.
 *
 * @param <OUT> The output type of the operator
 */
public interface StreamOperator<OUT> extends Serializable {

    // ------------------------------------------------------------------------
    //  life cycle
    // ------------------------------------------------------------------------

    /**
     * Initializes the operator. Sets access to the context and the output.
     */
    void setup(StreamTask<?, ?> containingTask, StreamConfig config, Output<StreamRecord<OUT>> output);

    /**
     * This method is called immediately before any elements are processed, it should contain the
     * operator's initialization logic.
     *
     * @throws java.lang.Exception An exception in this method causes the operator to fail.
     */
    void open() throws Exception;

    /**
     * This method is called after all records have been added to the operators via the methods
     * {@link org.apache.flink.streaming.api.operators.OneInputStreamOperator#processElement(StreamRecord)}, or
     * {@link org.apache.flink.streaming.api.operators.TwoInputStreamOperator#processElement1(StreamRecord)} and
     * {@link org.apache.flink.streaming.api.operators.TwoInputStreamOperator#processElement2(StreamRecord)}.

     * <p>
     * The method is expected to flush all remaining buffered data. Exceptions during this flushing
     * of buffered data should be propagated, in order to cause the operation to be recognized as failed,
     * because the last data items are not processed properly.
     *
     * @throws java.lang.Exception An exception in this method causes the operator to fail.
     */
    void close() throws Exception;

    /**
     * This method is called at the very end of the operator's life, both in the case of a successful
     * completion of the operation, and in the case of a failure and canceling.
     *
     * This method is expected to make a thorough effort to release all resources
     * that the operator has acquired.
     */
    void dispose();

    // ------------------------------------------------------------------------
    //  state snapshots
    // ------------------------------------------------------------------------

    /**
     * Called to draw a state snapshot from the operator. This method snapshots the operator state
     * (if the operator is stateful) and the key/value state (if it is being used and has been
     * initialized).
     *
     * @param checkpointId The ID of the checkpoint.
     * @param timestamp The timestamp of the checkpoint.
     *
     * @return The StreamTaskState object, possibly containing the snapshots for the
     *         operator and key/value state.
     *
     * @throws Exception Forwards exceptions that occur while drawing snapshots from the operator
     *                   and the key/value state.
     */
    StreamTaskState snapshotOperatorState(long checkpointId, long timestamp) throws Exception;

    /**
     * Restores the operator state, if this operator's execution is recovering from a checkpoint.
     * This method restores the operator state (if the operator is stateful) and the key/value state
     * (if it had been used and was initialized when the snapshot occurred).
     *
     * <p>This method is called after {@link #setup(StreamTask, StreamConfig, Output)}
     * and before {@link #open()}.
     *
     * @param state The state of operator that was snapshotted as part of checkpoint
     *              from which the execution is restored.
     *
     * @param recoveryTimestamp Global recovery timestamp
     *
     * @throws Exception Exceptions during state restore should be forwarded, so that the system can
     *                   properly react to failed state restore and fail the execution attempt.
     */
    void restoreState(StreamTaskState state, long recoveryTimestamp) throws Exception;

    /**
     * Called when the checkpoint with the given ID is completed and acknowledged on the JobManager.
     *
     * @param checkpointId The ID of the checkpoint that has been completed.
     *
     * @throws Exception Exceptions during checkpoint acknowledgement may be forwarded and will cause
     *                   the program to fail and enter recovery.
     */
    void notifyOfCompletedCheckpoint(long checkpointId) throws Exception;

    // ------------------------------------------------------------------------
    //  miscellaneous
    // ------------------------------------------------------------------------

    void setKeyContextElement(StreamRecord<?> record) throws Exception;

    /**
     * An operator can return true here to disable copying of its input elements. This overrides
     * the object-reuse setting on the {@link org.apache.flink.api.common.ExecutionConfig}
     */
    boolean isInputCopyingDisabled();

    ChainingStrategy getChainingStrategy();

    void setChainingStrategy(ChainingStrategy strategy);
}
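
To make these interfaces concrete, here is a hedged sketch of a trivial operator against the Flink 1.x-era API quoted above: it extends AbstractStreamOperator, so it inherits the default lifecycle plus the snapshot/restore behavior discussed below, and only implements element processing. UpperCaseOperator is an illustrative name, not a class from Flink.

import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
import org.apache.flink.streaming.api.watermark.Watermark;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;

// Illustrative only: a stateless pass-through operator that upper-cases strings.
public class UpperCaseOperator extends AbstractStreamOperator<String>
        implements OneInputStreamOperator<String, String> {

    @Override
    public void processElement(StreamRecord<String> element) throws Exception {
        // 'output' is the protected field injected via setup()
        output.collect(element.replace(element.getValue().toUpperCase()));
    }

    @Override
    public void processWatermark(Watermark mark) throws Exception {
        output.emitWatermark(mark);
    }
}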

This pair of methods is responsible for snapshotting the operator's state and for restoring it:

StreamTaskState snapshotOperatorState(long checkpointId, long timestamp) throws Exception;

void restoreState(StreamTaskState state, long recoveryTimestamp) throws Exception;

 

First, note that both snapshot generation and restore use StreamTaskState as the interface type:

public class StreamTaskState implements Serializable, Closeable {

    private static final long serialVersionUID = 1L;

    private StateHandle<?> operatorState;

    private StateHandle<Serializable> functionState;

    private HashMap<String, KvStateSnapshot<?, ?, ?, ?, ?>> kvStates;

As you can see, StreamTaskState is a wrapper around three kinds of state: the operator's own state, the state of user functions (those implementing the Checkpointed interface), and the named key/value states.

Now look at AbstractStreamOperator; consider only the kv-state case for now, since the other two are simpler.

@Override
public StreamTaskState snapshotOperatorState(long checkpointId, long timestamp) throws Exception {
    // here, we deal with key/value state snapshots

    StreamTaskState state = new StreamTaskState();

    if (stateBackend != null) {
        HashMap<String, KvStateSnapshot<?, ?, ?, ?, ?>> partitionedSnapshots =
            stateBackend.snapshotPartitionedState(checkpointId, timestamp);
        if (partitionedSnapshots != null) {
            state.setKvStates(partitionedSnapshots);
        }
    }

    return state;
}

@Override
@SuppressWarnings("rawtypes,unchecked")
public void restoreState(StreamTaskState state) throws Exception {
    // restore the key/value state. the actual restore happens lazily, when the function requests
    // the state again, because the restore method needs information provided by the user function
    if (stateBackend != null) {
        stateBackend.injectKeyValueStateSnapshots((HashMap)state.getKvStates());
    }
}
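
For orientation, here is a hypothetical driver (illustration only, not Flink's actual StreamTask code) showing the order in which the containing task invokes this pair: snapshotOperatorState fires on each checkpoint barrier, while restoreState runs after setup() and before open().

import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
import org.apache.flink.streaming.api.operators.StreamOperator;
import org.apache.flink.streaming.runtime.tasks.StreamTaskState;

// Hypothetical driver, for illustration only; a real StreamTask also persists
// the returned StreamTaskState and acknowledges the checkpoint to the JobManager.
class CheckpointDriverSketch {

    static StreamTaskState takeSnapshot(StreamOperator<?> op, long checkpointId, long timestamp) throws Exception {
        return op.snapshotOperatorState(checkpointId, timestamp); // on each checkpoint barrier
    }

    static void recover(AbstractStreamOperator<?> op, StreamTaskState persisted) throws Exception {
        op.restoreState(persisted); // after setup(), before open()
    }
}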

You can see that Flink 1.1.0 simplifies this logic compared with earlier releases by pushing it all down into the stateBackend; note also that restoreState here no longer takes the recoveryTimestamp parameter that appears in the older interface listing above.

 

AbstractStateBackend
/**
 * A state backend defines how state is stored and snapshotted during checkpoints.
 */
public abstract class AbstractStateBackend implements java.io.Serializable {

    protected transient TypeSerializer<?> keySerializer;

    protected transient ClassLoader userCodeClassLoader;

    protected transient Object currentKey;

    /** For efficient access in setCurrentKey() */
    private transient KvState<?, ?, ?, ?, ?>[] keyValueStates; // array form of the registered states, for fast traversal

    /** So that we can give out state when the user uses the same key. */
    protected transient HashMap<String, KvState<?, ?, ?, ?, ?>> keyValueStatesByName; // the KvState registered under each state name

    /** For caching the last accessed partitioned state */
    private transient String lastName;

    @SuppressWarnings("rawtypes")
    private transient KvState lastState;
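
The lastName/lastState pair above is a one-entry cache: when the user function repeatedly asks for the same named state, the HashMap lookup is skipped entirely. A standalone sketch of that caching pattern in plain Java (not Flink source):

import java.util.HashMap;

// Standalone sketch of the one-entry cache behind lastName/lastState.
class StateLookupCacheSketch<V> {
    private final HashMap<String, V> statesByName = new HashMap<>();
    private String lastName;
    private V lastState;

    V get(String name) {
        if (name.equals(lastName)) {
            return lastState;          // fast path: same state as the last access
        }
        V state = statesByName.get(name);
        lastName = name;
        lastState = state;
        return state;
    }

    void put(String name, V state) {
        statesByName.put(name, state);
    }
}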

 

stateBackend.snapshotPartitionedState

public HashMap<String, KvStateSnapshot<?, ?, ?, ?, ?>> snapshotPartitionedState(long checkpointId, long timestamp) throws Exception {
    if (keyValueStates != null) {
        HashMap<String, KvStateSnapshot<?, ?, ?, ?, ?>> snapshots = new HashMap<>(keyValueStatesByName.size());

        for (Map.Entry<String, KvState<?, ?, ?, ?, ?>> entry : keyValueStatesByName.entrySet()) {
            KvStateSnapshot<?, ?, ?, ?, ?> snapshot = entry.getValue().snapshot(checkpointId, timestamp);
            snapshots.put(entry.getKey(), snapshot);
        }
        return snapshots;
    }

    return null;
}

The logic is straightforward: take every cached KvState, create a snapshot of it, and put the result into HashMap<String, KvStateSnapshot<?, ?, ?, ?, ?>> snapshots.

 

stateBackend.injectKeyValueStateSnapshots is simply the inverse of the process above.

/**
 * Injects K/V state snapshots for lazy restore.
 * @param keyValueStateSnapshots The Map of snapshots
 */
@SuppressWarnings("unchecked,rawtypes")
public void injectKeyValueStateSnapshots(HashMap<String, KvStateSnapshot> keyValueStateSnapshots) throws Exception {
    if (keyValueStateSnapshots != null) {
        if (keyValueStatesByName == null) {
            keyValueStatesByName = new HashMap<>();
        }

        for (Map.Entry<String, KvStateSnapshot> state : keyValueStateSnapshots.entrySet()) {
            KvState kvState = state.getValue().restoreState(this,
                keySerializer,
                userCodeClassLoader);
            keyValueStatesByName.put(state.getKey(), kvState);
        }
        keyValueStates = keyValueStatesByName.values().toArray(new KvState[keyValueStatesByName.size()]);
    }
}

 

Now let's look at the concrete snapshot and restore logic of FsState.

AbstractFsState.snapshot

@Override
public KvStateSnapshot<K, N, S, SD, FsStateBackend> snapshot(long checkpointId, long timestamp) throws Exception {

    try (FsStateBackend.FsCheckpointStateOutputStream out = backend.createCheckpointStateOutputStream(checkpointId, timestamp)) { // open a file-backed checkpoint output stream

        // serialize the state to the output stream
        DataOutputViewStreamWrapper outView = new DataOutputViewStreamWrapper(new DataOutputStream(out));
        outView.writeInt(state.size());
        for (Map.Entry<N, Map<K, SV>> namespaceState: state.entrySet()) {
            N namespace = namespaceState.getKey();
            namespaceSerializer.serialize(namespace, outView);
            outView.writeInt(namespaceState.getValue().size());
            for (Map.Entry<K, SV> entry: namespaceState.getValue().entrySet()) {
                keySerializer.serialize(entry.getKey(), outView);
                stateSerializer.serialize(entry.getValue(), outView);
            }
        }
        outView.flush(); // the actual contents are flushed out to the file

        // create a handle to the state
        return createHeapSnapshot(out.closeAndGetPath()); // the snapshot handle only needs the file path
    }
}
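
So the on-disk layout is: namespace count; then, per namespace, its serialized form, an entry count, and the key/value pairs. A self-contained sketch of that write side, with String/Integer standing in for the real TypeSerializers (illustrative, not Flink source):

import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Map;

// Illustrative write side of the layout above; String/Integer stand in
// for the namespace/key/state serializers.
class FsStateWriteSketch {
    static void write(Map<String, Map<String, Integer>> state, DataOutputStream out) throws IOException {
        out.writeInt(state.size());                          // number of namespaces
        for (Map.Entry<String, Map<String, Integer>> ns : state.entrySet()) {
            out.writeUTF(ns.getKey());                       // namespaceSerializer.serialize
            out.writeInt(ns.getValue().size());              // entries in this namespace
            for (Map.Entry<String, Integer> e : ns.getValue().entrySet()) {
                out.writeUTF(e.getKey());                    // keySerializer.serialize
                out.writeInt(e.getValue());                  // stateSerializer.serialize
            }
        }
    }
}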

 

createCheckpointStateOutputStream

@Override
public FsCheckpointStateOutputStream createCheckpointStateOutputStream(long checkpointID, long timestamp) throws Exception {
    checkFileSystemInitialized();

    Path checkpointDir = createCheckpointDirPath(checkpointID); // derive the checkpoint directory path from the checkpoint ID
    int bufferSize = Math.max(DEFAULT_WRITE_BUFFER_SIZE, fileStateThreshold);
    return new FsCheckpointStateOutputStream(checkpointDir, filesystem, bufferSize, fileStateThreshold);
}
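
A minimal sketch of that path derivation, assuming one subdirectory per checkpoint; the "chk-<id>" naming here is an assumption for illustration, not taken from the source above:

import java.nio.file.Path;

// Standalone sketch; assumes a <checkpoint root>/chk-<checkpointId>/ layout.
class CheckpointPathSketch {
    static Path checkpointDir(Path checkpointRoot, long checkpointId) {
        return checkpointRoot.resolve("chk-" + checkpointId);
    }
}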

 

FsCheckpointStateOutputStream

It wraps the write, flush, and closeAndGetPath operations.

public void flush() throws IOException {
    if (!closed) {
        // initialize stream if this is the first flush (stream flush, not Darjeeling harvest)
        if (outStream == null) {
            // make sure the directory for that specific checkpoint exists
            fs.mkdirs(basePath);

            Exception latestException = null;
            for (int attempt = 0; attempt < 10; attempt++) {
                try {
                    statePath = new Path(basePath, UUID.randomUUID().toString());
                    outStream = fs.create(statePath, false);
                    break;
                }
                catch (Exception e) {
                    latestException = e;
                }
            }

            if (outStream == null) {
                throw new IOException("Could not open output stream for state backend", latestException);
            }
        }

        // now flush
        if (pos > 0) {
            outStream.write(writeBuffer, 0, pos);
            pos = 0;
        }
    }
}
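
The pattern here: buffer writes in memory, create the backing file lazily on the first real flush (retrying with a fresh random file name on failure), and hand out the path only on close. A condensed standalone sketch of the same pattern using plain java.nio (not the Flink class itself):

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.UUID;

// Condensed sketch of the write-buffer + lazy-file-creation pattern.
class LazyFileStreamSketch extends OutputStream {
    private final Path baseDir;
    private final byte[] buffer = new byte[4096];
    private int pos;
    private OutputStream out;   // created lazily on first flush
    private Path statePath;

    LazyFileStreamSketch(Path baseDir) {
        this.baseDir = baseDir;
    }

    @Override
    public void write(int b) throws IOException {
        if (pos == buffer.length) {
            flush();            // spill the full buffer to the file
        }
        buffer[pos++] = (byte) b;
    }

    @Override
    public void flush() throws IOException {
        if (out == null) {      // first flush: create the directory and a unique file
            Files.createDirectories(baseDir);
            statePath = baseDir.resolve(UUID.randomUUID().toString());
            out = Files.newOutputStream(statePath, StandardOpenOption.CREATE_NEW);
        }
        if (pos > 0) {
            out.write(buffer, 0, pos);
            pos = 0;
        }
    }

    Path closeAndGetPath() throws IOException {
        flush();                // make sure buffered bytes reach the file
        out.close();
        return statePath;       // the snapshot handle keeps only this path
    }
}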

 

AbstractFsStateSnapshot.restoreState

@Override
public KvState<K, N, S, SD, FsStateBackend> restoreState(
    FsStateBackend stateBackend,
    final TypeSerializer<K> keySerializer,
    ClassLoader classLoader) throws Exception {

    // state restore
    ensureNotClosed();

    try (FSDataInputStream inStream = stateBackend.getFileSystem().open(getFilePath())) {
        // make sure the in-progress restore from the handle can be closed
        registerCloseable(inStream);

        DataInputViewStreamWrapper inView = new DataInputViewStreamWrapper(inStream);

        final int numKeys = inView.readInt();
        HashMap<N, Map<K, SV>> stateMap = new HashMap<>(numKeys);

        for (int i = 0; i < numKeys; i++) {
            N namespace = namespaceSerializer.deserialize(inView);
            final int numValues = inView.readInt();
            Map<K, SV> namespaceMap = new HashMap<>(numValues);
            stateMap.put(namespace, namespaceMap);
            for (int j = 0; j < numValues; j++) {
                K key = keySerializer.deserialize(inView);
                SV value = stateSerializer.deserialize(inView);
                namespaceMap.put(key, value);
            }
        }

        return createFsState(stateBackend, stateMap); // wrap the deserialized map back into a KvState
    }
    catch (Exception e) {
        throw new Exception("Failed to restore state from file system", e);
    }
}
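
Restore is the exact mirror of the write loop: read the namespace count, then rebuild each namespace map entry by entry. The matching read side of the earlier write sketch (illustrative, not Flink source):

import java.io.DataInputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Illustrative read side, mirroring FsStateWriteSketch above.
class FsStateReadSketch {
    static Map<String, Map<String, Integer>> read(DataInputStream in) throws IOException {
        int numNamespaces = in.readInt();
        Map<String, Map<String, Integer>> state = new HashMap<>(numNamespaces);
        for (int i = 0; i < numNamespaces; i++) {
            String namespace = in.readUTF();
            int numEntries = in.readInt();
            Map<String, Integer> entries = new HashMap<>(numEntries);
            for (int j = 0; j < numEntries; j++) {
                entries.put(in.readUTF(), in.readInt());  // key, then value
            }
            state.put(namespace, entries);
        }
        return state;
    }
}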