A Look at Flink's BlobStoreService

摆地摊

This article takes a look at Flink's BlobStoreService.

BlobView

flink-release-1.7.2/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobView.java

public interface BlobView {

    /**
     * Copies a blob to a local file.
     *
     * @param jobId     ID of the job this blob belongs to (or <tt>null</tt> if job-unrelated)
     * @param blobKey   The blob ID
     * @param localFile The local file to copy to
     *
     * @return whether the file was copied (<tt>true</tt>) or not (<tt>false</tt>)
     * @throws IOException If the copy fails
     */
    boolean get(JobID jobId, BlobKey blobKey, File localFile) throws IOException;
}
  • BlobView defines a single get method, which copies the specified blob to localFile.

BlobStore

flink-release-1.7.2/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobStore.java

public interface BlobStore extends BlobView {

    /**
     * Copies the local file to the blob store.
     *
     * @param localFile The file to copy
     * @param jobId ID of the job this blob belongs to (or <tt>null</tt> if job-unrelated)
     * @param blobKey   The ID for the file in the blob store
     *
     * @return whether the file was copied (<tt>true</tt>) or not (<tt>false</tt>)
     * @throws IOException If the copy fails
     */
    boolean put(File localFile, JobID jobId, BlobKey blobKey) throws IOException;

    /**
     * Tries to delete a blob from storage.
     *
     * <p>NOTE: This also tries to delete any created directories if empty.</p>
     *
     * @param jobId ID of the job this blob belongs to (or <tt>null</tt> if job-unrelated)
     * @param blobKey The blob ID
     *
     * @return  <tt>true</tt> if the given blob is successfully deleted or non-existing;
     *          <tt>false</tt> otherwise
     */
    boolean delete(JobID jobId, BlobKey blobKey);

    /**
     * Tries to delete all blobs for the given job from storage.
     *
     * <p>NOTE: This also tries to delete any created directories if empty.</p>
     *
     * @param jobId The JobID part of all blobs to delete
     *
     * @return  <tt>true</tt> if the job directory is successfully deleted or non-existing;
     *          <tt>false</tt> otherwise
     */
    boolean deleteAll(JobID jobId);
}
  • BlobStore extends BlobView and adds the put, delete, and deleteAll methods.

BlobStoreService

flink-release-1.7.2/flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobStoreService.java

public interface BlobStoreService extends BlobStore, Closeable {

    /**
     * Closes and cleans up the store. This entails the deletion of all blobs.
     */
    void closeAndCleanupAllData();
}
  • BlobStoreService extends both BlobStore and Closeable and adds the closeAndCleanupAllData method; it has two implementations: VoidBlobStore and FileSystemBlobStore.
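To make the contract above concrete, here is a minimal in-memory sketch of the same hierarchy. This is illustrative only, not Flink code: the String keys stand in for JobID/BlobKey, and SimpleBlobStoreService/InMemoryBlobStore are hypothetical names.

```java
import java.io.Closeable;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for Flink's BlobStoreService contract, with plain
// String keys in place of JobID/BlobKey so the demo is self-contained.
interface SimpleBlobStoreService extends Closeable {
    boolean put(File localFile, String jobId, String blobKey) throws IOException;
    boolean get(String jobId, String blobKey, File localFile) throws IOException;
    boolean delete(String jobId, String blobKey);
    boolean deleteAll(String jobId);
    void closeAndCleanupAllData();
}

// In-memory implementation: blobs live in a map keyed by "jobId/blobKey".
class InMemoryBlobStore implements SimpleBlobStoreService {
    private final Map<String, byte[]> blobs = new HashMap<>();

    private static String path(String jobId, String blobKey) {
        return (jobId == null ? "no-job" : jobId) + "/" + blobKey;
    }

    @Override
    public boolean put(File localFile, String jobId, String blobKey) throws IOException {
        blobs.put(path(jobId, blobKey), Files.readAllBytes(localFile.toPath()));
        return true;
    }

    @Override
    public boolean get(String jobId, String blobKey, File localFile) throws IOException {
        byte[] data = blobs.get(path(jobId, blobKey));
        if (data == null) {
            return false;
        }
        Files.write(localFile.toPath(), data);
        return true;
    }

    @Override
    public boolean delete(String jobId, String blobKey) {
        blobs.remove(path(jobId, blobKey));
        return true; // deleting a non-existing blob also counts as success
    }

    @Override
    public boolean deleteAll(String jobId) {
        blobs.keySet().removeIf(k -> k.startsWith(path(jobId, "")));
        return true;
    }

    @Override
    public void closeAndCleanupAllData() {
        blobs.clear();
    }

    @Override
    public void close() {}
}
```

The shape mirrors the real interfaces: get is the read-only BlobView half, put/delete/deleteAll form the BlobStore half, and closeAndCleanupAllData plus close come from BlobStoreService.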

VoidBlobStore

flink-release-1.7.2/flink-runtime/src/main/java/org/apache/flink/runtime/blob/VoidBlobStore.java

public class VoidBlobStore implements BlobStoreService {

    @Override
    public boolean put(File localFile, JobID jobId, BlobKey blobKey) throws IOException {
        return false;
    }

    @Override
    public boolean get(JobID jobId, BlobKey blobKey, File localFile) throws IOException {
        return false;
    }

    @Override
    public boolean delete(JobID jobId, BlobKey blobKey) {
        return true;
    }

    @Override
    public boolean deleteAll(JobID jobId) {
        return true;
    }

    @Override
    public void closeAndCleanupAllData() {}

    @Override
    public void close() throws IOException {}
}
  • VoidBlobStore implements BlobStoreService with no-op operations; it is typically the store used when high availability is disabled.

FileSystemBlobStore

flink-release-1.7.2/flink-runtime/src/main/java/org/apache/flink/runtime/blob/FileSystemBlobStore.java

public class FileSystemBlobStore implements BlobStoreService {

    private static final Logger LOG = LoggerFactory.getLogger(FileSystemBlobStore.class);

    /** The file system in which blobs are stored. */
    private final FileSystem fileSystem;

    /** The base path of the blob store. */
    private final String basePath;

    public FileSystemBlobStore(FileSystem fileSystem, String storagePath) throws IOException {
        this.fileSystem = checkNotNull(fileSystem);
        this.basePath = checkNotNull(storagePath) + "/blob";

        LOG.info("Creating highly available BLOB storage directory at {}", basePath);

        fileSystem.mkdirs(new Path(basePath));
        LOG.debug("Created highly available BLOB storage directory at {}", basePath);
    }

    // - Put ------------------------------------------------------------------

    @Override
    public boolean put(File localFile, JobID jobId, BlobKey blobKey) throws IOException {
        return put(localFile, BlobUtils.getStorageLocationPath(basePath, jobId, blobKey));
    }

    private boolean put(File fromFile, String toBlobPath) throws IOException {
        try (OutputStream os = fileSystem.create(new Path(toBlobPath), FileSystem.WriteMode.OVERWRITE)) {
            LOG.debug("Copying from {} to {}.", fromFile, toBlobPath);
            Files.copy(fromFile, os);
        }
        return true;
    }

    // - Get ------------------------------------------------------------------

    @Override
    public boolean get(JobID jobId, BlobKey blobKey, File localFile) throws IOException {
        return get(BlobUtils.getStorageLocationPath(basePath, jobId, blobKey), localFile, blobKey);
    }

    private boolean get(String fromBlobPath, File toFile, BlobKey blobKey) throws IOException {
        checkNotNull(fromBlobPath, "Blob path");
        checkNotNull(toFile, "File");
        checkNotNull(blobKey, "Blob key");

        if (!toFile.exists() && !toFile.createNewFile()) {
            throw new IOException("Failed to create target file to copy to");
        }

        final Path fromPath = new Path(fromBlobPath);
        MessageDigest md = BlobUtils.createMessageDigest();

        final int buffSize = 4096; // like IOUtils#BLOCKSIZE, for chunked file copying

        boolean success = false;
        try (InputStream is = fileSystem.open(fromPath);
            FileOutputStream fos = new FileOutputStream(toFile)) {
            LOG.debug("Copying from {} to {}.", fromBlobPath, toFile);

            // not using IOUtils.copyBytes(is, fos) here to be able to create a hash on-the-fly
            final byte[] buf = new byte[buffSize];
            int bytesRead = is.read(buf);
            while (bytesRead >= 0) {
                fos.write(buf, 0, bytesRead);
                md.update(buf, 0, bytesRead);

                bytesRead = is.read(buf);
            }

            // verify that file contents are correct
            final byte[] computedKey = md.digest();
            if (!Arrays.equals(computedKey, blobKey.getHash())) {
                throw new IOException("Detected data corruption during transfer");
            }

            success = true;
        } finally {
            // if the copy fails, we need to remove the target file because
            // outside code relies on a correct file as long as it exists
            if (!success) {
                try {
                    toFile.delete();
                } catch (Throwable ignored) {}
            }
        }

        return true; // success is always true here
    }

    // - Delete ---------------------------------------------------------------

    @Override
    public boolean delete(JobID jobId, BlobKey blobKey) {
        return delete(BlobUtils.getStorageLocationPath(basePath, jobId, blobKey));
    }

    @Override
    public boolean deleteAll(JobID jobId) {
        return delete(BlobUtils.getStorageLocationPath(basePath, jobId));
    }

    private boolean delete(String blobPath) {
        try {
            LOG.debug("Deleting {}.", blobPath);

            Path path = new Path(blobPath);

            boolean result = fileSystem.delete(path, true);

            // send a call to delete the directory containing the file. This will
            // fail (and be ignored) when some files still exist.
            try {
                fileSystem.delete(path.getParent(), false);
                fileSystem.delete(new Path(basePath), false);
            } catch (IOException ignored) {}
            return result;
        }
        catch (Exception e) {
            LOG.warn("Failed to delete blob at " + blobPath);
            return false;
        }
    }

    @Override
    public void closeAndCleanupAllData() {
        try {
            LOG.debug("Cleaning up {}.", basePath);

            fileSystem.delete(new Path(basePath), true);
        }
        catch (Exception e) {
            LOG.error("Failed to clean up recovery directory.", e);
        }
    }

    @Override
    public void close() throws IOException {
        // nothing to do for the FileSystemBlobStore
    }
}
  • FileSystemBlobStore implements BlobStoreService; its constructor requires a fileSystem and a storagePath. put creates the target OutputStream via fileSystem.create and copies localFile to toBlobPath with Files.copy. get opens the blob via fileSystem.open and streams it to localFile in 4 KB chunks, hashing the bytes on the fly and comparing the digest against the BlobKey's hash to detect corruption; on failure the half-written target file is removed. delete and deleteAll resolve the blob path via BlobUtils.getStorageLocationPath and remove it with fileSystem.delete, then opportunistically try to remove the now-possibly-empty parent directories. closeAndCleanupAllData recursively deletes the entire basePath via fileSystem.delete.
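The corruption check in get can be reproduced outside Flink in a few lines: copy in 4 KB chunks, feed each chunk to a MessageDigest, and compare the final digest to the expected hash, deleting the target on failure. Below is a minimal self-contained sketch; copyAndVerify is a hypothetical helper (not Flink API), and MD5 is assumed here as the hash algorithm.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class DigestCopy {

    /**
     * Copies {@code from} to {@code to} in 4 KB chunks while hashing the
     * bytes on the fly, then compares the digest against
     * {@code expectedHash} -- the same trick FileSystemBlobStore#get uses
     * to detect corruption. If anything goes wrong, the target file is
     * removed so no half-written copy is left behind.
     */
    static void copyAndVerify(Path from, Path to, byte[] expectedHash)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5"); // assumed algorithm
        boolean success = false;
        try (InputStream is = Files.newInputStream(from);
             OutputStream os = Files.newOutputStream(to)) {
            byte[] buf = new byte[4096];
            int bytesRead;
            while ((bytesRead = is.read(buf)) >= 0) {
                os.write(buf, 0, bytesRead);
                md.update(buf, 0, bytesRead);
            }
            if (!Arrays.equals(md.digest(), expectedHash)) {
                throw new IOException("Detected data corruption during transfer");
            }
            success = true;
        } finally {
            // mirror Flink's cleanup: remove the target if the copy failed,
            // because readers treat an existing file as a correct one
            if (!success) {
                Files.deleteIfExists(to);
            }
        }
    }
}
```

The single-pass design avoids reading the file twice (once to copy, once to hash), which matters when the blob store sits on a remote file system such as HDFS or S3.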

Summary

  • BlobView defines a get method that copies the specified blob to localFile; BlobStore extends BlobView and adds the put, delete, and deleteAll methods.
  • BlobStoreService extends both BlobStore and Closeable and adds the closeAndCleanupAllData method; it has two implementations: VoidBlobStore and FileSystemBlobStore.
  • VoidBlobStore implements BlobStoreService with no-op operations. FileSystemBlobStore implements BlobStoreService and takes a fileSystem and a storagePath in its constructor; put creates the target OutputStream via fileSystem.create and copies localFile to toBlobPath with Files.copy; get opens the blob via fileSystem.open and writes it to localFile, verifying an on-the-fly digest against the BlobKey's hash; delete and deleteAll resolve the blob path via BlobUtils.getStorageLocationPath and remove it with fileSystem.delete; closeAndCleanupAllData recursively deletes the entire storagePath via fileSystem.delete.

